The Internet of Things provides us with vast amounts of sensor data. However, the data by itself provides no value unless we can turn it into actionable, contextualized information. Big data and data-visualization techniques allow us to gain new insights through batch processing and offline analysis. Real-time sensor data analysis and decision-making are often done manually, but to make them scalable, they should be automated.
Artificial intelligence provides the framework and tools to go beyond trivial real-time decision and automation use cases for IoT. As Gartner describes it:
“In order to operate in real time, companies must leverage predefined analytical models, rather than ad-hoc models, and use current input data rather than just historical data.”
In that respect, we can see IoT evolving through three phases:
- Connecting devices and collecting sensor data
- Analyzing and visualizing the collected data
- Automating real-time decision-making
Deep learning has fascinating potential for solving various non-linear, multi-dimensional problems. One application to chaotic systems is my absolute favorite these days (even though it was achieved by reservoir computing, which is a little bit different from deep learning). Chaotic systems are notoriously hard to predict due to their inherent sensitivity to initial conditions:
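To make the reservoir-computing idea concrete, here is a minimal sketch of an echo state network (all hyperparameters invented for illustration) predicting the chaotic logistic map one step ahead. Only the linear readout is trained; the random recurrent reservoir stays fixed:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generate a chaotic series from the logistic map (r = 3.9).
r, n_steps = 3.9, 2000
x = np.empty(n_steps)
x[0] = 0.5
for t in range(n_steps - 1):
    x[t + 1] = r * x[t] * (1.0 - x[t])

# Echo state network: fixed random reservoir, trained linear readout.
n_res = 200
W_in = rng.uniform(-1.5, 1.5, size=n_res)   # input weights
b = rng.uniform(-1.0, 1.0, size=n_res)      # neuron biases
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

states = np.zeros((n_steps, n_res))
for t in range(1, n_steps):
    states[t] = np.tanh(W_in * x[t - 1] + b + W @ states[t - 1])

# Ridge-regress the readout: state at time t predicts x[t],
# having only seen the series up to x[t-1].
train = slice(100, 1500)  # discard the warm-up transient
S, y = states[train], x[train]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)

# One-step-ahead prediction on held-out data.
test = slice(1500, n_steps)
pred = states[test] @ W_out
rmse = np.sqrt(np.mean((pred - x[test]) ** 2))
print(f"one-step RMSE: {rmse:.4f}")
```

The reservoir acts as a fixed nonlinear feature expansion of the input history, which is why training reduces to plain ridge regression.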
Similarly, deep learning is superior to other methods on sensory discovery and enhancement problems: finding machine anomalies in historical records, natural language processing, or detecting cancer in MRI scans.
Deep learning is also well suited to reinforcement learning, where the problem space is well-defined and the environment is known and stable, such as a game of Go or chess:
Here are some great slides on this topic by Pieter Abbeel:
With the latest advances in deep neural networks, companies flock to deep learning for process automation, such as predictive maintenance. Over time, they naturally become more ambitious and try to apply deep learning to automation across all building blocks of IoT. But that's where the problems start appearing…
One of the biggest debates about deep learning is whether it is "deep" enough: can DL systems learn high-level abstractions about the world around them (what I would call inferring models from data)? Here, I don't mean abstractions such as figuring out that a group of pixels in an image is the eyes of a cat or the tail of an elephant, but rather the context in which these discovered objects interact with their environment.
For instance, if someone tells you that the only reason neural networks mislabeled sheep as birds or giraffes is a missing data set of similar pictures, they are missing the point, as shown in this funny post:
There is no better way to describe the current problem with deep learning in the domain of decision-making than this excerpt from a post about a team at the University of Pittsburgh Medical Center that used machine learning to predict whether pneumonia patients might develop severe complications:
“The goal was to send patients at low risk for complications to outpatient treatment, preserving hospital beds and the attention of medical staff. The team tried several different methods, including various kinds of neural networks, as well as software-generated decision trees that produced clear, human-readable rules.
The neural networks were right more often than any of the other methods. But when the researchers and doctors took a look at the human-readable rules, they noticed something disturbing: One of the rules instructed doctors to send home pneumonia patients who already had asthma, despite the fact that asthma sufferers are known to be extremely vulnerable to complications. The model did what it was told to do: Discover a true pattern in the data. The poor advice it produced was the result of a quirk in that data. It was hospital policy to send asthma sufferers with pneumonia to intensive care, and this policy worked so well that asthma sufferers almost never developed severe complications. Without the extra care that had shaped the hospital’s patient records, outcomes could have been dramatically different.”
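The quirk described above is easy to reproduce with a toy simulation (all numbers below are invented for illustration). When the treatment that suppresses the risk is absent from the dataset, any model, no matter how accurate, learns the inverted rule:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Simulated pneumonia admissions. Asthma raises the *true* risk of
# severe complications, but hospital policy sends every asthma patient
# to intensive care, and intensive care drastically lowers that risk.
asthma = rng.random(n) < 0.15
icu = asthma.copy()                        # the policy, hidden from the model
base_risk = np.where(asthma, 0.30, 0.10)   # true underlying risk
risk = np.where(icu, base_risk * 0.1, base_risk)
complication = rng.random(n) < risk

# A model trained only on (asthma, complication) pairs -- the ICU
# treatment is not a column in the dataset -- sees the confounded pattern.
rate_asthma = complication[asthma].mean()
rate_no_asthma = complication[~asthma].mean()
print(f"observed complication rate with asthma:    {rate_asthma:.3f}")
print(f"observed complication rate without asthma: {rate_no_asthma:.3f}")
# The data says asthma patients fare *better*, so a risk model would
# send them home: exactly the dangerous rule the Pittsburgh team found.
```

The pattern in the data is real; what is wrong is treating it as causal without the context that produced it.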
This is what we call the “explainability” problem, or as Michael Jordan puts it:
“We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness.”
There is extensive ongoing research into fixing this problem; a great lecture on the topic is "Bringing deep learning to higher-level cognition" by Yoshua Bengio.
In my next blog, I will discuss how Waylay solves this problem by combining deep learning with a Bayesian inference engine, so stay tuned!