Just a year ago, the Internet of Things (IoT) was all the buzz. Every analyst, vendor, and pundit was debating the massive potential of connecting not millions but billions of machines and devices to the Internet. After all, the idea that all machines will become smart, connected, and generate a lot of valuable data promises to revolutionize how we operate and service all those ‘things’. The IoT trend became so powerful that it spawned a slew of sub-trends, including Home Automation, Smart Cities, and Industry 4.0 - each with its own vision and ecosystem of vendors, experts, and conferences.
So why has the buzz faded? The obvious answer is that IoT was, just like many other over-hyped trends, ahead of its time. Adoption lags well behind the vendors’ narrative, and sometimes the technology isn’t yet doing what the marketing messages promise. A few POC projects that fall flat can quickly pour cold water on a hot trend. However, there is something more fundamental about the IoT problem, and it isn’t the complexity or the maturity of the technology.
The problem also doesn’t lie in a lack of awareness - in fact, most companies would kill for the level of buzz IoT has been getting as a category. IoT adoption is actually very rapid in the consumer world. Just count all the smart speakers, thermostats, switches, smoke detectors, doorbells, and cameras in your house.
The greatest challenge to IoT adoption in the industrial world has to do with analyzing the data. With today’s state of technology, it’s relatively easy to connect machines and collect petabytes of sensor data. Sure, most equipment out there doesn’t have any sensors or connectivity, but retrofitting it with smart electronics is not that difficult and is getting less and less expensive. The problem lies in those huge volumes of data. What do we do with it all?
This type of big data is rather expensive to store, manipulate, and analyze. The expectation for IoT applications is to provide real-time or near real-time analysis, which is not simple given the massive data volumes. Many companies need to spend real time designing their data management architecture and, in particular, decide what data should live in the cloud and what should be stored on-premises. Yes, I am a cloud believer, but not everything will happen in the cloud. The elasticity of the cloud is useful for handling workload peaks, but the cost can add up very quickly. This is where a hybrid architecture can make a lot of sense.
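To make the hybrid idea concrete, here is a minimal sketch of a data-routing policy. The tiers, field names, and rules are invented for illustration - the point is simply that hot, recent, or alert-level readings go to the elastic cloud path while bulk history stays on cheaper on-premises storage.

```python
from datetime import datetime, timedelta, timezone

def route_reading(reading, now):
    # reading: dict with "timestamp" and "priority" (hypothetical schema).
    # Hot path: alerts and anything less than a day old go to the cloud
    # for near real-time analysis; everything else stays on-premises.
    age = now - reading["timestamp"]
    if reading["priority"] == "alert" or age < timedelta(hours=24):
        return "cloud"
    return "on_premises"

now = datetime(2019, 6, 1, tzinfo=timezone.utc)
fresh = {"timestamp": now - timedelta(hours=1), "priority": "normal"}
old = {"timestamp": now - timedelta(days=30), "priority": "normal"}
old_alert = {"timestamp": now - timedelta(days=30), "priority": "alert"}
```

In a real deployment this decision would of course be driven by query patterns and cost models rather than a hard-coded age cutoff.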
Also, most of the data is only useful when viewed as a trend over time, and storing and analyzing time-series data is not trivial. Most traditional databases are designed to capture a single current value for each field, while time-series databases need to capture many values for that field, each with a time stamp. Managing the timestamp/value pairings efficiently is what makes time-series databases particularly useful for analyzing trends, which is critical for IoT applications. But such database systems are often complex and expensive.
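The difference between the two models can be sketched in a few lines. This toy in-memory store (the sensor name and readings are made up) keeps every timestamped value per sensor rather than just the latest one, which is exactly what lets you query a trend instead of a snapshot:

```python
from datetime import datetime, timedelta

class TimeSeriesStore:
    """Toy time-series store: (timestamp, value) pairs per sensor."""

    def __init__(self):
        self.series = {}  # sensor_id -> list of (timestamp, value)

    def record(self, sensor_id, timestamp, value):
        self.series.setdefault(sensor_id, []).append((timestamp, value))

    def window(self, sensor_id, start, end):
        # Return all readings in [start, end) - the trend, not a snapshot.
        return [(t, v) for (t, v) in self.series.get(sensor_id, [])
                if start <= t < end]

store = TimeSeriesStore()
t0 = datetime(2019, 1, 1)
for minute, temp in enumerate([71.2, 71.9, 73.4, 75.8]):
    store.record("motor-temp", t0 + timedelta(minutes=minute), temp)

# A traditional "one value per field" table would hold only 75.8;
# the time-series view exposes the rising trend.
trend = store.window("motor-temp", t0, t0 + timedelta(minutes=10))
```

Production time-series databases add compression, downsampling, and retention policies on top of this basic pairing - which is where the complexity and cost come from.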
Finally, the trend analysis itself is perhaps the greatest challenge. Sure, the basic idea is simple: your sensor measures the temperature of a particular component and if that reaches a certain threshold, you sound an alarm. But let’s face it, this example is trivial. Your machine already does that without any IoT infrastructure - just think of all the warning lights in your car. To get some value out of your IoT investment, you need to raise the bar on the data analytics.
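Just to underline how trivial that baseline case is, it fits in a couple of lines (the sensor and threshold are invented for illustration):

```python
# Single-sensor threshold alarm - the case your machine's built-in
# warning lights already handle without any IoT infrastructure.
TEMP_THRESHOLD_C = 90.0  # hypothetical limit for this component

def check_alarm(temperature_c):
    return temperature_c >= TEMP_THRESHOLD_C
```

Anything an `if` statement can express is not where the IoT value lies.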
What you need is a digital model of your machine where you can analyze the machine holistically, combining data from multiple sensors and examining how they influence each other. You can call it a digital twin, digital simulator, digital avatar, or cyber object - but you need it. You need to build this model to analyze your sensor data in a way that yields a recurring benefit that justifies the IoT investment. You will end up with a specific model for every type of machine, and to build it you need a data scientist but also someone who really, really understands the machine and its inner workings. And that’s the challenge. That’s why there aren’t that many digital twin models available.
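A toy sketch can show what "holistic" means here. Every sensor name, weight, and the health formula below is invented for illustration - the point is the cross-sensor reasoning a single threshold cannot express: high load makes a given temperature less alarming, while heat combined with vibration is worse than either alone.

```python
def health_score(readings):
    """Toy digital-twin health model combining several sensors.

    All coefficients are hypothetical; a real model would be built by a
    data scientist together with someone who deeply knows the machine.
    """
    temp = readings["temperature_c"]
    vib = readings["vibration_mm_s"]
    load = readings["load_pct"]
    # Expected temperature rises with load, so the model judges the
    # temperature relative to the operating point, not in isolation.
    expected_temp = 40.0 + 0.5 * load
    temp_excess = max(0.0, temp - expected_temp)
    score = 100.0 - 2.0 * temp_excess - 5.0 * vib
    return max(0.0, min(100.0, score))

healthy = health_score({"temperature_c": 62.0, "vibration_mm_s": 1.0,
                        "load_pct": 50.0})
degraded = health_score({"temperature_c": 85.0, "vibration_mm_s": 6.0,
                         "load_pct": 50.0})
```

The same 85 °C reading that looks fine at full load would be alarming at idle - that context is exactly what the machine expert contributes and what a bare threshold misses.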
The digital twin is not just about pointing a machine learning system at a data lake to see what you can learn from the data. Sure, you will be able to discover new, previously unknown patterns or relationships this way, but that will likely yield a one-time benefit. For example, you might discover a particular vulnerability in a specific component, which can be extremely valuable. But once you have redesigned the part and fixed the problem, the IoT data no longer delivers value. The value of all that IoT investment was a one-time benefit. What you need is recurring value, because without recurring value, nobody will pay any recurring cost for your IoT solution.
Ultimately, the recurring benefit yields the ROI that justifies the substantial cost of your IoT investment. For example, the recurring benefit can come in the form of a predictive maintenance application that determines when and what type of service should be performed to prevent any unplanned downtime or performance degradation. Now, that can save a lot of money but only if you have a digital twin model that can make such predictions from all that IoT data.
I remain extremely bullish on IoT. The analysts are estimating that there are already 7 billion IoT devices worldwide. That’s more than PCs! IoT can bring the transformational power of the internet to a huge number of end-nodes, creating an amazing benefit. But we are not quite there yet.