A digital twin is a digital replica of a physical entity, such as a person, a device, a piece of manufacturing equipment, or even a plane or a car. The idea is to run a real-time simulation of the physical asset or human to determine when problems are likely to occur and to fix them proactively, before they actually arise.

Although the roles of digital twins vary a great deal, the connection is established using real-time sensor data that syncs the digital twin living in a virtual world with the physical twin we can actually touch. This synchronized simulation leverages IoT (Internet of Things), AI (artificial intelligence), machine learning, and analytics with spatial graphics to create a working model that updates as the physical entity changes.
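
To make the sync idea concrete, here is a minimal, purely illustrative sketch of the pattern: a twin object in software whose state is updated from a stream of sensor readings. The class and method names (`RobotTwin`, `apply_reading`) are hypothetical, not any vendor's API, and the telemetry is simulated.

```python
from dataclasses import dataclass

@dataclass
class RobotTwin:
    # Illustrative twin of a factory robot; fields stand in for real telemetry.
    temperature_c: float = 20.0
    hydraulic_pressure_kpa: float = 1000.0
    last_update: float = 0.0

    def apply_reading(self, sensor: str, value: float, ts: float) -> None:
        """Update the twin's state from one physical sensor reading."""
        if sensor == "temperature":
            self.temperature_c = value
        elif sensor == "pressure":
            self.hydraulic_pressure_kpa = value
        self.last_update = ts

twin = RobotTwin()
# A stream of (sensor, value, timestamp) tuples standing in for IoT telemetry.
for reading in [("temperature", 72.5, 1.0), ("pressure", 950.0, 2.0)]:
    twin.apply_reading(*reading)

print(twin.temperature_c)  # 72.5
```

In a real deployment the readings would arrive over an IoT message broker rather than a Python list, but the shape of the loop is the same: the virtual state trails the physical state by one update.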

There are many use cases for digital twins; the most common is a twin that represents a machine, such as factory equipment or robotics. The twin simulates the machine to determine when proactive maintenance should occur and, if implemented properly, should improve machine productivity and uptime.
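
The kind of proactive-maintenance rule such a twin runs can be sketched in a few lines. This is a hedged, made-up example: the wear model, the limit of 100 wear units, and the 48-hour lead time are all assumptions for illustration, not figures from any real system.

```python
def hours_until_service(runtime_hours: float, wear_rate: float,
                        wear_limit: float = 100.0) -> float:
    """Estimate hours of operation left before simulated wear hits the limit."""
    wear = runtime_hours * wear_rate          # linear wear model (an assumption)
    remaining_wear = max(wear_limit - wear, 0.0)
    return remaining_wear / wear_rate

def needs_maintenance(runtime_hours: float, wear_rate: float,
                      lead_time_hours: float = 48.0) -> bool:
    """Flag maintenance once the service point is within the lead time."""
    return hours_until_service(runtime_hours, wear_rate) <= lead_time_hours

print(needs_maintenance(900, 0.1))  # False: service point still far off
print(needs_maintenance(960, 0.1))  # True: within the maintenance window
```

The value of the twin is exactly this kind of forecast; the risk, as the examples below show, is a wear model or threshold that is simply wrong.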

At issue is that most digital twins exist in public clouds, for the obvious reason that they are much cheaper to run and can access all of the cloud’s storage and processing, as well as special services such as AI and analytics, to support the twin. Moreover, the cloud offers purpose-built services for creating and running twins. 

The ease of building, provisioning, and deploying twins has led to a few cases where the digital twin becomes an evil twin and does more harm than good. Some examples I’ve seen include:

  • In manufacturing, the twin over- or underestimates the proactive maintenance needed to fix issues before they become real problems. Companies fix things identified in the twin’s simulation that don’t actually need fixing; for example, replacing the hydraulic fluid in a factory robot three times more often than necessary. Or worse, the twin suggests configuration changes that result in overheating and a fire. That last one really happened.
  • In the transportation industry, a digital twin could shut down a jet engine because of what the twin identified as a fire but what turned out to be a faulty sensor.
  • In healthcare, a patient was flagged as having factors that would likely lead to a stroke, but the alert turned out to be a problem with the predictive analytics model.
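
A common thread in the engine and stroke examples is a drastic action taken on a single unvalidated signal. A minimal defensive sketch, assuming a hypothetical set of redundant sensors (the function and sensor names are invented for illustration), is to require corroboration before the twin escalates:

```python
def confirmed_alarm(readings: dict, threshold: float, quorum: int = 2) -> bool:
    """Escalate only if at least `quorum` independent sensors exceed the threshold."""
    exceeding = [name for name, value in readings.items() if value > threshold]
    return len(exceeding) >= quorum

# One faulty exhaust-gas-temperature sensor spikes; the redundant ones disagree,
# so the twin should not conclude there is a fire.
egt_sensors = {"egt_1": 1450.0, "egt_2": 610.0, "egt_3": 605.0}
print(confirmed_alarm(egt_sensors, threshold=900.0))  # False
```

Real avionics and clinical systems use far more sophisticated validation, but even this crude quorum check would have caught the faulty-sensor cases above.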

The point I’m making is that attaching simulations to real devices, machines, and even humans leaves a great deal of room for error. Most of these errors can be traced back to people creating twins for the first time, who don’t find their mistakes until shortly after deployment. The problem is that those mistakes could crash an airplane, scare the hell out of a patient, or set a factory robot aflame.

With the cloud making digital twins more affordable and faster to deploy, I expect issues like these to increase. Perhaps the problems aren’t evil, but they’re certainly avoidable.