Making digital models of physical objects is nothing new; we have been doing it for decades. From models of bridges to simulations of thunderstorms to maps of neuron firings, digital models help us make sense of complex real-world phenomena in a controlled, virtual environment. But what if we could make them better? What if the model could make changes to the real world in real time based on its predictions? This is the basis for digital twins.
What is a Digital Twin?
A digital twin is a virtual model of a physical object or process that receives data from the physical thing, updates itself, and sends predictions or control signals back in real time. NASA is often credited with the first use of a digital twin, creating one for a spacecraft in 2010. Digital twins are especially useful for mimicking complex objects that contain many interacting subprocesses, and there are many situations where they would be a useful addition to managing and maintaining such objects. Suppose we place 10 sensors on a bridge that measure vibrations throughout the day and send this information to the digital twin every minute. The digital twin could then predict weak points on the bridge and suggest where to reinforce the metal to keep the structure stable. Once those changes are made, the sensors pick up the difference on the real bridge and send the updates to the digital twin, which then revises its predictions. We can also use the digital twin to simulate changes before applying them to the real object, to see how the object will react. Perhaps pouring concrete into a crack in the bridge would cause it to expand and produce more cracks; that can be simulated before any damaging changes are made to the real thing. This would be an efficient system for building better bridges in the real world!
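To make that loop concrete, here is a minimal sketch in Python of one pass through such a bridge twin. Everything in it is illustrative: the class and method names (BridgeTwin, ingest, predict_weak_points, simulate_change), the thresholds, and the "model" itself, which is just a simple statistical check on vibration levels rather than a real structural simulation.

```python
import statistics
from typing import Dict, List

# Hypothetical sketch of a bridge digital twin's update cycle.
# Sensor IDs, thresholds, and the "model" are illustrative only.

class BridgeTwin:
    def __init__(self, sensor_ids: List[str], alert_threshold: float = 2.0):
        self.alert_threshold = alert_threshold
        # Rolling history of vibration readings per sensor.
        self.history: Dict[str, List[float]] = {sid: [] for sid in sensor_ids}

    def ingest(self, readings: Dict[str, float]) -> None:
        """Receive one minute's readings from the physical bridge."""
        for sensor_id, value in readings.items():
            self.history[sensor_id].append(value)

    def predict_weak_points(self) -> List[str]:
        """Flag sensors whose latest vibration is far above their baseline."""
        weak = []
        for sensor_id, values in self.history.items():
            if len(values) < 10:
                continue  # not enough data to judge yet
            baseline = statistics.mean(values[:-1])
            spread = statistics.pstdev(values[:-1]) or 1e-9
            if (values[-1] - baseline) / spread > self.alert_threshold:
                weak.append(sensor_id)
        return weak

    def simulate_change(self, sensor_id: str, damping_factor: float) -> float:
        """Crudely estimate vibration at a sensor if reinforcement is added."""
        return self.history[sensor_id][-1] * damping_factor


# Simulate fifteen minutes of readings; sensor "s3" starts vibrating harder.
twin = BridgeTwin(sensor_ids=[f"s{i}" for i in range(10)])
for minute in range(15):
    readings = {f"s{i}": 0.5 for i in range(10)}
    if minute >= 12:
        readings["s3"] = 2.5  # an emerging weak point
    twin.ingest(readings)

# Predict weak points and test a fix virtually before touching the real bridge.
for weak_sensor in twin.predict_weak_points():
    projected = twin.simulate_change(weak_sensor, damping_factor=0.6)
    print(f"Reinforce near {weak_sensor}: vibration projected to drop to {projected:.2f}")
```

In a real deployment the readings would stream in from the physical sensors, and the twin's suggestions (or commands) would flow back out, closing the loop described above.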
Aren't digital twins commonly used already?
Not really. We build virtual models for almost everything now, but most of them are not digital twins. The key difference is that a digital twin is a closed-loop system: information from the physical object is passed to the digital twin, and the digital twin sends information back to the object in real time. Today, most virtual models receive data (often not in real time) but do not send anything back. Instead, engineers and scientists receive the predictions and make the updates themselves, rather than the model exchanging information directly with the object's sensors. This seemingly slight difference has a large effect on complexity and the computational power required, but the potential payoff is what has many researchers excited about digital twins.
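The distinction is easier to see as two schematic loops. Both functions below are sketches under assumed interfaces; update, predict, decide, read, and apply are placeholder names, not real APIs.

```python
# Open loop: the model only consumes data; people act on its output later.
def open_loop_step(model, sensors):
    data = sensors.read()        # often a batch export, not real time
    model.update(data)
    report = model.predict()
    return report                # engineers read this and adjust the asset by hand

# Closed loop (digital twin): the model also pushes decisions back, in real time.
def closed_loop_step(twin, sensors, actuators):
    data = sensors.read()        # streamed continuously
    twin.update(data)
    commands = twin.decide()     # e.g. "reinforce joint 3", "reduce load"
    actuators.apply(commands)    # the physical object changes immediately
```

The extra arrow back to the actuators is exactly where the added complexity, and the added risk, comes from.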
How hard can it be?
Unfortunately, very. Many technical problems need to be solved before we're ready to deploy digital twins. Whenever we make a process more automated, there are more steps in which errors can occur without a person checking the validity of each update. There is no precedent for these models, so how do we determine how well a digital twin replicates the physical object?

If the digital twin makes a real-time prediction with high stakes (e.g., stopping an autonomous car because a pedestrian is crossing the street), how can we trust that the model will make the right choice? What if a pedestrian crosses the street when they're not supposed to? We still need the car to stop. Our models need to handle rare events and outliers, and this is very difficult because underneath the hood of machine learning is statistics: machines make decisions based on how likely an outcome is given a certain input, which makes them weak when it comes to unlikely events.

We also have to account for human error. What if someone types in a data point incorrectly or places a sensor in the wrong spot? If incorrect data goes in, incorrect predictions come out, so some sort of validity check would be needed before the data reaches the digital twin. Along the same train of thought, what is to prevent the physical object and the digital twin from passing incorrect data back and forth that gets progressively worse? How can we detect and fix errors quickly, when the system updates every minute, second, or millisecond, before the model becomes unusable?

Finally, where do we hold all of this data? Updating every second can be very computationally demanding. How can we store the data efficiently without completely overwhelming our computers? These are some of the important questions being kept in mind during the development of digital twins.
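One of the simpler safeguards mentioned above, checking data validity before it reaches the twin, can be sketched as a gate in front of the update step. The ranges and rules here are invented for illustration; a real deployment would derive them from the physics of the asset and its sensors.

```python
from typing import Dict, Optional

# Illustrative validity gate in front of a digital twin's update step.
# Limits are made up; real bounds would come from domain knowledge.

PLAUSIBLE_RANGE = (0.0, 10.0)   # readings outside this range are rejected
MAX_JUMP = 3.0                  # largest believable change between two readings

def validate(reading: float, previous: Optional[float]) -> bool:
    """Reject values that are physically implausible or change too abruptly."""
    lo, hi = PLAUSIBLE_RANGE
    if not (lo <= reading <= hi):
        return False                      # e.g. a typo or a failed sensor
    if previous is not None and abs(reading - previous) > MAX_JUMP:
        return False                      # sudden spike: hold for human review
    return True

def filter_readings(new: Dict[str, float], last: Dict[str, float]) -> Dict[str, float]:
    """Pass only plausible readings through to the twin; quarantine the rest."""
    accepted = {}
    for sensor_id, value in new.items():
        if validate(value, last.get(sensor_id)):
            accepted[sensor_id] = value
        else:
            print(f"Quarantined {sensor_id}={value}; flagged for manual check")
    return accepted

# Example: one reading looks like a typo (25.0 instead of 2.5).
last_minute = {"s0": 0.5, "s1": 0.6}
this_minute = {"s0": 0.5, "s1": 25.0}
clean = filter_readings(this_minute, last_minute)
# Only the readings in `clean` would be passed on to the twin's update step.
```

A gate like this does not solve the harder problems of rare events or compounding errors, but it shows how a single, cheap check can keep obviously bad data from ever entering the loop.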
What does this mean for the future?
The advancement of digital twins is an exciting step toward an integrated system between the physical and digital worlds. This technology could save lives, for example by creating digital twins of patients that determine the best method of treatment based on a patient's current readings. However, we need to be careful about how quickly we integrate this technology, and about who manages and stores all of this vital data.