We live in exciting times, in which technological developments are opening the door to services and devices never seen before. The Internet of Things (IoT) is one of these global technologies that interests us because of the possibilities it offers consumers, but at the same time it is one of the greatest sources of worry and doubt for ordinary people. To what extent can these technologies be trusted to be secure? Can they protect the integrity of our private data? Do we trust them?
This happens, to a greater or lesser extent, with every new technological wave. A relatively recent example is the use of credit cards online. The misgivings in the early days of online payments (fueled, it is true, by cases of fraudulent websites) raised an invisible barrier that kept consumers from feeling safe when shopping online. The curious thing is that, over time, buying online turned out to be much safer than, for example, handing a card to a waiter and losing sight of it for a few minutes. The same dilemma plays out today with mobile payments.
In the same way, suspicion of IoT systems and devices lies latent, fueled by the impression that any task performed by a human being will be safer or, put differently, that a connected machine-learning system can suffer unexpected or uncontrolled failures and therefore becomes an insecure system that is difficult to trust.
Some examples of systems that arouse this suspicion are autonomous cars (which are connected cars and part of the IoT ecosystem); a device that measures the exact dose of medicine for each patient and moment, and administers it; or a smart home where everything is controlled by artificial intelligence.
Why this mistrust? On the one hand, these are technologies still in full development. Autonomous cars, for example, remain a possibility for the near future. The current solutions are electric cars with a certain level of automation that can drive, under ideal conditions, without a driver on limited stretches of road. The driver remains an active and alert figure who should not leave the car “alone,” as recent accidents involving semi-autonomous cars have shown. Those cases generate some mistrust, but ultimately we are talking about human error (since in Level 3 autonomous cars the driver must remain alert and be prepared to take the controls if problems arise).
One of the consumer’s biggest fears is that unexpected failures will occur in the software that controls a self-driving car, and that those failures could have fatal consequences for the vehicle’s occupants. Or that the vehicle’s locking system has a bug that leaves the occupants trapped for hours, or that the intelligent system communicates poorly with connected traffic lights or with other vehicles.
The fears and imagined malfunctions in the consumer’s mind are endless. Likewise, with drug-delivery systems controlled by artificial intelligence, the main fear can be summed up in one question: what if the “machine” measures the dose wrong? Everything, in general, rests on the same principle: if you don’t know how something works, you distrust it. It is also possible that science fiction, with the dystopian futures imagined in so many well-known stories, is partly to blame for this mistrust.
Be that as it may, there is a long way to go, not only toward the commercial, definitive deployment of IoT devices and artificial intelligence, but also in public outreach. Only through outreach and a good communication strategy can the barrier of mistrust and fear of the unexpected be broken down.