The outlook for the new decade is not entirely rosy: the gloomier scenarios include serious cyber attacks. Attacks on artificial intelligence and machine learning are likely to come into focus this year, increasing cyber risks.
Such attacks are difficult for companies to understand and open up new sabotage opportunities for state-sponsored as well as criminal attackers. When implementing AI applications, companies should therefore ensure that the technology and data supply chains are fully secured from the start.
Cyber risks: Artificial intelligence is becoming a new target
Artificial intelligence offers companies attractive and promising prospects. However, this also means that cybercriminals and state-sponsored attackers are increasingly targeting AI. Plausible attack scenarios are already known: for example, attackers can “contaminate” the data sets used to train machine-learning algorithms with incorrect information in order to steer the AI toward wrong conclusions.
It is also possible for attackers to delete data in a targeted manner, or to change statistical weightings within data sets, in order to force the AI toward a desired output (the so-called bias effect in AI).
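The poisoning idea described above can be made concrete in a few lines. The following is a minimal sketch with made-up two-dimensional data and a toy nearest-centroid classifier (all names and values are hypothetical, not taken from any real system): injecting mislabeled samples drags a class centroid and flips the model's decision on a borderline input.

```python
# Minimal nearest-centroid classifier on hypothetical 2-D data,
# used to illustrate a data-poisoning attack via mislabeled samples.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    # samples: list of (features, label) -> per-class centroids
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: sqdist(model[label], x))

# Clean (hypothetical) training data
clean = [((0.0, 0.0), "benign"), ((0.2, 0.1), "benign"),
         ((1.0, 1.0), "malignant"), ((0.9, 1.1), "malignant")]

# Attacker injects mislabeled samples: malignant-looking points tagged "benign"
poisoned = clean + [((1.0, 1.0), "benign")] * 4

borderline = (0.6, 0.6)
print(predict(train(clean), borderline))     # malignant
print(predict(train(poisoned), borderline))  # benign -> a false negative
```

The poisoned samples pull the “benign” centroid toward the malignant region, so a suspicious borderline input is now misclassified as harmless.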
Cyber risks to medical diagnostic procedures
These approaches can be used, among other things, to sabotage AI-based medical diagnostics. For example, attackers could aim to alter lung-cancer diagnoses made by image recognition on imaging data. If the training data is manipulated, the incorrectly trained AI system tends to produce wrong diagnoses. Such a manipulated system makes a statistically increased number of wrong decisions, both “false positive” and “false negative”: malignant tissue changes are either not recognized as such, or the program reports a diagnosis of lung cancer even though no malignant changes are actually present.
The wrong diagnoses have serious health consequences for patients and can cause severe economic damage to device manufacturers, health insurers, or users. The same applies to AI systems as to human radiologists: a diagnosis should be confirmed by at least two independently trained systems. But what if both systems were trained with manipulated data?
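The two-system rule mentioned above can be sketched as a simple consensus check (the `consensus_diagnosis` helper below is hypothetical, not a real medical API):

```python
# Hypothetical consensus check: accept an automated diagnosis only
# when two independently trained systems agree; otherwise escalate.

def consensus_diagnosis(result_a, result_b):
    if result_a == result_b:
        return result_a
    return "escalate to human radiologist"

print(consensus_diagnosis("benign", "benign"))     # benign
print(consensus_diagnosis("benign", "malignant"))  # escalate to human radiologist
```

As the text points out, this safeguard fails if both systems were trained on the same manipulated data: the independence must extend to the training data itself.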
Blackmail through manipulation of company data
There is a potentially high risk that data manipulation will also increasingly be used for “data ransom” in the future. To do this, attackers break into databases and alter data records; the sheer volume of big data makes such changes hard to detect. A ransom can then be extorted in exchange for information about which data was manipulated, and how. The blackmail could, for example, target technical data that the company uses for product quality control or predictive maintenance.
Such manipulation would cause considerable quality losses or production downtime, with potentially severe economic consequences for the equipment manufacturer or the manufacturing company. Added to this is the immense pressure on the blackmail victims. The prospects for criminal gangs to extort ransom are good: data-driven business models and processes could, to a certain extent, be “managed” by criminals.
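One way to make such database manipulation detectable is to seal each record with a keyed hash (HMAC) whose key is kept outside the database. A minimal sketch using Python's standard library (the key, record fields, and helper names are illustrative):

```python
import hmac
import hashlib
import json

# Illustrative secret; in practice the key lives outside the database,
# e.g. in an HSM or key-management service.
KEY = b"example-integrity-key"

def seal(record):
    # Canonical serialization so the same record always yields the same tag
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def verify(record, tag):
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

rec = {"part": "A-113", "tolerance_mm": 0.05}   # hypothetical QC record
tag = seal(rec)
print(verify(rec, tag))                         # True

tampered = dict(rec, tolerance_mm=0.50)         # attacker widens the tolerance
print(verify(tampered, tag))                    # False
```

An attacker who can change records but not the key can no longer manipulate data silently; every altered record fails verification.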
Cyber risks: attacks at every point in the supply chain
At what point do attackers gain access to the data? In principle, manipulation can take place at any unsecured point in the supply chain, for example at the sensor itself. The intervention could happen directly on the sensor, or the sensor could be disturbed by external factors. Researchers recently demonstrated that the microphones of voice-controlled digital assistants, for example in autonomous vehicles, can be manipulated with a laser from a distance of up to 110 meters.
In theory, the AI in a “smart” car could be trained to open the doors to strangers and grant them rights on the on-board system. Data can also be altered in transit using a classic man-in-the-middle attack: the criminals intercept the data, manipulate it, and then forward it to its destination. Another possibility requires breaking into the database system, where the attacker purposefully changes the stored data.
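Against manipulation in transit, authenticated transmission helps: each frame carries a sequence number plus an HMAC over sequence and payload, so a man-in-the-middle can neither modify nor replay messages unnoticed. A minimal sketch assuming a pre-shared session key (in practice, TLS provides this; all names below are hypothetical):

```python
import hmac
import hashlib
import struct

KEY = b"shared-session-key"  # assumed pre-shared / negotiated key

def send(seq, payload):
    # Frame = 8-byte sequence number + payload + 32-byte HMAC-SHA256 tag
    msg = struct.pack(">Q", seq) + payload
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

def receive(frame, expected_seq):
    msg, tag = frame[:-32], frame[-32:]
    if not hmac.compare_digest(hmac.new(KEY, msg, hashlib.sha256).digest(), tag):
        return None  # modified in transit
    (seq,) = struct.unpack(">Q", msg[:8])
    if seq != expected_seq:
        return None  # replayed or out-of-order frame
    return msg[8:]

frame = send(1, b"rpm=3000")
print(receive(frame, 1))                       # b'rpm=3000'

tampered = frame[:9] + b"9" + frame[10:]       # man-in-the-middle flips a byte
print(receive(tampered, 1))                    # None
```

The sequence number matters: without it, an attacker could simply re-send an old, validly tagged frame.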
Common measures only offer basic protection
How can attacks on AI and machine learning be prevented? To reduce cyber risks and ward off attacks, sensors and other data sources should first be protected against physical access as part of basic security, and sensor-side software should be updated regularly to close security gaps. Insecure hardware components must be replaced.
Sensor data should only be accepted if the sensor can be clearly identified – in effect, a zero-trust principle at the sensor level. In a further step, the data transmission paths must be secured with end-to-end encryption. Finally, baseline IT protection should be introduced at the server/database level, and unauthorized access should be detected with intrusion detection systems.
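The zero-trust principle at the sensor level can be sketched as a per-sensor key registry: data is accepted only if it verifies against the key of a registered sensor. All identifiers and keys below are hypothetical:

```python
import hmac
import hashlib

# Hypothetical registry: only sensors with a provisioned key are trusted.
SENSOR_KEYS = {"sensor-42": b"key-42"}

def accept(sensor_id, payload, tag):
    key = SENSOR_KEYS.get(sensor_id)
    if key is None:
        return False  # unknown sensor: reject by default (zero trust)
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

tag = hmac.new(b"key-42", b"23.5C", hashlib.sha256).hexdigest()
print(accept("sensor-42", b"23.5C", tag))   # True
print(accept("sensor-99", b"23.5C", tag))   # False: unregistered sensor
```

The default is rejection: a device that cannot prove its identity never gets its data into the training pipeline.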
The gold standard of data security requires additional measures. The trustworthiness of the data must be traceable from the data source all the way to its use in machine-learning training. This is conceivable through digital certificates across the entire supply chain, with which the integrity of data records can be validated. Newer distributed cryptographic and storage methods, such as blockchain (distributed ledger), can be used to store data immutably and protect it via consensus mechanisms.
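The ledger idea boils down to tamper-evident, chained records: each block stores a hash over the previous block's hash and its own record, so any later change invalidates everything downstream. A minimal single-node sketch (illustrative records only; a real distributed ledger adds replication and a consensus protocol):

```python
import hashlib
import json

def chain(records):
    # Build a tamper-evident chain: each block hashes its record
    # together with the previous block's hash.
    blocks, prev = [], "0" * 64
    for rec in records:
        h = hashlib.sha256((prev + json.dumps(rec, sort_keys=True)).encode()).hexdigest()
        blocks.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return blocks

def verify_chain(blocks):
    prev = "0" * 64
    for b in blocks:
        h = hashlib.sha256((prev + json.dumps(b["record"], sort_keys=True)).encode()).hexdigest()
        if b["prev"] != prev or b["hash"] != h:
            return False
        prev = h
    return True

readings = [{"t": 1, "temp": 20.1}, {"t": 2, "temp": 20.3}]  # hypothetical sensor data
blocks = chain(readings)
print(verify_chain(blocks))          # True

blocks[0]["record"]["temp"] = 99.9   # tamper with a stored value
print(verify_chain(blocks))          # False
```

A poisoned training record would thus be detectable as long as the chain heads are stored (or replicated) beyond the attacker's reach.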