Bad Robot: the rise of adversarial AI
As much as artificial intelligence and machine learning systems can help protect organisations from cyberattacks, the same technology can be used against them.
A relatively new threat on the cyber security block, adversarial AI is a catch-all term that covers both attacks by AI and attacks on AI systems.
The first type is malicious AI, or AI used as a weapon: it often takes the form of an AI-generated deepfake used to bypass manual or automated identity verification. It can also materialise as smart malware with evasive behaviour, or as personalised phishing.
Less obvious and often harder to detect, the second form of adversarial AI involves attacks against AI, or AI-as-victim: the data used in the machine learning process that underpins AI decision-making is poisoned or disrupted so that the system draws the wrong conclusions.
Criminals working on this type of attack need to be able to access the AI datasets – either using a ransomware drop attached to an email or via a threat actor operating from within an organisation.
Once they obtain this dataset they can train a machine learning-based security system to – for instance – accept a malicious signature as benign, so that it slips past the security operations centre (SOC) team and is left for attackers to exploit.
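To make the mechanics concrete, the short Python sketch below shows what that kind of "label flipping" looks like against a toy classifier. The data, features and model are invented for illustration and stand in for a real detection pipeline rather than describing any particular vendor's system.

```python
# Illustrative only: a toy "label flipping" poisoning attack. The features,
# data and classifier are invented for this sketch; real attacks target far
# larger training pipelines, but the principle is the same.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "telemetry" features: benign samples cluster low, malicious high.
benign = rng.normal(loc=0.2, scale=0.1, size=(200, 4))
malicious = rng.normal(loc=0.8, scale=0.1, size=(200, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)        # 0 = benign, 1 = malicious

# An attacker with write access to the dataset relabels malicious samples as
# benign (an aggressive flip here, purely to make the effect visible in a toy).
poisoned_y = y.copy()
poisoned_y[200:350] = 0

clean_model = LogisticRegression().fit(X, y)
poisoned_model = LogisticRegression().fit(X, poisoned_y)

# A fresh malicious-looking sample: the clean model blocks it, while the
# poisoned model now scores it as benign and waves it through.
sample = rng.normal(loc=0.8, scale=0.1, size=(1, 4))
for name, model in [("clean", clean_model), ("poisoned", poisoned_model)]:
    label = model.predict(sample)[0]
    p = model.predict_proba(sample)[0, 1]
    print(f"{name:8s} verdict={label}  P(malicious)={p:.2f}")
```

In a live SOC pipeline the flip would be far subtler and spread across retraining cycles – one reason this class of attack is so hard to spot.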
Attacks using AI
Israeli start-up Adversa – co-founded by cyber security experts Alex Polyakov and Eugene Neelou – is one of a small but growing number of companies starting to address this next wave of AI vulnerabilities.
“There’s an established academic field about attacking and defending ML algorithms called adversarial machine learning. When we started, there were only a few works, yet today there are over 5,000 academic research papers exploring vulnerabilities in AI systems,” says Neelou.
While the company’s technologies are focused on preventing attacks against AI systems – including a tool for the penetration testing of AI biometric systems – Polyakov points to several recent cases which demonstrate how AI is already being used as a weapon.
The main incidents relate to two deepfake audio scams that have occurred over the last couple of years, and show how AI can be used to alter a speaker’s voice so that it passes convincingly for that of a specific chief executive.
The first involves an incident in the UAE in which fraudsters used a deepfaked voice of a company executive to fool a bank manager – who recognised the client’s voice – into transferring $35m to accounts under their control, supposedly so the firm could complete an acquisition deal.
A year earlier, the chief executive of a UK energy firm was tricked into sending $240,000 to a Hungarian supplier after receiving a phone call from what sounded like the CEO of his company’s parent firm in Germany. He was told the transfer was urgent and that the funds had to be sent within the hour. He complied, and the attackers were never caught.
Although much AI-generated voice still sounds robotic to many ears, experts warn that it is becoming more realistic. The more data you have, and the better the quality of the audio, the better the resulting voice clone will be. And opportunities to capture voices have grown with the increased use of video conferencing during the pandemic.
Nicolay Gaubitch, director of research at Pindrop – a security firm that risk-scores phone calls to detect fraud and authenticate callers – says that voice synthesis (making a machine sound like somebody) and voice conversion (making a human speaker sound like someone else) are growing trends, and that fraudsters are increasingly taking advantage of innovative audio tools.
“Throughout the pandemic the use of video comms for both personal and work use increased. This has opened up new opportunities for fraudsters who already have voice channels as one of their preferred means of attack,” he says.
Polyakov adds: “We expect that such examples will grow, and we will see examples of not only audio but video such as Zoom calls with synthetically generated friends or bosses asking you to perform fraud actions.”
Attacks against AI
While AI and ML-based solutions are a popular defence mechanism in cyber security, if a threat actor can get hold of the underlying dataset and feed it the wrong training data, the system may start to behave erratically and become open to exploitation.
“AI in this context works very similarly to AI itself – but for the bad people,” explains Brooks Wallace, VP of sales at Deep Instinct – a firm that sells deep learning-based cyber security solutions to combat this threat.
He continues: “It builds on a similar model to what the security software specialists using ML are building, but it’s injecting malicious code into the AI experience, which causes it to make a mistake or become ineffective.”
“If you can fool the machine learning model to either subvert it or force it to shut down, it can disable the security of an organization and that opens up the door for people to move laterally through a network inflicting damage as they go because there’s no security there,” Wallace adds.
According to Neelou there are “dozens of ways” to fool smart AI systems in mission-critical applications.
“Automated AI decisions could be easily manipulated, confidential details about AI algorithms and data could be extracted, and live AI systems could be infected to produce controllable or unusable results.”
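The first of those – manipulated decisions – can be illustrated with a minimal evasion sketch in the spirit of the fast gradient sign method (FGSM). The “detector”, data and perturbation budget below are invented for the example and have nothing to do with Adversa’s own tooling.

```python
# Illustrative only: a minimal evasion attack against a toy linear "detector",
# in the spirit of the fast gradient sign method (FGSM). The data, model and
# feature meanings are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy detector: class 1 = "block", class 0 = "allow", 10 numeric features.
X = np.vstack([rng.normal(0.3, 0.1, (300, 10)),   # benign traffic
               rng.normal(0.7, 0.1, (300, 10))])  # malicious traffic
y = np.array([0] * 300 + [1] * 300)
detector = LogisticRegression().fit(X, y)

# A sample the detector correctly blocks.
x = rng.normal(0.7, 0.1, (1, 10))
print("before:", detector.predict(x)[0], detector.predict_proba(x)[0, 1])

# The attack direction is the sign of the gradient of the detector's score
# with respect to the input; for a linear model that is simply sign(w), and
# the smallest L-infinity budget that crosses the boundary has a closed form.
w = detector.coef_[0]
logit = detector.decision_function(x)[0]
epsilon = logit / np.abs(w).sum() + 0.01       # just past the boundary

x_adv = x - epsilon * np.sign(w)               # nudge every feature slightly

# Same sample, nudged by the same amount in every feature, opposite verdict.
print("after: ", detector.predict(x_adv)[0], detector.predict_proba(x_adv)[0, 1])
print("per-feature change:", round(float(epsilon), 3))
```

The attack needs nothing more than the model’s own weights (or, in a black-box setting, estimates of its gradients): the defender has to be robust everywhere, while the attacker only needs one input that crosses the decision boundary.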
Wallace adds that a major concern with this type of attack is that it can be hard to detect – giving threat actors extra dwell time to wreak havoc on a system.
“Once SOC has identified a potential issue it’s often already too late because they’ve been exposed and all their customers have been exposed – from a supply chain attack perspective, that’s scary,” he says.
To counter these attacks Deep Instinct – whose clients include mid-enterprise healthcare firm One Blood and Japanese watch manufacturer Seiko – is using a deep learning framework for cyber security.
Deep learning algorithms – which are also used in applications by the likes of Amazon and Netflix, and in autonomous vehicle systems – are designed to work like the human brain, enabling the model to decide for itself what is ‘good’ and ‘bad’ data rather than relying on a constant feed of curated training data.
Wallace claims that these algorithms not only predict and prevent attacks against AI before they take place, but also significantly lower the number of alerts and false positives that firms receive – enough, he estimates, for the SOC team to claw back up to a quarter of their working week.
Others believe that more work is needed at research level on deep learning before it is more widely deployed as a preventative measure in cyber security systems; they also point out that, at an operational level, more resources are required.
“To date, no efficient out-of-the-box defences for AI systems exist, and companies implement customized protections case-by-case,” says Neelou.
Future threat?
It’s also worth noting that there have been no widely reported cases of attacks on AI through manipulation of its training datasets – yet. There have been more basic cases of criminals duping biometric facial recognition systems using manipulated personal information and high-definition photographs bought on the black market, but this type of activity hasn’t involved altering the systems’ ML processes.
As with quantum computing, people know adversarial AI is coming, but hard examples are few and far between, and experts tend to present the issue as a series of ‘what if?’ scenarios.
What if financial models are poisoned with wrong data? What if your super expensive AI model is stolen through a model extraction vulnerability? What if a self-driving car could be fooled by an evasion attack?
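The model-extraction scenario, at least, is easy to sketch: an attacker who can only query a deployed model’s prediction API labels their own inputs with its answers and trains a look-alike surrogate. In the Python sketch below, the victim model, the stand-in “API” and the query budget are all invented for illustration.

```python
# Illustrative only: "model extraction" by querying a black-box prediction API.
# The victim model, data and query budget are invented for this sketch; real
# extraction attacks work against remote APIs the attacker does not own.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# The "victim": a proprietary model the attacker can query but not inspect.
X_private = rng.normal(size=(2000, 8))
y_private = (X_private[:, :4].sum(axis=1) > 0).astype(int)
victim = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_private, y_private)

def prediction_api(queries):
    """Stand-in for a remote scoring endpoint: returns labels only."""
    return victim.predict(queries)

# The attacker sends their own synthetic queries and keeps the answers...
queries = rng.normal(size=(1000, 8))
stolen_labels = prediction_api(queries)

# ...then fits a cheap surrogate on the query/answer pairs.
surrogate = LogisticRegression().fit(queries, stolen_labels)

# Agreement between surrogate and victim on fresh inputs approximates
# how much of the model's behaviour has been "stolen".
test = rng.normal(size=(2000, 8))
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of fresh inputs")
```

The surrogate will never match the victim exactly, but high agreement on fresh inputs is often enough – whether the goal is to avoid paying for the original or to study it offline for further attacks.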
One thing Polyakov is certain about, however, is that while attacks on AI will be less common than attacks on software in 2022, if any of these ‘what if’ examples materialise they will be responsible for far higher losses.
Taking it beyond cyber security: what if AI techniques are used to reverse-engineer sensor data to find the underlying model and decipher sensitive parameters of systems such as nuclear reactors?
“In the real world we anticipate a growth of attacks against AI systems with the focus on mission-critical applications where the cost of error is high (as well as attacker’s benefits),” says Neelou.
“Other areas that could be in danger in the near future are those that already actively use AI, such as financial companies and internet companies, as well as all new autonomous and smart devices,” he adds.
Just as cybercrime has become more accessible to less skilled wannabe hackers through the proliferation of ransomware-as-a-service offerings, Neelou notes that while companies may spend millions of dollars developing their AI models, attackers will only need “dozens of dollars” to run some of these AI attacks.
“It requires some expertise to know how to craft efficient attacks but as we saw in other security domains, attack tools are democratized quite rapidly,” he says.