Microsoft calls for more AI accountability
Even though cybercrime and online fraud make up half of the UK’s crime figures, only 1% of the country’s police force is dedicated to tackling them.
Businesses aren’t doing any better, according to cyber security firm Vanta, which reports that most dedicate only 9% of their IT budgets to cyber security, even though half of UK businesses experienced an attack last year.
The US’s Financial Industry Regulatory Authority described AI as an ‘emerging risk’ in its latest annual report, with GenAI becoming a practical tool for cybercriminals around the globe to disrupt businesses, individuals, and politics.
Security should be a top concern during 2024, an election year in almost 60 countries and one in which deepfakes and AI are proving a threat to democracy, with politicians worldwide falling victim to misleading yet realistic videos, audio clips, and images posted of them online.
Perhaps one of the reasons more resources aren’t dedicated to stamping out cyber crime is that the whole business feels anonymous and intangible. As Sarah Armstrong-Smith, Microsoft’s chief security advisor puts it: “Cybercrime and fraud is kind of an invisible world.”
With most online threats going through private accounts, it’s not as easy to visualise the impact they have. However, with the rise of deepfakes, businesses are hyper-aware of how this advanced technology can damage their reputation as well as their bottom line.
Year of elections: a deepfake threat on politics and business
While social media giants are taking steps to tackle misinformation, and businesses can choose to invest more and raise awareness to counter scams, what can AI firms do to mitigate the risk?
TI joined a round table with Simon McDougall, CCO of software firm ZoomInfo, Jadee Hanson, CISO of Vanta, and Microsoft’s Armstrong-Smith, to discuss the threat AI poses and how responsible software developers can act.
The risk of AI
AI has supersized phishing emails, putting even discerning firms at risk. Hanson spells out the situation as she sees it now.
“It’s gone from getting a kooky email, where everything is spelled wrong, to now leveraging AI to understand all aspects of an organisation to carefully curate a perfect email that will get people to click on a link and download malware,” she says.
With the help of sophisticated large language models, AI’s conversational ability means it can copy the tone of a specific person or employee. Worse, it can copy the voice and face of any team member.
“There are services out there that will detect if something has been AI-generated,” McDougall points out.
These services use AI to determine whether something is a deepfake. The issue is that deepfake generators and AI detectors will always be learning to outdo each other, creating a constant, inherent tension.
Web scraping vs automated political misinformation
“What worries me more is that in the last year or so, it’s gotten really easy to mass-produce these deepfakes,” says McDougall.
“If you’ve got publicly available video footage of 30 of your middle managers, they can run those all into deepfakes.”
These kinds of advances are exactly why Microsoft’s president, Brad Smith, was determined to carry on developing AI when industry professionals called for a six-month AI-development pause, Armstrong-Smith claims.
“In fact, our president Brad Smith said no, no, no, we need to speed up, we need to get regulators on board, and we need to have these open transparent discussions,” says Armstrong-Smith.
She explains that Microsoft has been developing AI for eight years now: “So this is nothing new.”
What’s most important to focus on now, argues Microsoft, is for developers of the technology to take accountability for what they build.
“Ultimately, it’s still a software programme. It’s being developed by humans, and it’s being trained on data that’s been created with humans,” she says.
Armstrong-Smith says developers need to question how their models are being utilised, and to take charge.
For instance, to curb deepfakes, many accessible AI image generation platforms, such as Microsoft’s own and Adobe’s Firefly, don’t allow the creation of images featuring real people.
However, she explains that AI differs from other software programmes, whose parameters are typically clear and well understood.
“You can see this is what we’ve felt the level of functionality is; this is how we tested, and these are the outcomes we’re expecting,” she says. However, with GenAI, those parameters are vast, and there is an amplified demand for more control over what the software produces.
“We need to have some sort of duty of care. Diligence and compliance become even more important in this realm,” she says. “We have to make sure we’re collaborating on all of these things together.”
Armstrong-Smith adds that an industry-wide approach to harmful AI is also needed. “For all the companies who are trying to do the right thing and have ethical processes and responsible AI, there’s a second group that isn’t, and we have to be mindful of that and the consequences and how we’re going to deal with that as an industry.”