AI-generated misinformation and disinformation are set to be the biggest short-term global risks of the year, according to the World Economic Forum. With half of the global population participating in elections this year, misinformation in the form of deepfakes poses a particular danger to democracy. Ahead of the UK General Election, candidates were warned that AI-generated misinformation would circulate, with deepfake video, audio and images being used to troll opponents and fake endorsements.
In recent years, low-cost audio deepfake technology has become widely available and far more convincing. Some AI tools can generate a realistic imitation of a person's voice from only a few minutes of audio, which is easy to obtain for public figures, allowing scammers to create manipulated recordings of almost anyone.
But how real has this threat proven to be? Is the deepfake danger overhyped, or is it flying under the radar?
Deepfakes and disinformation
Deepfakes have long raised concern in social media, politics, and the public sector. But with technology advances making AI-generated voices and images more lifelike than ever, bad actors armed with deepfake tools are now coming for businesses.
In one recent example targeting advertising group WPP, hackers used a combination of deepfake video and voice cloning in an attempt to trick company executives into thinking they were discussing a business venture with peers, with the ultimate goal of extracting money and sensitive information. While unsuccessful, the sophisticated cyberattack shows the vulnerability of high-profile individuals whose details are easily available online.
This echoes the fear that the sheer volume of AI-generated content could make it challenging for consumers to distinguish between authentic and manipulated information: according to Jumio research, 60% admit they have encountered a deepfake within the past year, and 72% worry on a daily basis about being fooled by a deepfake into handing over sensitive information or money. Confronting this challenge demands a transparent discourse that empowers businesses and their end users with the tools to discern and report deepfakes.
Fighting AI with AI
Education alone about how to detect a deepfake is not enough, and IT departments are scrambling to put better policies and systems in place to prevent deepfakes. Fraudsters now use a variety of sophisticated techniques, such as deepfake faces, face morphing and face swapping, to impersonate employees and customers, making it very difficult to tell that the person isn't who you think they are.
Although AI is making fraud more fruitful for cybercriminals, advanced AI can also be the key to not just defending against, but actively countering, deepfake cyber threats. For businesses, ensuring the authenticity of individuals accessing accounts is crucial to preventing fraudulent activities such as account takeovers and unauthorized transactions. Biometric-based verification systems are a game-changer in weeding out deepfake attempts. Using unique biological characteristics like fingerprints and facial recognition to verify consumer identities during logins makes it significantly harder for fraudsters to spoof their way into accounts. Layering these verification systems together using multiple biometric markers makes for an extremely tough account security system to beat, as the sketch below illustrates.
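To make that layering concrete, here is a minimal sketch of score-level fusion, one common way to combine multiple biometric matchers into a single decision. The matcher names, scores, weights and threshold are hypothetical placeholders invented for this example, not any vendor's real configuration; a production system would take its scores from dedicated face, fingerprint or voice matching engines.

```python
# A minimal sketch of layered, multi-modal biometric verification using
# score-level fusion. All names and numbers here are illustrative placeholders.

def fuse_biometric_scores(scores: dict[str, float],
                          weights: dict[str, float],
                          threshold: float = 0.8) -> bool:
    """Accept only if the weighted average of all matcher scores clears the bar."""
    total_weight = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total_weight
    return fused >= threshold

# Example: a strong face match paired with a weaker fingerprint match.
scores = {"face": 0.93, "fingerprint": 0.71}
weights = {"face": 0.6, "fingerprint": 0.4}
print(fuse_biometric_scores(scores, weights))  # fused score 0.842 -> True
```

Weighting the stronger modality more heavily means that no single weak signal can carry, or sink, the decision on its own, which is the point of layering.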
But that's not all. AI can step up the game even further, detecting fraudulent activity in real time through predictive analytics. Picture machine learning algorithms sifting through mountains of data and picking out unusual patterns that might indicate fraud. These AI systems act like watchdogs, constantly learning how fraudsters behave compared with typical, legitimate users. For example, AI can analyze the usage patterns of billions of devices and phone numbers used to log in to critical accounts where personal information is stored, such as email or bank accounts, and flag unusual behavior.
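As an illustration of this kind of watchdog, here is a minimal sketch of unsupervised login anomaly detection. The feature set and synthetic data are invented for the example, and scikit-learn's IsolationForest stands in for whatever model a production fraud platform would actually use.

```python
# A minimal sketch of login anomaly detection: train on typical behavior,
# then flag logins that don't fit the pattern. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per login: [hour_of_day, days_device_has_been_seen, km_from_usual_location]
normal_logins = np.column_stack([
    rng.normal(13, 3, 1000),    # mostly daytime logins
    rng.normal(300, 60, 1000),  # long-lived, familiar devices
    rng.normal(5, 3, 1000),     # close to the user's usual location
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A 3 a.m. login from a brand-new device 4,000 km away should stand out.
suspicious = np.array([[3, 0, 4000]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 a normal login
```

The appeal of this approach is that it needs no labeled fraud examples: the model learns what "normal" looks like and treats anything sufficiently different as worth a closer look.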
For example, when a new user is setting up an account with your business, it is no longer enough to check their ID and let them upload a selfie. You need to be able to detect deepfakes of both the ID and the selfie through real-time identity verification measures. This involves advanced selfie verification and both passive and active liveness detection that can catch spoofing attacks.
To truly prevent deepfakes, the solution must control the selfie-capture process and take a series of images to determine whether the person is physically present and awake. Biometric technology can then compare specific facial features from the selfie, such as the distance between the eyes, nose and ears, against those of the ID photo to ensure they are the same person. The selfie verification step should also offer other biometric checks, such as age estimation, to flag selfies that don't appear to match the data on the ID.
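Modern systems typically perform this selfie-to-ID comparison in a learned embedding space rather than on raw landmark distances; the sketch below shows that variant under stated assumptions. The embeddings are random placeholders standing in for the output of a face recognition model, and the 0.6 threshold is illustrative, not a recommended production value.

```python
# A minimal sketch of the selfie-vs-ID comparison step, assuming embeddings
# have already been extracted by a face recognition model after liveness
# checks have passed. The vectors below are placeholders, not real face data.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(selfie_emb: np.ndarray, id_emb: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Accept the match only if the two face embeddings are close enough.
    Real deployments tune the threshold to a target false-accept rate."""
    return cosine_similarity(selfie_emb, id_emb) >= threshold

# Placeholder 128-dimensional embeddings standing in for model output;
# the "ID" embedding is the selfie embedding plus a little noise.
selfie_emb = np.random.default_rng(1).normal(size=128)
id_emb = selfie_emb + np.random.default_rng(2).normal(scale=0.1, size=128)
print(same_person(selfie_emb, id_emb))  # True: the vectors nearly coincide
```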
The future of deepfakes
For the remainder of 2024 and beyond, the potential of AI-generated disinformation to disrupt democratic processes, tarnish reputations and incite public uncertainty should not be underestimated.
Ultimately, there is no single approach that fully mitigates the threat of deepfakes. The key lesson companies should take from the rise of AI-infused fraud is not to neglect their own use of AI to bolster defenses.
Fighting AI with AI offers businesses their best chance of handling the ever-increasing threat volume and sophistication.
We’ve listed the best identity management software.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro