Deepfakes are seemingly legitimate audio and video clips created using artificial intelligence. Typically, scammers feed large datasets of images, video and audio into software that replicates the voice and appearance of a well-known person, then make that ‘person’ say and do whatever they like before posting the results online.
Already, this technology has been used for attempted financial fraud, with consumer finance expert Martin Lewis the target of one recent impersonation attempt.
“Musk’s new project opens up new opportunities for British citizens. No project has ever given such opportunities to residents of the country”, the simulated Lewis says in the fraudulent clip, which was widely shared on social media. The video attempts to convince viewers that Lewis endorses a supposed new investment opportunity from Elon Musk - but no such scheme exists, and Lewis himself has hit out at the risks the technology poses.
“These people are trying to pervert and destroy my reputation in order to steal money off vulnerable people, and frankly it is disgraceful, and people are going to lose money and people’s mental health are going to be affected,” he told the BBC.
Email phishing is nothing new - scammers have long sent emails purporting to be from a genuine source, such as a bank, technology provider or government department. These emails often attempt to direct you to a website designed to steal your bank details or other personal information.
But AI has revolutionised how scammers produce the emails used to lure you in. ChatGPT and other generative AI tools can easily create bodies of text that mimic the tone and coherence of legitimate messages, meaning misspellings, clumsy grammar and other tell-tale signs of a fake email are harder to spot. Such software is freely available, and although ChatGPT has some built-in safeguards designed to prevent misuse, these can be easily subverted.
Voice cloning is another form of deepfake AI, but rather than producing a video, it replicates the sound and speech patterns of an individual’s voice - ideal for convincing someone they’re having a genuine phone conversation with that person.
In one noteworthy example, a mother in the US received a call that appeared to be from her daughter, in distress and asking for money. It soon transpired that the call was fake, and that the daughter’s voice had been cloned using AI.
“I never doubted for one second it was her. That’s the freaky part that really got me to my core,” she told local news outlet AZFamily.
And according to tech security firm McAfee, it only takes three seconds of audio for scammers to put together a convincing AI voice clone.
Of the 7,000 people surveyed by the company, one in four said they had experienced an AI voice cloning scam or knew someone who had. Of those who reported losing money, 36% said they lost between $500 and $3,000, while 7% were taken for between $5,000 and $15,000.
We’ve all become accustomed to using passwords, passkeys and biometrics to access our phones and banking apps. Even when setting up an account with a digital-first bank like Monzo, you have to send a video of yourself saying a certain phrase.
But AI can be used to subvert these security checks, says Jeremy Asher, consultant regulatory solicitor at law firm Setfords - something that represents a “huge risk” for consumers and institutions alike.
“Fake videos and photographs of people who do not exist, yet appear to look like they have authority, are being generated with the use of AI. This ‘evidence’ is then used to pass identity and security checks, which can lead to a whole eruption of danger. Bank accounts can be accessed, transfers can be authorised, even fake assets are being created to secure financial loans,” he says.
What makes many AI-based scams so dangerous is that they are far harder to spot than conventional ones. Thanks to advances in the technology, fake emails look more genuine, while voice cloning and deepfake videos continue to improve.
“Distinguishing a deepfake video from a genuine one can be quite challenging due to the sophistication of the technology used to create them,” says Louise Cockburn, information security awareness and culture manager at wealth manager Quilter. But that doesn’t mean AI scams are impossible to detect.
Take the recent Martin Lewis deepfake, for example - did you spot the inconsistencies, the clunky language or the robotic facial movements? The video contained a number of giveaways that are common to many deepfakes; here’s a rundown:
Pay attention to the face
Faces are notoriously hard for computers to get right, but humans are really good at noticing small issues or inconsistencies. If you’re watching a video you suspect could be a deepfake, look at the face and ask:
- Does the skin appear to be too smooth or overly wrinkly?
- Does the person almost appear to get older and younger throughout the clip?
- Does the facial hair move in line with the jaw?
- Does the person blink too much or too little?
- Do the lips match what’s being said?
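The blink check in particular lends itself to rough automation. Below is a minimal sketch - assuming Python with the OpenCV and MediaPipe libraries installed - that estimates blinks per minute from a video file. The eye landmark indices are MediaPipe’s standard face-mesh points, but the 0.21 eye-aspect-ratio threshold and the clip.mp4 file name are illustrative assumptions rather than tuned values; an unusual blink rate is a weak signal, not proof of a fake.

```python
# Rough sketch: estimate blinks per minute from a video clip.
# Assumes: pip install opencv-python mediapipe
import cv2
import mediapipe as mp

# Left-eye landmarks in MediaPipe's 468-point face mesh:
# 159/145 are the upper/lower eyelid, 33/133 the eye corners.
TOP, BOTTOM, LEFT, RIGHT = 159, 145, 33, 133
EAR_THRESHOLD = 0.21  # illustrative assumption: below this, treat the eye as closed

def blink_rate(path: str) -> float:
    """Return estimated blinks per minute for the face in a video file."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    blinks, closed, frames = 0, False, 0
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue  # no face detected in this frame
            lm = result.multi_face_landmarks[0].landmark
            # Eye aspect ratio: vertical opening relative to eye width.
            ear = abs(lm[TOP].y - lm[BOTTOM].y) / (abs(lm[LEFT].x - lm[RIGHT].x) + 1e-6)
            if ear < EAR_THRESHOLD and not closed:
                blinks, closed = blinks + 1, True  # count the moment the eye closes
            elif ear >= EAR_THRESHOLD:
                closed = False
    cap.release()
    return 0.0 if frames == 0 else blinks / (frames / fps) * 60

# Humans typically blink around 15-20 times a minute; a rate far outside
# that range is one (weak) signal worth a closer look.
print(blink_rate("clip.mp4"))  # "clip.mp4" is a hypothetical file name
```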
Spot discrepancies in the environment
As with faces, it is tricky to digitally construct an environment where the lighting conditions remain consistent and realistic.
Ask:
- Are shadows appearing in places where you would expect them to?
- Is the lighting consistent?
- Do reflections appear in any glasses?
- Does the lighting shift as the subject moves?
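The lighting questions can also be crudely approximated in code. Here’s a minimal sketch, again assuming Python with OpenCV, that tracks the mean brightness of each frame and flags abrupt jumps; the 15-unit threshold and the file name are arbitrary assumptions, and genuine lighting analysis (shadows, reflections, light direction) is far more involved.

```python
# Rough sketch: flag frames where overall brightness jumps abruptly,
# a crude stand-in for checking lighting consistency. Assumes opencv-python.
import cv2

def brightness_jumps(path: str, threshold: float = 15.0) -> list[int]:
    """Return indices of frames whose mean brightness jumps sharply."""
    cap = cv2.VideoCapture(path)
    jumps, prev, i = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Mean grayscale pixel value (0-255) as a crude brightness proxy.
        mean = float(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean())
        if prev is not None and abs(mean - prev) > threshold:
            jumps.append(i)
        prev, i = mean, i + 1
    cap.release()
    return jumps

print(brightness_jumps("clip.mp4"))  # hypothetical file name
```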
Use common sense
Many on social media were quick to discredit the Martin Lewis deepfake, noting that he does not talk about investments, nor would he use his platform to encourage people to invest their money in a certain way. So, if you watch a video featuring a famous face discussing something they wouldn’t usually talk about, take the clip with a pinch of salt.
“If a video seems suspicious or out of character for the person featured, it’s always worth further investigation,” Cockburn adds.
Broadly, AI will make it “much more complicated to spot scams,” says Matthew Berzinski, senior director at identity management firm ForgeRock, necessitating a vigilant approach from consumers.
“Remember, never provide any security information, agree to take any action, or click on a link via a communication originated ‘from your bank’. If you need to take action on your account, either call them directly on their support line, or go directly to their website where you can be sure you are speaking directly to them,” he says.