
In just a few years, AI deep fake technology has taught people to assume that nothing is true until proved otherwise. It’s sad. But it’s an opportunity for the digital identity sector, says Tim Green, MEF’s programme director for ID and Data…

There’s a clip of Tiger Woods that I regularly see on social media. He rolls off a long putt and, before it’s anywhere near the hole, he turns away and shakes hands with his opponent. He just knows that the putt will go in (which it does). He doesn’t even need to look.

I’ve seen this clip at least 10 times. But I’ve only just discovered it’s a fake.

That stunned me. Since AI went mainstream, we’ve all got used to seeing Donald Trump as a talking baby and the like. It’s taught us to be mistrustful of improbable video footage. But this Tiger Woods clip? It is completely plausible.

Which means that we are now entering a new phase in human trust. For 100 years or so, we basically accepted that what was caught on camera was (most likely) true.

Now? Believe nothing till convinced otherwise.

For boomers like me, this is hard to accept. It represents a tragic chipping away at the world we used to know – a world where trust in authority, institutions and facts was (most of the time) assumed. 

It’s different for younger people. Indeed, new research reveals they live in a world of what we might call ‘default distrust.’

Jumio, an identity verification specialist, has just released its 2025 Jumio Online Identity Study. It features insights from 8,100 consumers across the US, UK, Singapore, and Mexico. Headline conclusion? Trust in digital life is crumbling under the weight of synthetic identities, deepfake videos, and botnet-driven account takeovers.

Here are some headline stats:

  • 69 percent of respondents say AI-powered fraud now poses a greater threat to personal security than traditional forms of identity theft
  • 99 percent are more skeptical of online content than they were last year
  • Only 37 percent believe most social media accounts are authentic
  • 76 percent are worried about fake digital IDs generated with AI
  • 75 percent are worried about scam emails using AI to trick people into giving away passwords or money 
  • 74 percent are worried about video and voice deepfakes 
  • 72 percent are worried about being fooled by manipulated social media content 

Just as interesting is the question of what people plan to do about this tidal wave of threats. 93 percent believe they have to defend themselves, rather than rely on government agencies (85 percent) or Big Tech (88 percent).

Not that they want to. When asked who should be most responsible for stopping AI-powered fraud, 43 percent cited Big Tech, compared to just 18 percent who chose themselves.

Of course, while this research is consumer-focused, the AI deep fake threat impacts organisations too. Numerous examples show how criminals use voice and video simulations to trick employees into revealing sensitive information. And new fraud tech is constantly accelerating the threat. Case in point? Native virtual camera attacks that take over a person’s phone/laptop camera and inject synthetic content into it. The user believes the fake video to be a genuine camera stream.

The loss of trust is regrettable, and it seems like ‘default distrust’ is here forever. There’s no going back. But it doesn’t mean the fraudsters have won. It merely means that the digital identity industry has to develop AI-resilient technology that verifies liveness. In other words, some combination of unique digital credentials combined with biometric tests. 
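To make that idea concrete, here is a minimal sketch of what such a check might look like in principle. Everything in it is an illustrative assumption — the function names, the `turn_head_left` challenge, and the 0.90 match threshold are hypothetical, not any vendor’s actual API — but it shows the core pattern: a fresh server challenge, a device-bound credential, and an active biometric test must all pass together.

```python
import hmac
import hashlib
import os
import time

# Hypothetical liveness-verification sketch (illustrative only).
# Three factors are checked together: the challenge is fresh (hard to
# replay pre-recorded video), the response is signed with a device-bound
# key (a unique digital credential), and the user performed a random
# action with a high enough biometric match score (a liveness test).

BIOMETRIC_THRESHOLD = 0.90   # assumed minimum face-match confidence
CHALLENGE_TTL = 30           # seconds before a challenge expires

def issue_challenge():
    """Server side: a random nonce plus an action the user must perform."""
    return {
        "nonce": os.urandom(16).hex(),
        "action": "turn_head_left",   # would be randomised in practice
        "issued_at": time.time(),
    }

def sign_response(device_key: bytes, challenge: dict) -> str:
    """Client side: prove possession of the device-bound credential."""
    return hmac.new(device_key, challenge["nonce"].encode(),
                    hashlib.sha256).hexdigest()

def verify(device_key: bytes, challenge: dict, signature: str,
           match_score: float, action_performed: str) -> bool:
    """Server side: all three factors must hold at once."""
    fresh = time.time() - challenge["issued_at"] < CHALLENGE_TTL
    credential_ok = hmac.compare_digest(
        signature, sign_response(device_key, challenge))
    liveness_ok = (action_performed == challenge["action"]
                   and match_score >= BIOMETRIC_THRESHOLD)
    return fresh and credential_ok and liveness_ok

key = b"device-secret"
ch = issue_challenge()
sig = sign_response(key, ch)
print(verify(key, ch, sig, match_score=0.97,
             action_performed="turn_head_left"))  # True
print(verify(key, ch, sig, match_score=0.42,
             action_performed="turn_head_left"))  # False
```

The point of combining the factors is that a deepfake (even an injected virtual-camera stream) defeats none of them on its own: it still needs the device key, and it still needs to perform the right randomised action within the challenge window.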

It won’t be easy. The tech will need to be robust yet also user friendly. And the fraudsters will constantly attack it. We can’t watch Tiger Woods clips with the childlike credulity of before. But progress is being made. 


Find out more about the themes discussed – join the MEF ID & Data Interest Group.

Tim Green

MEF Programme Director, ID and Data 

