February 24, 2025

Deepfake fraud jumps by more than 2,000 per cent in three years

Businesses are urged to adopt advanced security measures to combat an alarming rise in AI-driven identity fraud

By Suzanne Elliott

Evolving AI-based techniques are posing new security challenges to financial institutions, with deepfake fraud rising by 2,137 per cent in just three years, according to a new report.

Data from Signicat’s The Battle Against AI-Driven Identity Fraud report lays bare the risks that deepfakes, which use artificial intelligence to generate highly realistic digital forgeries, pose to businesses, and underlines the need for enhanced fraud prevention measures.

Signicat’s report revealed that 42.5 per cent of fraud attempts in the financial sector now involve AI. Just three years ago, deepfake-related fraud was not even among the top three digital identity threats. Today, it has become the most prevalent form of digital identity fraud, with deepfake forgeries becoming both more widespread and harder to detect.

[See also: Why HNWs are right to be worried about the threat of deepfakes]

The study, conducted by independent research firm Censuswide in collaboration with Signicat, surveyed more than 1,200 respondents from the financial and payments sectors across seven European countries, including the UK. It found that account takeover fraud is the most common form of attack customers face, followed by card payment fraud and phishing, further highlighting the growing challenge of digital identity theft.

It is the first report to focus on AI-driven identity fraud, and it reveals that deepfake technology is now among the top three most prevalent forms of fraud in the financial and payments industry in Europe.

As deepfake technology continues to evolve, businesses must take proactive steps to secure their operations and protect customers from increasingly sophisticated digital threats, the report’s authors said.

Understanding deepfake fraud

Deepfake fraud manifests primarily through two key attack methods, illustrated in the simplified sketch that follows this list:

  • Presentation attacks: These include fraudsters using masks and makeup to impersonate others, as well as displaying deepfake videos in real-time via another screen during fraudulent transactions such as account takeovers or loan applications.
  • Injection attacks: These involve malware or manipulated inputs being inserted into a system to compromise its integrity. Such attacks often use pre-recorded deepfake videos during onboarding or Know Your Customer (KYC) verification processes in banks, fintech firms, and telecommunications companies.
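For illustration only, and not drawn from Signicat’s report, the sketch below shows how an onboarding flow might treat the two vectors differently: presentation attacks are screened with a liveness score on the captured frames plus a randomised challenge, while injection attacks are screened by checking that the video genuinely came from an attested live camera capture rather than a pre-recorded or synthetic feed. All names, fields and thresholds here are hypothetical.

```python
# Hypothetical illustration only: a toy onboarding check that screens the two
# deepfake attack vectors described above. The dataclass fields, thresholds and
# decision rules are invented for clarity; they are not Signicat's implementation.
from dataclasses import dataclass


@dataclass
class VerificationSession:
    liveness_score: float    # 0.0-1.0 from a face liveness model (presentation attacks)
    capture_attested: bool   # True if the video came from an attested live camera capture
    challenge_passed: bool   # True if the user completed a random head-turn/blink challenge


def screen_session(session: VerificationSession) -> str:
    """Return 'pass', 'review' or 'reject' for a KYC video session."""
    # Injection attacks: pre-recorded or synthetic streams injected into the
    # pipeline usually cannot attest the capture device or follow a live,
    # randomised challenge.
    if not session.capture_attested or not session.challenge_passed:
        return "reject"

    # Presentation attacks: masks, make-up or replayed deepfake video shown to
    # the camera tend to depress the liveness score.
    if session.liveness_score < 0.5:
        return "reject"
    if session.liveness_score < 0.8:
        return "review"  # borderline cases go to manual review
    return "pass"


if __name__ == "__main__":
    print(screen_session(VerificationSession(0.92, True, True)))   # pass
    print(screen_session(VerificationSession(0.95, False, True)))  # reject (likely injection)
    print(screen_session(VerificationSession(0.40, True, True)))   # reject (likely presentation)
```

The point of the toy example is that the two vectors fail different checks: a convincing deepfake delivered over an unattested stream can score perfectly on liveness yet still be caught by the capture check, and vice versa.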

As these techniques become increasingly sophisticated, traditional fraud detection systems are struggling to keep pace with the evolving threat landscape.

[See also: Warren Buffett slams deepfakes after he’s targeted in US presidential campaign posts]

Delayed adoption of fraud detection tools

Despite the sharp rise in AI-driven fraud, including deepfake-related scams, only 22 per cent of financial institutions have adopted AI-based fraud prevention solutions. This delay leaves many organisations vulnerable to sophisticated cyberattacks.

[See also: The best reputation and privacy lawyers]

'Three years ago, deepfake attacks accounted for just 0.1 per cent of all fraud attempts we detected. Today, they make up approximately 6.5 per cent, or 1 in 15 cases. This marks an astonishing 2,137 per cent increase,' explained Pinar Alpay, chief product and marketing officer at Signicat. 'Fraudsters are leveraging AI-based techniques that traditional security systems are unable to fully detect. Companies must invest in advanced detection mechanisms combining AI, biometrics, and identity verification to stay ahead of these threats.'

Alpay emphasises that a multi-layered approach is essential. 'By integrating early risk assessments, robust identity verification, facial biometrics, and continuous monitoring, companies can enhance their fraud prevention capabilities. The key lies in orchestrating these tools effectively for optimal protection.'
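Alpay’s orchestration point can be pictured as a pipeline in which no single signal decides the outcome. The minimal sketch below is our own illustration, not Signicat’s product: stand-in functions for early risk assessment, identity verification, facial biometrics and continuous monitoring each return a score, and the weakest signal determines whether an application is approved, routed to manual review or rejected. Layer names, scores and thresholds are assumptions made for the example.

```python
# Hypothetical sketch of a multi-layered ("orchestrated") fraud-prevention flow.
# Each layer is a stand-in function returning a score in [0, 1]; the layers,
# thresholds and example signals are illustrative assumptions, not a real product.
from typing import Callable, Dict

Layer = Callable[[dict], float]


def early_risk_assessment(applicant: dict) -> float:
    # e.g. device reputation, IP/geolocation, velocity checks
    return 0.9 if applicant.get("known_device") else 0.5


def identity_verification(applicant: dict) -> float:
    # e.g. document authenticity and registry lookups
    return 1.0 if applicant.get("document_valid") else 0.0


def facial_biometrics(applicant: dict) -> float:
    # e.g. selfie-to-document face match plus liveness
    return applicant.get("face_match_score", 0.0)


def continuous_monitoring(applicant: dict) -> float:
    # e.g. post-onboarding behavioural signals
    return applicant.get("behaviour_score", 0.7)


LAYERS: Dict[str, Layer] = {
    "risk": early_risk_assessment,
    "identity": identity_verification,
    "biometrics": facial_biometrics,
    "monitoring": continuous_monitoring,
}


def orchestrate(applicant: dict) -> str:
    scores = {name: layer(applicant) for name, layer in LAYERS.items()}
    weakest = min(scores.values())
    if weakest < 0.3:   # any layer strongly negative -> block
        return "reject"
    if weakest < 0.7:   # layers disagree -> human review
        return "review"
    return "approve"


if __name__ == "__main__":
    print(orchestrate({"known_device": True, "document_valid": True,
                       "face_match_score": 0.95, "behaviour_score": 0.8}))  # approve
```

The design choice worth noting is the weakest-link rule: a strong biometric match cannot override a failed identity check, which mirrors the layered approach Alpay describes.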

Strengthening cybersecurity measures

The sharp rise in deepfake fraud is part of a broader trend of AI-driven financial crime, the report highlighted. Cybercriminals are increasingly exploiting advanced technologies to infiltrate financial systems, making it imperative for organisations to upgrade their security infrastructure.

To combat these evolving threats, financial institutions should consider the following preventive actions:

  • Enhance fraud detection systems with AI-driven technologies
  • Educate employees and customers on emerging fraud tactics
  • Invest in AI-based fraud prevention solutions to stay ahead of cybercriminals
