July 18, 2024

Why HNWs are right to be worried about the threat of deepfakes

Deepfakes present a wide array of threats to HNWs, and as AI becomes more advanced, so do the associated risks. Spear's experts on how to mitigate the growing threat - and what to do if you are a victim

By Suzanne Elliott and Emily Formstone

Deepfakes – digital forgeries created using advanced artificial intelligence (AI) to manipulate images, audio, and video – pose a growing risk to high-net-worth individuals (HNWs).

Deepfakes can depict scenarios that never happened, such as meetings, events, or locations. Video deepfakes can create particularly persuasive false narratives. 

As artificial intelligence becomes more advanced, so do the threats associated with deepfakes. These fabricated media can be used to blackmail, humiliate and exploit HNWs, as well as to severely damage personal and professional reputations.


[See also: Why HNWs must act now to fight the rise of disinformation]

A recent report in The Times found fraudsters were increasingly targeting the chief executives of high-profile companies, with at least five FTSE 100 firms among the victims of these attacks. Targets included Octopus Energy, discoverIE and WPP, whose chief executive's voice was cloned on a company call.

Spear’s 500 reputation and privacy lawyers set out the risks of deepfakes to HNWs, and what to do if you are a victim.

Deepfakes and HNWs: a growing risk 

Gideon Benaim, a Top Flight Spear’s reputation and privacy lawyer and head of Simkins’ reputation protection team, says his HNW clients are increasingly worried about the potential for deepfakes to be used against them in damaging ways.


Benaim says Simkins’ clients have encountered a range of deepfake-related issues, including fabricated instances of nudity and sexual activities, as well as fake meetings designed to suggest wrongdoing or hypocrisy. 


Mark Jones, partner at Payne Hicks Beach, tells Spear’s: ‘As AI becomes ever more sophisticated, so do the threats. Deepfakes can be used to blackmail, humiliate and exploit (U)HNWs.’

He warns that the ‘relative ease with which deepfakes can be created means that those creating deepfakes require very little skill or equipment’.

Jones says PHB is currently advising an individual targeted by sexually explicit deepfakes; on the fraud side, clients have been tricked into handing over money through false investments or blackmail.

[See also: How should UHNWs protect against the risks of AI?]

A notable case in Hong Kong involved an employee duped into paying $25 million to fraudsters using a deepfake of the company’s CEO, he says. 

David Engel, a Spear’s Top Flight lawyer and partner at Addleshaw Goddard, says deepfakes present UHNWs with an additional layer of risk, as they can affect both their personal and business interests and pose a significant reputational threat.

And, perhaps contrary to what many may think, it is younger people who are statistically more likely to fall for online scams, Engel says.

Amy Bradbury, a partner, and Lucy Rice-Mellor, an associate, in the reputation management team at Harbottle & Lewis, say they have also seen a rise in deepfakes being deployed in litigation or disputes. ‘Fake documentation and witness evidence utilised in the context of litigation has the potential to more broadly affect our justice system,’ they say.

The rising problem of deepfakes 

The problem of deepfakes is expected to grow, both in scale and sophistication. As the technology advances, identifying fake media will become increasingly challenging. The rapid dissemination of information online means that individuals targeted by deepfakes have little time to respond before significant damage is done. 

Jones says the threat of deepfakes is significant and difficult to fully quantify due to its pervasive nature across personal and business aspects of life.

In February 2024, Meta’s president of global affairs, Nick Clegg, stated that identifying AI-generated content is not yet possible with certainty, underscoring the gravity of the threat. The incidence of deepfake fraudulent identity certification, such as logging into banking apps, has increased by 3,000% in the past year, Jones notes.

‘Deepfakes are likely to become a part of daily life as AI advances. With that comes a need for heightened security and awareness of the threats,’ says Jones. 

Deepfakes are also being used to facilitate sexual abuse and harm, with some tragic cases resulting in victims taking their own lives. As AI advances, deepfakes are likely to become more common, necessitating heightened security and awareness of these threats.

How to take preventative measures against deepfake scams

Preventing deepfakes before they emerge is unrealistic, say Spear’s experts, as awareness typically follows the creation and dissemination of the fake content. 

Nevertheless, a ‘prevention-is-better-than-the-cure’ approach is usually preferable where possible, Bradbury and Rice-Mellor suggest.

‘There is progress being made in technology which can detect deepfakes,’ they say.

‘This is likely to be useful on social media or for open-source material. Deepfakes are not always deployed in the public domain, however, and if used within an organisation, family office or place of employment, they may be harder to detect. Continuous monitoring, user awareness and training, including tools in place to authenticate unusual requests, can be useful in preventing deepfake attacks.’


But there are ways to reduce your risk, Spear’s experts agree. 

These include staying informed about AI developments and limiting the amount of personal data shared publicly, such as photos and videos, which reduces the risk of being targeted by deepfakes. Jones suggests watermarking photographs can also deter deepfake creators.
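
As a rough illustration of the kind of visible watermarking Jones describes, the sketch below stamps text onto a photograph using the open-source Pillow imaging library; the file names and watermark text are placeholders rather than anything recommended by the experts quoted here.

```python
# Illustrative sketch only: overlay a visible text watermark on a photo
# using the Pillow imaging library. File paths and the watermark text
# below are placeholders.
from PIL import Image, ImageDraw, ImageFont

def watermark_photo(src_path: str, dst_path: str, text: str = "© Example Owner") -> None:
    """Stamp semi-transparent text near the bottom-right corner of an image."""
    base = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    font = ImageFont.load_default()  # swap in a TrueType font for a larger mark
    # Measure the text so it can be placed with a small margin from the corner.
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    text_w, text_h = right - left, bottom - top
    position = (base.width - text_w - 20, base.height - text_h - 20)

    # White text at roughly 50% opacity: visible but not obtrusive.
    draw.text(position, text, font=font, fill=(255, 255, 255, 128))

    watermarked = Image.alpha_composite(base, overlay)
    watermarked.convert("RGB").save(dst_path, "JPEG")

if __name__ == "__main__":
    watermark_photo("holiday_photo.jpg", "holiday_photo_marked.jpg")
```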

[See also: The best security, intelligence & investigations advisers for high-net-worth individuals in 2024]

HNWs can protect themselves by employing sophisticated monitoring solutions to detect deepfakes early. Rapid response is crucial to prevent these images from spreading, while a combination of technical, communication and legal strategies is essential to remove or discredit deepfake content swiftly.

Implementing multi-factor authentication beyond banking transactions to other accounts can prevent unauthorised access to personal data, Jones adds.
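
For illustration, the sketch below shows one common form of multi-factor authentication, a time-based one-time password (TOTP) of the kind generated by authenticator apps, using the pyotp library; the account name and issuer are placeholders.

```python
# Illustrative sketch only: time-based one-time passwords (TOTP), the mechanism
# behind many authenticator-app second factors, using the pyotp library.
# The account name and issuer below are placeholders.
import pyotp

# Generate and store a per-account secret once, at enrolment time.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The provisioning URI can be rendered as a QR code for an authenticator app.
uri = totp.provisioning_uri(name="user@example.com", issuer_name="Example Family Office")
print(uri)

# At login, the service checks the six-digit code the user types in.
code = input("Enter the code from your authenticator app: ")
print("Access granted" if totp.verify(code) else "Access denied")
```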

But, as Benaim tells Spear’s, ‘there is no realistic way to prevent’ deepfakes once they are out in the digital world.

‘Clients will need to have ever more sophisticated monitoring solutions for such issues and react extremely quickly to ensure that such images don’t move from low-profile, questionable platforms to more reputable ones,’ he says.

‘A mixture of technical, communications and legal interventions and action will help to ensure that the images in question are removed or discredited swiftly.’

Responding to a deepfake attack

‘People will find it more difficult to be able to identify a fake and, given the speed at which stories spread, time is not on the side of individuals who have been deepfaked,’ Benaim says.

Acting as quickly as you can, though, is crucial: if individuals suspect they have been targeted by a deepfake attack, they should seek advice from crisis management experts specialising in technical, communications, and legal fields. 

‘The first thing to do is not to panic. Pause, take time to consider the position and seek legal advice. As a practical step, do not be tempted to wipe any device or delete messages as they may contain vital information,’ advises Jones. 

Professionals can provide the necessary support to mitigate the impact of the attack, ensuring that the false content is addressed promptly and effectively.

[See also: Defuse’s Philip Grindell on helping security-conscious HNWs with cybercrime]

Engel says: ‘Get expert assistance – from your lawyers, communications advisers, cyber specialists and/or investigative service providers. They may all have a role to play.’

If a criminal offence has been committed, report it to the police. But even if there is no criminality involved, ‘the civil law will very likely be on your side – some or all of the laws of defamation, privacy, data protection and confidentiality may be in play’, he says.

‘Depending on how much you want to spend, you should be able to get the offending material taken down and obtain compensation for your costs and for reputational damage and distress.’ 

As the technology behind deepfakes continues to evolve, so too must the strategies employed to combat them. In the meantime, HNWs must stay vigilant to the risks.
