March 30, 2026

The growing threat AI chatbots pose to your reputation

As AI chatbots become more sophisticated, public figures need to treat them as a serious reputational risk, experts say

By Christian Maddock

For those in the public eye, one’s online presence is often as important as real-life interactions when it comes to cultivating a strong reputation. While social media platforms and search engines such as Google have dominated access to information about public figures over the past decade, AI chatbots have opened a new frontier in the efforts of high-profile high-net-worth individuals (HNWs) to protect their reputations.

These large language models (LLMs) are highly advanced AI systems trained to understand, generate and summarise large swathes of information. ChatGPT, arguably the best-known LLM-powered chatbot, had 800 million monthly active users in April 2025, double the number recorded in February of the same year, according to data from research firm Demandsage. Grok, the AI chatbot owned by the world’s richest man, Elon Musk, saw its monthly users almost quadruple from February to March 2025, from 51.5 million to 190 million.

Unlike Google and other traditional search engines, which use relevance, quality and context to determine which web pages and articles appear at the top of results, LLM search outputs are more complex. These chatbots interpret an enquiry, conduct real-time web searches, extract passages from online articles and summarise the material in an easily consumable format.

[See also: The best reputation managers in 2026]

AI has made it easier than ever to access information about the personal lives and professional histories of HNWs in the public eye and senior business leaders.

It is no longer straightforward to maintain a discreet public profile in the age of AI, argues specialist crisis support lawyer Alex Just of Forward Global. What was once a matter of addressing individual articles carrying potentially negative information now requires a far broader strategy.

‘You have to shift from secrecy to strategic visibility,’ Just says. ‘You cannot simply delete the bad – you must dilute it with high-quality, authentic thought leadership that owns the primary search results.’

He adds: ‘Regularly engaging with journalists in the background is essential. You need a trusted network in relevant media markets who understand the nuance of your activities, ensuring that the data feeding these algorithms is accurate at the source.’


While a carefully constructed media presence can benefit HNWs, engaging in the public sphere can also have unintended consequences when it comes to LLMs.

Brooklyn Peltz Beckham, the eldest child of football star Sir David Beckham and former Spice Girl and fashion designer Lady Victoria Beckham, took to Instagram on 19 January to speak publicly about his strained relationship with his parents and what he described as their treatment of his wife, Nicola Peltz Beckham. When ChatGPT was asked to ‘tell me about Brooklyn Peltz Beckham’, it outlined details of his early life and career before addressing these recent developments.

The AI chatbot said: ‘Brooklyn has been in a very public family dispute with his parents … he said he did not wish to reconcile with them, and described feeling humiliated by incidents at his 2022 wedding, including alleging his mother’s interference with the first dance.’

When asked to ‘tell me about Victoria Beckham’, who has not publicly commented on the alleged rift with her son, ChatGPT’s response differed. The chatbot did not include any information about the ongoing tensions, instead summarising her career, personal life and legacy.

[See also: Reputation in the age of AI]

While the information in these LLM responses was sourced from reputable outlets such as Reuters and the Guardian, misinformation can also spread in the AI sphere, says Ben Ullmann, the chief executive of reputation management firm Sanctuary Counsel.

‘You can ask for well-sourced notes and receive something that looks convincing, complete with links,’ Ullmann says. ‘But when you check the sources, hallucinations and misattribution are common. Results can also change from one search to the next.’

Ullmann adds that he is concerned about the confidence with which AI chatbots make certain claims, particularly since many remain prone to factual errors.

‘The ease with which LLMs confidently make assertions does concern us,’ he says. ‘From a reputation perspective, that matters and could have adverse consequences. When a media outlet makes a mistake, there are established routes for redress. With LLMs, it is not yet clear how or even if those safeguards will work.’

Another issue is the ability of individuals to influence the spread of misinformation, says reputation manager Henry Sands, the founder and managing director of SABI Strategy Group.

‘We are increasingly seeing coverage pop up specifically produced with the intention of damaging a reputation, knowing it will be picked up by AI coverage,’ he tells Spear’s. 

Reddit, a social media platform that functions as an online forum where users share content and opinions within subject-specific communities known as subreddits, has also influenced AI search results. With a decentralised moderation system often overseen by volunteers, the quality of information varies widely between communities, meaning misinformation and exaggeration can spread. OpenAI, the creator of ChatGPT, has signed a deal with Reddit that allows its models to draw on the site’s data, so Reddit content continues to shape AI-generated results.

To counter such tactics, proactively disseminating accurate information can help shape more positive AI chatbot results, Sands argues.

‘We need to protect clients from this and ensure that the coverage they put out does not just disappear into the ether but is reinforced with the correct tools to get the appropriate traction,’ he says.

[See also: As Keir Starmer weighs curbs on children’s access to AI, experts urge schools to embrace it cautiously]
