The debate about the risks of AI has largely focused on security threats and deepfakes, and more recently the dulcet tones of Scarlett Johansson. But what about its capacity to turn our current use of search tools (for the most part via Google) on its head? Anyone who has used ChatGPT-4 will understand that, as a means of providing information, it makes a traditional Google search look rudimentary.
If AI is going to dominate our online information tools, what impact will that have on our reputations, our data and our privacy rights? During the mid-2000s, defamation and privacy lawyers like me found our attention diverted away from cases against newspapers and, less frequently, broadcasters, towards the US corporates providing search engines and social media platforms.
Right to be forgotten
Google’s near-complete domination of the search engine space meant that the first page of a Google search result became the key battleground for brands and personal reputation. Following a landmark judgment in 2014 (the Google Spain case), Google had to establish a mechanism to allow the removal of information in accordance with data protection principles, including the erasure of old, irrelevant or inaccurate information, giving effect to a ‘right to be forgotten’. Lawyers and clients alike used this tool to ask Google to remove unwanted links from search results, which could significantly change how their reputations were reflected in almost all searches of their name.
Does this experience of Google help us when generative AI searches are the future? AI models are trained on a vast range of text from the internet and other publicly accessible sources. They do not store this material as records in a traditional database; instead, they encode complex patterns learned during training. Once trained, a model retains no record of where it learned its information, which means it is not usually possible to locate and remove a specific fact or detail (eg an incorrect claim about a high-profile, public individual). However, AI providers, like Google, are data controllers, and clients should be able to rely on the same rights established in the Google Spain case to regulate search results.
Furthermore, by managing your online profile by traditional means, including via Google/Bing removals, you may be helping to improve generative AI outputs, which rely on online sources for their responses. Likewise, clients should ensure accurate information about them is accessible on the internet: absent any online source, generative AI responses can contain dangerous ‘hallucinations’, fabricated text generated simply to provide an answer.
Data rights
What about the data rights of the users of AI services, rather than the subjects of searches? ChatGPT, like most online services, publishes extensive information on how data is used. OpenAI asserts that it does not use people’s data to sell its services, serve advertising or build profiles of them, and it provides settings to manage data retention. However, data is used to ‘train’ the AI models to make the product more ‘helpful’. How this works in practice remains to be seen, and little is known about the inner workings of OpenAI, the organisation that develops ChatGPT, aside from rumours of safety concerns, resignations and NDAs.
On a positive note, in terms of being able to control the accuracy of information, we may see a resurgence of defamation claims if generative AI replaces Google searches. Because they produce original text in their responses, rather than simply providing a list of links to third-party websites (as Google does), ChatGPT and its rivals will struggle to avoid responsibility as ‘publishers’ of defamatory content. When setting up a ChatGPT account you are told to check your facts and warned that the service may not give accurate information, but these warnings are unlikely to absolve OpenAI of legal responsibility for any defamatory information it publishes.
One thing is clear: AI is going to usher in a whole new world in terms of our exposure to information technology. One of the many challenges for lawyers will be to enforce our clients’ rights and protect their reputations in this new world, applying the lessons we have learned over the last decade.
Dominic Crossley is a partner and head of dispute resolution at Payne Hicks Beach, and practises privacy and media law.