The capabilities and potential risks of Artificial Intelligence (AI) are to be discussed over the next two days at the first global summit on AI safety.
Rishi Sunak is today welcoming international governments, leading AI companies, civil society groups and experts in research to Bletchley Park, the home of Britain’s World War II code-breaking facilities, as he attempts to position the UK as a leader in the field.
The prime minister will ‘use discussions at the summit as the basis for a global advisory board for AI regulation, modelled on the Intergovernmental Panel on Climate Change (IPCC)’, according to Sky News.
Meta, Google and OpenAI are among the companies that will be represented. US vice president Kamala Harris and European Commission president Ursula von der Leyen are among the international leaders who will discuss the rise of AI and how to tackle the technological revolution.
AI: a technological revolution
Ahead of the two-day event, British Prime Minister Rishi Sunak delivered a speech in which he said AI brings ‘the chance to solve problems we once thought beyond us – but it also brings new dangers and new fears.’ He warned: ‘Criminals could exploit AI for cyberattacks, fraud or even child sexual abuse… there is even the risk humanity could lose control of AI completely through the kind of AI sometimes referred to as super-intelligence.’
Highlighting the scale of the risk, he added: ‘Mitigating the risk of extinction by AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’
The speech came after the Department for Science, Innovation and Technology released a report which notes this ‘technological revolution’ will ‘fundamentally alter the way we live, work, and relate to one another’. AI presents opportunities to advance drug discovery, improve treatment and diagnosis of disease and better public services, the report notes, but these are accompanied by serious risks that ‘could threaten global stability and undermine our values’.
The report adds: ‘To seize the opportunities, we must understand and address the risks. AI poses risks in ways that do not respect national boundaries. It is important that governments, academia, businesses, and civil society work together to navigate these risks, which are complex and hard to predict, to mitigate the potential dangers and ensure AI benefits society.’
Why AI poses a risk to UHNWs
Ultra-high-net-worth individuals are ‘particularly vulnerable’ to the risks of AI because they ‘have reputations they wish to protect and wealth that will motivate a hostile adversary’, warns Andrew Wordsworth, co-founder of Raedas, an investigations firm specialising in litigation support.
Deepfakes and the threat to reputation
‘AI is already capable of producing high-quality fake audio and video, which could easily be weaponised against UHNWs – particularly those in the public eye,’ observes Matthew Lane, a former UK intelligence officer turned co-founder and director of XCyber, a cyber security firm that offers state-grade services to its clients. ‘Left unchallenged, such material could ultimately affect personal safety too.’
Wordsworth also emphasises the risks posed by these so-called ‘deepfakes’. ‘Where historically the worst that one could expect to happen was a clumsy Photoshop, now adversaries can fake complex and nearly perfect videos,’ he explains. ‘A few years ago, making and transposing the image of an individual required all the resources of Hollywood. Now it requires only a savvy teenager in Irkutsk or Manila.’
Join-the-dots attack on privacy
Sharing personal information on social media and other online platforms has long put UHNWs at risk of cyber attacks and even physical crime. But this is exacerbated by AI.
‘The power of AI technology to process information could lead to privacy impacts – it becomes much easier to aggregate disparate pieces of information to provide insights that would otherwise be difficult to ascertain,’ says Lane.
Fraud on the rise
Fraud has already become far more advanced as a result of developments in AI. Wordsworth explains: ‘We are rapidly moving into a world where voices and writing styles can be imitated with terrifying accuracy. We are already seeing what seem to be AI-generated emails in “director fraud” situations.
‘AI voices have been used to authorise the transfer of funds from bank accounts (sums upwards of £35 million in some cases). For many years, we have been able to rely on bad-quality fake emails from obvious fraudsters – and even then they were often successful. Now that begging email from your grandchild will be perfect, taking advantage of their active Twitter habit.’
He says: ‘UHNWs often have numerous assets spread across a number of countries. Some of these destinations might have weaker IT security infrastructure than the UK and other nations with sophisticated cyber-security measures.’
Concerns over security are not unfounded. The picture is worrying, even before AI is taken into account. Recent data has revealed a lack of confidence among family offices in their digital security protocols.
According to UBS’s Global Family Office Report, released at the end of May, fewer than half of family offices (44 per cent) report having cyber security controls in place. And of those that do, only 15 per cent say these measures are ‘highly advanced’. The same report revealed that 37 per cent of family offices had been the victims of cyberattacks.
In a survey of family offices in 2020, Chicago-based wealth manager Northern Trust revealed that an overwhelming 96 per cent of family offices had experienced at least one cyber attack.
As well as cybercrime and phishing, traditional scams are also on the rise. According to the Financial Crimes Enforcement Network (FinCEN), rising mail thefts in the US have resulted in a rise in cheque fraud; in 2022, the organisation revealed that suspicious activity reports (SARs) related to cheque fraud rose to 680,000, nearly double the 350,000 recorded in 2021.
Steps to take to mitigate the risks of AI
Identify the dangers
Mitigating the risks of AI begins with threat identification. All experts advise enlisting the support of a third-party expert; a number of security, intelligence and investigations advisers operate in this field.
Highlighting the services offered by XCyber, Lane says: ‘We help private clients conduct cyber health checks which cover security, reputation and privacy issues and make actionable recommendations to mitigate identified risks. We likewise assist companies with protecting their people and their systems from risk, using our specialist data and expertise to keep them safe and secure.’
At Octaga, Allison employs experts from across a number of fields, ‘from personal protection to surveillance operators, to the technical security arena, employing our own in-house engineers’. He adds: ‘Octaga consultants have authentic experience, providing the level of advice that is proportionate to threat and risk.’
At Raedas, which acts on behalf of leading law firms and corporations, as well as UHNWs, there is a focus on all elements of tackling increasingly prevalent black PR campaigns. ‘This covers identifying the individuals behind the campaign, their motivations for running it and who they are working with, as well as conducting the forensic work to determine whether emails are fake,’ says Wordsworth.
Reassess existing systems
Allison urges UHNWs to re-examine the structures used to manage their assets. ‘Having assets unprotected or secured with sub-standard systems, or companies that fail to have adequate protocols in place, should be considered and avoided,’ he warns.
There are a number of practical measures UHNWs can implement immediately to mitigate the risk of AI.
‘Use multi-factor authentication, close accounts that are no longer used, conduct regular privacy and security audits, use strong passwords,’ says Lane. ‘Against fake audio and video, UHNWs may wish to consider how they would verify or challenge the veracity of material. Is there a specific channel where a statement could be made that is trusted? If that’s an online account or website, one should pay particular attention to its security.’
Wordsworth explains that UHNW clients might want to consider returning to face-to-face meetings with wealth managers. He adds: ‘The vulnerability shared by all UHNW individuals is wealth. For wealth, the defence is now to move back to earlier solutions: face-to-face meetings and phone calls. Just because an email is written in the style of your private banker doesn’t mean that it is from your banker.’
He also emphasises the benefits of stepping back from social media: ‘For those with a higher profile, where reputation is crucial, be aware that deepfake video requires high-quality data to work from. We suspect the days of Instagram selfie videos are coming to an end. The more data you put out about yourself, the more you equip the hostile against you.’