AI is creeping into almost every industry, and wealth management is no exception. But what can it really do when it comes to advising UHNW clients? Some firms are already experimenting, but its practical impact is still up for debate. Spear’s spoke to three experienced wealth advisers, putting several large language models (including ChatGPT 5.3 and Claude Sonnet 4.6) through a client scenario in which a 35-year-old tech founder planning a £100 million exit sought a comprehensive financial plan. We asked the advisers to reflect on the LLMs’ efforts.
What impressed you the most?
Francesco Grosoli, CEO, CMB Monaco
The answers were extremely comprehensive. The LLMs consider the different angles and layers of a single issue and go into considerable detail on each of them. If you prompt them about estate planning, for example, they will address topics such as inheritance tax, trusts, discretionary trusts, family investment companies, life insurance and more. For each of these areas they provide a fairly accurate picture of the available strategies, along with suggestions and practical ideas, such as the importance of addressing these issues early in life. And all of that arrives in a matter of seconds, which is pretty mind-blowing.
John Jopp, head of front office, LGT Wealth Management
The speed and clarity with which it set out the key considerations. The high-level guidance and the rationale behind it covered the main issues I would expect to see addressed. When challenged on specific points, such as investment return assumptions, it also gave some thoughtful responses. It was particularly useful in suggesting areas to explore in more depth, although, as ever, discipline and experience are needed to avoid disappearing too far down particular rabbit holes!
Nathan Valbonesi, associate director, Weatherbys Bank
What impressed me most was the output itself. From Claude I got a document formatted to a professional level (24 pages at first glance). It structured everything clearly: here’s your problem, here are the areas to focus on. The prioritised action plan at the end was impressive. Normally we do four bullet points, but this had charts, tables, and a level of detail that made me think: ‘Blimey, I could learn from this.’
Where did the LLMs fall short?
Francesco Grosoli
The limitations are clear: the LLMs and the prompter can make mistakes. When you prompt wrongly or incompletely, the answers can be misleading. The risks lie in what data is used and what prompt you give. If you were to go it alone and take the advice you are given by an LLM without consulting a human as well, you would certainly lose out. There’s no soul, no feeling, no emotion – what you get is a very detailed and cold approach. Sometimes, humans use the old principle of gut feeling. Clearly, you cannot build a strategy on gut feelings, but that is where human intervention makes the difference.
John Jopp
The initial responses were reasonable from a high-level perspective, but some of the figures did require challenge, which often led to meaningful revisions. For example, the growth assumptions for an investment portfolio did not take account of income withdrawals, leading to a fairly large difference in 10, 20 and 30-year return projections. But its bigger limitation isn’t technical. Advice to UHNW clients is not just about arriving at a mathematically correct answer. It is about judgement, trust, family context and understanding what really matters to that person over the long term.
Nathan Valbonesi
There are clear limitations. Some details just aren’t captured: there was no mention of NS&I, offshore bonds, loan trusts, annuities or gilts and linkers. The answers can feel like a shotgun approach, giving a bit of everything without depth, which can increase the burden on clients and advisers through information overload. AI doesn’t consider context, subtext or body language, and it doesn’t bring the human into the process, so it can’t explain why things matter or why you would take a certain approach. It also relies heavily on the quality of the prompt, which certain users may not get right.
Where do you see practical opportunities?
Francesco Grosoli
Clearly, clients now have the possibility to get fast, detailed, accurate information about their wealth. Both clients and advisers will have access to the same data and at the same speed. This means the adviser needs to step up and adopt AI, but it also frees up time to spend with clients, analysing, advising and understanding them. If you embrace it, AI becomes an enhancement, almost a superpower, helping advisers absorb information quickly and provide more holistic guidance.
John Jopp
It can provide a strong structure and a useful checklist of issues for consideration, especially when situations become more complex – for example, if a client is considering relocating abroad. It can also act as a form of check and challenge, by helping test advice that has already been given. That should be valuable for both clients and advisers, ultimately building confidence that the advice received is properly thought through. The speed at which it can do this should also create significant time savings, allowing clients and advisers to get to the key issues much faster and focus on the conversations that matter most.
Nathan Valbonesi
What I find most practical is how it prepares clients to engage meaningfully. They leave with the right language and can ask the right questions in their direct meetings with a wealth manager. It’s a good way to calibrate who is strong in the industry. For advisers, it’s huge, as it allows us to scale, focus on what’s important, filter out the noise and work out priorities. We’re not accountants: AI helps us concentrate on what truly matters.
What risks do you see?
Francesco Grosoli
If the end client is left to figure things out on their own, I’m not sure we are ready to operate without human interaction – and I’m not sure we ever will be. AI takes a purely technical approach. It cannot take into consideration everything happening in the world, such as geopolitical crises or other complex real-life contexts. That is why human judgement remains essential, to interpret the broader picture and guide decisions safely.
John Jopp
False confidence. LLMs are often very good at presenting answers in a fluent and persuasive way, even when some of the underlying assumptions are weak or incomplete. In a wealth management context, that can be particularly dangerous. There is also the risk that it fails to capture the emotional and personal drivers that shape decision-making for clients. For some, tax may be the overriding concern, for others, it may be philanthropy, legacy, sustainability or family considerations. If those motivations are not properly understood, the advice may look sensible on paper without being right for that person.
Nathan Valbonesi
The main risk is that clients or advisers might treat AI as definitive guidance rather than a tool. It can’t understand the broader context, individual goals or any subtleties in a client’s situation, and it doesn’t include the human relationship element. Limited disclaimers mean clients might act on outputs without checking with an adviser. There’s a danger of misjudging priorities, missing what really matters, or misinterpreting outputs, especially if users rely on AI without proper human oversight.
This article first appeared in Spear’s Magazine Issue 99.