The financial advice sector is awash with discussions about the use of artificial intelligence (AI). It’s a hot topic, with OpenAI, Google and Elon Musk all constantly making headlines – and not just for AI-related reasons.
But beyond the hype, AI has become an essential tool for advisers, platforms, investment managers and providers. The Lang Cat’s recent State of the Advice Nation (SOTAN) report revealed that most professionals no longer see AI as a threat but are embracing it more than ever.
We’ll continue to track this trend, but it’s fair to say usage numbers are only going in one direction.
Regulatory scrutiny
The FCA and Information Commissioner’s Office have announced they are hosting a roundtable on AI adoption and innovation. This highlights not only the growing interest and investment in AI technologies among advice firms, but also the intention to bring stricter regulation to this emerging part of the market.
According to a 2024 joint report by the Bank of England and the FCA, 75% of firms are already using AI, with another 10% planning to adopt it within the next three years. Our own research shows that just under half of respondents will be using AI by the end of this year, a significant increase from last year.
So, while there’s some difference in terms of numbers, the trend is evident. And the growth of AI inevitably points to the need for greater scrutiny.
AI adoption also comes with its challenges. Our research indicates a significant gap between the confidence of UK financial services business leaders in their readiness for AI and their actual plans for effective implementation. When it comes to everyday use, firms must ensure that AI technologies are used ethically and responsibly.
Transparency and accountability are crucial to maintaining trust, and advice firms need to be able to explain their use of AI to clients who may be sceptical. Striking a balance between leveraging AI for efficiency and maintaining the personal touch that clients expect is tricky but essential.
The potential risks
The late Professor Stephen Hawking once said: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
He was talking in a more (hopefully) theoretical way, but the sentiment remains when applied to financial services: unless we consider the potential risks and how to mitigate them, the industry could come unstuck.
While AI offers numerous benefits, it also presents several risks that must be carefully managed. One of the primary concerns is the lack of transparency in AI systems, particularly in deep learning models that can be complex and difficult to interpret.
This opacity can obscure the decision-making process for advisers and clients, potentially leading to distrust and resistance to adopting the tech in future.
Bias and discrimination are also significant risks. AI systems can inadvertently perpetuate or amplify bias as a result of skewed training data or algorithmic design. To minimise this and ensure fairness, it is crucial to use only specific, tightly controlled datasets relevant to the task and industry at hand.
As AI technologies are collecting and analysing large amounts of personal data, privacy and data security risks also need mitigating. Hackers and malicious actors can easily harness the power of AI to develop advanced cyberattacks, bypass security measures and exploit system vulnerabilities.
So, we must all push for strict data protection regulations and safe data handling practices, and hold the tech firms we are partnering with to account.
While some of this will be covered under GDPR and similar rules, I can see even stricter clauses being added soon.
All this comes back to the dilemmas that may arise when attempting to instil moral and ethical values in AI systems, especially in a decision-making context. AI technology developers must prioritise the ethical implications to avoid negative societal impacts.
What next?
The adoption of AI in advice is clearly accelerating. This is driven by the need for greater efficiency, improved client experiences and, probably most importantly, competitive advantage. The benefits are significant, but we need to be alive to the challenges around privacy, bias, client perception and more, plus adopt an overarching approach of ‘good ethics’.
The pace of change, and the need to do your homework on all this emerging tech, is why we run our Catwalk event: it gives advice professionals, providers and tech firms a forum to find a way through together and to ask some of these tough questions along the way.
I’ve said it before, and I hope others would agree: ultimately, it is the responsibility of all of us to ensure that AI is developed and used in a way that is fair, transparent, accountable and respectful of human values.
Ben Hammond is managing director, consulting and insight, at the lang cat