
The FCA is seeking views from firms about how its new live AI testing service can help them to deploy safe and responsible AI.
The live testing service would be a new component of the FCA’s AI Lab, which has been supporting firms with the development and deployment of AI. It would help to fill a testing gap that is slowing firms’ adoption of AI.
The live testing service would allow firms to collaborate with the FCA while they check that their new AI tools are ready to be used. It would also provide the FCA with intelligence to better understand how AI may impact UK financial markets.
The live testing service would provide regulatory support to firms that are ready to deploy consumer- or market-facing AI models. The proposed service would run for 12 to 18 months, with plans to launch in September 2025.
Jessica Rusu, the FCA’s chief data, intelligence and information officer, said: "Under our new strategy, we’ve committed to being increasingly tech positive to support growth. We want financial firms and their customers to benefit from AI, so we’re providing a safe space to test how they plan to use it."
Speaking today at the Innovate Finance Global Summit, Rusu commented: "The adoption of safe and responsible AI by the financial services industry plays a key role in supporting growth, which is why we developed the AI Lab to bring together our innovation services.
"On the policy front – we believe our existing frameworks like the SMCR and Consumer Duty give us enough regulatory bite that we don’t need to write new rules for AI.
"And while we know from our joint FCA-Bank of England AI survey that 75% of firms have already adopted some form of AI, most use cases are for internal applications, rather than things that could directly benefit consumers and markets.
"FCA AI Live Testing enables generative AI model testing in partnership between firms and supervisors. The aim is to develop a shared understanding and explore evaluation methods that will facilitate the responsible deployment of AI in UK financial markets, including consumer-facing applications.
"Through this testing, firms will be able to build confidence in the performance of the AI they are developing while receiving regulatory support and comfort.
"Our goal is to give firms the confidence to invest in AI in a way that drives growth and delivers positive outcomes for consumers and markets while at the same time offering us insights into what is needed to design and deploy responsible AI."