The Finance Innovation Lab and The Financial Inclusion Centre have produced a report examining the potential impact of Artificial Intelligence (AI) in financial markets and services. AI, along with innovations in the broader tech and big data sectors, could bring significant benefits to the financial sector, but it also creates major risks. We argue for a precautionary approach to regulating AI in financial markets and services to ensure these risks are managed.
The report can be found here: FIL FIC AI in Financial Services November 2023 Final Report
Summary
How to address the risks presented by Artificial Intelligence (AI) is high on the agenda of governments around the world. The launch of ChatGPT-4, and the wave of calls from industry leaders for regulation to curb potentially catastrophic outcomes, have sparked global interest in understanding the use and risks of AI. These concerns range from existential threats to humanity and international safety to job displacement and the increasing digital automation of everyday life. Notably absent from this debate is the impact of AI on arguably the UK’s most influential industry: financial services.
The financial services industry plays an important role in a wide range of industries and services. It affects every citizen’s life and is a major contributor to the UK’s total economic output. But it also poses a major risk, as demonstrated by the catastrophic consequences for the wider economy in the aftermath of the Global Financial Crisis of 2007–09.
The government has set out its ambition for a pro-innovation approach to AI regulation, one that aligns with its objective to make the UK a global leader in fintech and enhance the competitiveness and growth of the financial sector. Financial services is an industry with data at its core, and as such is ripe for utilising AI to provide insights and analysis and to automate decision-making. While this could bring some benefits, there are significant risks, some known and others unknown, that need to be assessed by the regulators. These include risks to financial stability, consumer protection and the net-zero target – and they are too significant to ignore.
AI presents the next step in the use of data in financial services. It is already being applied by some financial firms, which are exploring wider adoption at significant pace. The government should focus its attention on how AI is being used today and how the associated risks may evolve. The reality is that the immediate risk from AI is not a robot apocalypse but a new financial crash, one with potentially catastrophic consequences. It is for this reason that we believe a precautionary regulatory approach to AI, one that can get ahead of innovation, is now required, along with a risk-based regulatory framework that could prevent these risks from materialising.
In addition, the UK should be making a much greater effort to include voices from civil society organisations and the public in the debate about the future of AI. In no sector is this more important than financial services, where the variety of people’s circumstances demands a multitude of solutions, many of which depend on specialist knowledge and lived experience. Without this kind of society-wide engagement, there is a risk that the development, introduction and monitoring of AI in financial services will not serve society. With technology as powerful as AI, and an industry as influential as financial services, ensuring that the public interest comes first should be central to the government’s approach to AI.