Roundtable: ethical AI in finance

Photo by Alex Knight on Unsplash

AI is in a rapid phase of development, and as it is increasingly used in finance we need to look beyond its technical capabilities to what it means for financial health, economic inequality and citizen power. It’s early days for this controversial range of technologies, but the Finance Innovation Lab’s community already has lots to say:

What is AI?

Artificial Intelligence (AI) encompasses a range of technologies that enable machines to think and act in pursuit of a specific goal, in ways that can be thought of as similar to, or even exceeding, human capabilities. AI lies behind the features of many smart apps we use, from Netflix’s video recommendation engine to Uber’s ability to estimate drivers’ pick-up times.

What does AI look like in finance?

AI is increasingly being adopted in financial business activities, from fraud detection and risk management to trading, lending and investment advice. Like many technologies, it could increase the speed and reduce the cost of financial services. Some celebrate AI for its potential to extend the provision of services to a wider range of people. Robo-advice, for example, promises to fill the significant advice gap in the UK, according to many proponents.

What does AI mean for financial health, economic inequality and citizen power?

Some see AI as an opportunity to improve people’s financial health and put people in greater control of their finances:

AI could create the opportunity for everyone to have an expert personal finance manager. It could help us understand our financial habits and find the best products and services – all the while saving us time and effort. At Aelm, we’re developing a product just like this. We want to help people improve their financial wellbeing using an AI engine that looks at your spending, accounts and life goals to figure out the best way to manage your finances.

Maysam Rizvi, Aelm and Financial Health Fellowship Alum

We should not look at the ethics of AI use in isolation but in the context of ethics in the financial services industry as a whole. There is plenty of evidence to suggest that many firms prioritise profit over the best outcomes for their customers.

In a recent paper, I illustrated how firms can exploit the behavioural traits inherent in us all to retain customers and charge them more. Tactics include offering multiple products with bewildering features, setting complex and hidden prices, and charging existing customers more – much more – than new customers. As a result, fearful and confused, consumers stick to known brands and are reluctant to shop around and switch.

In that paper, I suggested that AI is ideally suited to cutting through the complexity, automating shopping around and switching to deliver bespoke, regularly updated best deals and drive competition. Certainly there are issues to address, but the potential to put market power back into consumers’ hands has to be exciting!

Jonquil Lowe, The Open University

However, others are concerned that AI may further entrench existing inequalities:

Digital interfaces and apps might come in many colours and designs, but if financial institutions begin to converge on a common set of options, the sense of wide choice may be an illusion. Without an intermediary connection to company management via flexible or empathetic frontline customer service staff, users may find themselves feeling even more like passive acceptors of services from distant, unfathomable financial gods.

Brett Scott, Senior Fellow, Finance Innovation Lab

As an organisation which works on the right to privacy, we are primarily concerned about current and future applications of AI that are designed for the following purposes: (1) to identify and track individuals; (2) to predict or evaluate individuals or groups and their behaviour; (3) to automatically make or feed into consequential decisions about people or their environment; and (4) to generate, collect and share data. 

[For example,] AI systems can contribute to the perpetuation of existing injustices and inequalities in society through inbuilt bias and discrimination. Machine learning can unintentionally, indirectly, and often unknowingly recreate discrimination from past data. Since profiling using machine learning can create uncannily personal insights, there is a risk of it being used against those who are already marginalised.  

Privacy International, Submission of Evidence to the House of Lords Select Committee on Artificial Intelligence

A way forward

At the Finance Innovation Lab, we want to make sure the huge potential of AI is used to support people to create a better financial system – one that serves people and planet.

On 15 February, we’ll bring together experts from banking, fintech, civil society and policymaking to share knowledge and generate ideas about how to ensure the use of AI in finance benefits customers and citizens.

Keep an eye on our Twitter for live updates and watch out for our follow-up report after the event! Let us know your views too @thefinancelab.

Want to find out more about the ethical use of AI in finance? We recommend these: