European fintech executives say financial services firms are struggling to implement artificial intelligence (AI) effectively, even as evidence mounts that the technology could boost productivity and reduce costs.
Concerns about job losses, regulatory hurdles, data privacy laws and institutional inertia are key reasons bankers hesitate to fully adopt AI systems such as ChatGPT. “The big banks will definitely not adopt [the technology] as quickly as any of the fintech,” said Tom Blomfield, co-founder of Monzo and partner at Y Combinator. Even so, he said, generative AI can “make banks more efficient and able to provide the same products at a cheaper cost.”
A Capgemini study found that only 6% of retail banks are ready to implement AI at scale. Yet McKinsey estimates AI could add up to $340 billion annually to the global banking sector, around 4.7% of total industry revenues.
The technology’s ability to quickly analyze vast amounts of data has the potential to cut costs significantly. However, fears of job losses persist. “People don’t understand that it’s there as a productivity tool,” said Nasir Zubairi, CEO of the Luxembourg House of Financial Technology. He emphasized that traditional banks are “fundamentally analogue by design,” making digital transformation challenging.
Speaking at the Financial Times-owned TNW tech conference, Zubairi cited the example of an institution that rejected a customized AI model for money-laundering checks, even though it could have saved up to €450,000 a year in salaries. “People don’t like firing people,” he added, noting that managers may feel their power is threatened if they have to cut jobs.
The Bank for International Settlements has urged central banks to “raise their game” in response to AI. While the technology can boost productivity, it also carries risks, such as producing inaccurate information and being vulnerable to hacking.
A significant issue with large language models, the technology underpinning most generative AI products, is their tendency to “hallucinate” and present inaccuracies as fact. They can also reproduce data from their training sets, raising concerns that sensitive information could be exposed.
“There’s not necessarily a rejection of [AI], but there is hesitancy,” said Wincie Wong, head of digital at NatWest, advocating for a thorough assessment of AI’s risks and ethics before deployment. She stressed the importance of safeguarding customer data.
AI tools have significantly disrupted customer service, with bots capable of human-like conversations. Digital banks have used machine learning for over a decade to manage online inquiries, often directing clients to live agents. However, LLM-powered bots can handle a wider range of queries and make decisions, reducing the need for human intervention.
Blomfield believes AI will eliminate most customer service jobs within five years. Many banks and fintechs, including Klarna and NatWest, already use AI chatbots. NatWest’s Wong noted that the bank’s AI assistant, Cora, handled more than 11 million chats last year, over half of them without any human intervention. Swedish fintech Klarna has reported that its AI assistant does the work of 700 customer service agents, equivalent to an estimated $40 million in annual savings.
Wong emphasized the importance of training AI models to understand nuances, such as the emotional undertones in a change of address request. “Understanding the psychology behind it was really important,” she said.
Banks must also navigate strict compliance and regulatory requirements when deploying AI. In 2022, a Dutch court sided with neobank Bunq in a dispute with the Dutch central bank, allowing it to use AI for money-laundering checks. Restrictions on German fintech N26 were recently lifted after it improved its screening measures, reducing instances of criminal activity by 90%.
“If we don’t embrace AI in the industry, then in a few years, we will no longer be here,” said Carina Kozole, chief risk officer at N26. “We need to show the advantages and how we can grow compliant if we’re using AI.”