Sustainable hybrid financial services models to the benefit of customers

People say that Artificial Intelligence (AI) is the foundation of future business. But does it fit financial services? Does it support financial services’ business models of the future? And how can AI create sustainable hybrid services that meet emerging customer demands, as well as regulatory and ethical requirements in today’s fast-changing competitive landscape?

Earlier this year, our Financial Services group in the Netherlands organized a roundtable on the topic 'AI in financial planning and investment advice'. This article is the first in a series of three in which we discuss the outcomes of the roundtable, centered on the topic 'AI-based financial services to the benefit of customers'.

It is no secret that AI enables financial service providers to elevate customer experiences and offer innovative ways of serving their customers. To stay competitive, traditional financial services providers have to revisit and transform their existing service models. Our study showed that 61 percent of customers had changed service providers because of poor service. 

By using AI to leverage customer data, financial services providers can offer higher-quality services that better meet their customers’ needs and prevent switching behavior. Imagine a large bank offering a virtual assistant that provides real-time customer support or investment advice based on machine learning algorithms and big data. Or think of an insurer using a chatbot to settle claims in a matter of seconds.

From a quality perspective, one would expect customers to want AI-based services and AI to be a driver of customer loyalty. But is this true? In this article, we highlight the roundtable’s discussions and consensus on the role of AI in financial services, and on how regulations could evolve to guarantee (ethical) compliance and duty of care.

Augmenting human capabilities using artificial intelligence

In today’s 24/7 economy, customers are increasingly demanding personalized services in the palm of their hand. Surfing on the wave of digital transformation, AI enables smart customer engagement, where financial services providers can meet these new demands by offering continuously available, easily accessible, low-cost, and omnichannel financial services. 

Obtaining customers’ trust and loyalty requires more than just a digital and automated approach. Interestingly, customers seem to crave a human touch and perceive higher added value in portfolios where financial services providers offer a combination of digital and traditional services. Simply put, full automation does not lead to more customer loyalty and trust in financial services providers.

Moreover, customers understand the value of data and are willing to share it to receive benefits in return. For instance, 38 percent of all customers (compared to 46 percent of Generation Y customers) would use digital platforms to receive investment advice, while 78 percent of customers are open to receiving automated support for investment advice.

Additionally, 73 percent of customers demand more personalized investment advice in return for sharing data, while 59 percent of customers do not mind which channel they use to interact with their provider, as long as the experience is easy, seamless, and effective. Finally, the key drivers of customer trust were found to be personal provider-client relationships, acting in the best interest of the customer, and data protection.

The roundtable’s participants agreed that hybrid service models are currently setting the standard from a customer service portfolio perspective. These paradoxical customer needs force financial services providers to ‘combine the best of both worlds’. In other words, AI mainly augments human capabilities, while a human touch completes these automated financial advisory services.

When the roundtable discussed what hybrid service models are according to contemporary standards, it became clear that there is a fundamental perceptual and emotional difference between delivering advice through an automated channel and having a human advisor give advice to customers face to face.

From a service delivery perspective, the roundtable concluded that hybrid service models produce the highest-quality advice based on existing standards and customer demands. Advice of the highest possible quality ideally emerges when algorithms are in the lead and data is leveraged efficiently. Topping off automated advice with a human touch completes the picture, which, according to the roundtable participants, remains key to meeting today’s perceptual and emotional customer needs.

The need for responsible AI

However, several questions emerged on the subject matter and the way forward. When AI takes over the ‘creation’ of advice, how do we monitor the ethical compliance of the algorithms in action? Do we create equivalents of ‘banking oaths’ for robots, for example by basing algorithms on regulator-approved models and design principles? Or does it remain the responsibility of humans to analyze, explain, or even reproduce the (way of producing the) output of the algorithm? If the latter, would there still be an advantage to using AI?

Current legal frameworks require service providers to ensure transparency (explainability) and to act in the customers’ best interest (responsibility). How does this mechanism work in hybrid service models?

Imagine a married couple applying for a mortgage. Algorithms make an automated decision based on data, such as transaction history, and detect that the husband has conducted multiple transactions at shady casinos. Suggesting a gambling addiction, these transactions could raise a red flag in terms of credit score, resulting in negative mortgage advice. Is this advice produced responsibly? Based on the consulted data: yes.

Transaction data and credit card payments are acceptable sources and good indicators for determining whether someone is eligible for a mortgage. But how are we going to explain the outcome to the couple? Moreover, if a human advisor investigated the husband’s social media accounts to determine his lifestyle and credit score, would we consider the (way of producing the) advice conscientious and responsible? Unlikely.
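The kind of automated red flag described above can be thought of as a simple decision rule over transaction history. The following is a minimal, hypothetical sketch of such a rule; the category names, threshold, and function are illustrative assumptions, not a real credit-scoring model.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    merchant_category: str  # e.g. "groceries", "casino" (assumed labels)
    amount: float

# Assumed, illustrative parameters of the rule
GAMBLING_CATEGORIES = {"casino", "betting"}
GAMBLING_FLAG_THRESHOLD = 3  # flag after this many gambling transactions

def gambling_red_flag(transactions: list[Transaction]) -> bool:
    """Return True when the history suggests frequent gambling spending."""
    hits = sum(t.merchant_category in GAMBLING_CATEGORIES for t in transactions)
    return hits >= GAMBLING_FLAG_THRESHOLD
```

The rule itself is trivially explainable, which illustrates the roundtable's point: the difficulty is not stating what the algorithm did, but justifying to the couple why that data and that threshold were an acceptable basis for the advice.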

In other words, there is a hazard in innovating on the edge of what is allowed, for both traditional and AI-based financial services. Advice produced in the customer’s best interest is not by default advice that is responsible and explainable. Questions remain about what is permissible when producing advice using AI, and where the differences lie between the acceptability of AI-based and human-based advice.


The roundtable did not find a conclusive answer. One thought stream stated that advice is of the highest quality (meaning, in the customer’s best interest) when the algorithms are tested on output in regulatory sandboxes. In this case, explainability focuses mainly on input data rather than on algorithms and decision trees.

The opposing thought stream, however, argued that transparency and explainability of both the algorithms and the data used should be the basis for sustainable automated financial advice. Some participants further stated that this transparency must be a priority throughout an organization’s hierarchy: even board members should be able to explain the algorithms and automated decisions offered to customers.

Moving towards a sustainable future

As AI continues to develop, the possibilities for transforming financial services provision are increasing. To this end, the roundtable agreed that a combination of the two thought streams described above is the road to take for a sustainable future. First, it is of great importance that legal frameworks are up to date and that the foundation laid ensures responsibility, explainability, stability, and sustainability. Second, as data-centricity becomes the core competency of new market players, the quality of financial services will increase, and future competition will be fueled by the output of services.

In this new standard, how do we ensure human-centered, fair, and honest financial services provision? Moreover, how can we maintain financial services’ transparency while holding the industry accountable for its actions? As the mortgage example shows, there are still grey areas with no conclusive answers.

This is where the regulators come in. Existing regulations do not account for norms in data collection and its usage in decision-making, for example by means of algorithms. However, the standard of financial services is changing as we speak, mainly because of AI-based services. Some members of the roundtable stated that regulation is following these developments and will therefore comply with the new standard, as customers have the right to the highest quality products and services.

The other members opposed by stating that there is a sense of urgency for ensuring a sustainable future for customers, service providers, and regulators to avoid unpaved roads. From this perspective, co-creation between regulators and financial services providers on AI-proof regulatory frameworks is essential in today’s ecosystem.


Together, we can make a difference! 

During the roundtable, we discussed the possible outcomes of increased quality in AI-based financial services and the corresponding ethical challenges. We found consensus on the need to innovate and offer AI-based financial services to improve the quality of services to the benefit of customers.

Moreover, hybrid service models are currently seen as the standard in financial services provision, and service providers are innovating around augmenting human capabilities with AI by combining the best of both worlds.

We believe that the main challenges of AI-based financial services lie in areas other than technology. Where hybrid models are now the general standard, customers demand high-quality advice with a human-centric focus.


Consequently, the hybrid model paradoxically affects the business case for financial services: providers need to invest in AI to offer high-quality services while simultaneously focusing on human advisory capabilities. Both financial services providers and regulators increasingly have to deal with this paradoxical development.

Innovative approaches need to be considered to meet these needs, while creating a sustainable foundation that ensures high-quality outputs in the customers’ best interest. Adopting a co-creation approach, financial services providers and regulators together possess all the ingredients to innovate sustainably and transform the financial services sector for a better future!

Authors: Axel Haenen, Julia Jessen