Responsible AI: making artificial intelligence more human

Having your credit limit slashed from €9,700 to €3,400 is a huge blow to anyone’s household budget. Terrifying as it sounds, this is also one of the earliest known examples of an algorithm discriminating based on someone’s background or where they shop. How? Kevin Johnson often shopped at locations ‘where the customer base was expected to have a poor credit repayment history’. Based on that financial profiling, the credit card company’s AI dubbed him a liability, even though he was a homeowner running a successful public relations firm.

This disturbing example of biased behavior made headlines back in 2009. A more recent example involves Amazon’s AI recruitment efforts. Ironically, while the global AI community was discussing AI’s future and how to manage it responsibly, news broke that Amazon had abandoned an AI tool that was supposed to shortlist candidates for its recruiters, based on an analysis of the CVs of hires from the previous 10 years. However, the tool was trained on data from a period in which men dominated the tech industry. As a result, ‘Amazon found its algorithm discriminated against female applicants’ and terminated the program, despite it having been developed to counter human bias.

Building and using AI responsibly, in business and in broader society, requires many considerations, according to our Global Managing Director Conversational AI, Laetitia Cailleteau. She touches upon unconscious bias, responsible AI tooling and AI governance.

Laetitia Cailleteau - Accenture Conversational AI global lead

Taming AI’s enormous potential

Even though 78 percent of business leaders expect AI to disrupt their industry in the coming 10 years, a striking 88 percent of them ‘do not have confidence in AI-based decisions and outputs’. Clearly, any conversation about AI inevitably arrives at ethics.

“One of the key challenges we see is that companies want to do the right thing, but they don’t have a clear reference framework for what ethical AI means in their use case or context, or guidance on how to implement it. This holds them back from implementing AI solutions,” Laetitia explains.


Organizations worry about potential unintended consequences and the impact they may have on their brand, customers or workforce. “The real-world application of AI has presented new or accelerated challenges to the ethical use of technology that we didn’t see 10 years ago. We are starting to see the emergence of solutions to tackle these challenges, including our own responsible AI offering and tools. Everyone is learning as they are doing it and in different contexts.”

Using unbiased data

Algorithms do what they’re taught; unfortunately, some are taught prejudice and harmful bias through old, skewed data. To build algorithms responsibly, we need to pay close attention to potential discrimination and unintended yet harmful consequences.

For Laetitia, ensuring that an AI algorithm and its underlying data are as unbiased and representative as possible is essential. “If you use AI for credit-scoring based upon historical data, then you must ensure that this data is not biased. As we know, historically, more men have applied for mortgages than women, so you must ensure this historical data does not result in bias against women. If you train your algorithm based upon data from an affluent city, you would need to ensure that you are not biased against people from less wealthy areas.”

“There are a number of things you must look at to ensure that you have a transparent and fair outcome, but it also depends on the context. For example, checking for unwanted bias against women is valid in a credit score context, but it is less relevant in marketing for clothes, where you would want the system to know that you are a woman to ensure the products fit you.”

To ensure that companies, governments and other organizations working with datasets and AI algorithms can limit bias, Laetitia believes that there are questions you can ask and techniques you can employ. “I think the easy example is the male-female one, as typically, people have data around your sex. In terms of the credit-scoring example, did you include this data in your training set? Did you try to balance it before training your algorithm? Say that, historically, there’s an 80-20 male-to-female split in mortgage applications. Do I ignore this, or try to balance it? You need to find techniques to make sure that there is no bias.”
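To make the balancing idea from the quote concrete, here is a minimal sketch of one common technique, sample reweighting, so that an underrepresented group contributes as much to training as the majority. This is purely illustrative (the function name and the 80-20 dataset are invented for this example, not taken from any Accenture tooling):

```python
from collections import Counter

def balanced_sample_weights(groups):
    """Assign each record a weight inversely proportional to the size of
    its group, so every group contributes equally in total.
    With an 80-20 male/female split, each female record counts four
    times as much as a male record during training."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    # weight = total / (n_groups * group_count): the same heuristic as
    # scikit-learn's class_weight='balanced'
    return [total / (n_groups * counts[g]) for g in groups]

# Hypothetical training set mirroring the 80-20 split from the article
groups = ["male"] * 80 + ["female"] * 20
weights = balanced_sample_weights(groups)

# Each group's total weight is now equal: 80 * 0.625 == 20 * 2.5 == 50
print(sum(w for g, w in zip(groups, weights) if g == "male"))    # 50.0
print(sum(w for g, w in zip(groups, weights) if g == "female"))  # 50.0
```

Reweighting keeps all the data, in contrast to undersampling the majority group; which option is appropriate depends on the use case and, as Laetitia notes, on the context.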

Beware of unconscious bias

Unconscious bias is a more difficult problem to tackle. Because societal norms have embedded bias into every one of us, it is becoming a common belief that we must be vigilant about not transferring those inherent human biases into AI systems. But that’s easier said than done. “There are millions of biases: sex, socioeconomic status, age. Bias is rooted in the brain’s fight-or-flight instinct, which simplifies information. As we grow up, many factors—such as education and upbringing—change us, but the brain keeps simplifying. If you have a bad experience with a certain type of person, you may be biased against them in the future.”

You must be aware of your bias. You must try to counter it

“You must be aware of your bias. And there are many exercises we can do to find out our biases, to uncover the biases these simplifications create in our brain. Once you are aware of them, you have to try to counter them.”

Building responsible AI is about building with transparency, in good conscience and care: “There is a quote that says: ‘The best way to shape your future is to create it.’ I think if you aren't creating your future, you will miss out on the tremendous potential of this technology. We still need to do it with a lot of due diligence.”


How to implement and govern responsible AI?

But what does ethics actually mean? And what are its values? The problem is that there’s “no universal definition of ethics across different industries”.

“It is important for companies to be clear on their ethical values and principles for their business, and to create the right governance to manage day-to-day AI projects, making sure they are aligned with those values in an auditable and traceable way.”

The best way to shape your future is to create it.

However, it isn’t just about building responsible AI with the company’s values in mind, but about building for a consumer base that expects ethical behavior across the board. We suggest adopting the fire warden model to govern AI: an agile approach to acting quickly on unwanted AI behavior. The model rests on three guiding principles: select ‘fire wardens’ who can sound the alarm when needed; embed regulatory expertise within your company to look at potential consequences from all angles; and welcome false alarms, as they help you develop an instinct for tackling issues at the root.


We already see AI frontrunners creating new roles to better guide the development of their AI. Also, 47 percent of business leaders believe that AI should be made more explainable and more human. By studying human-machine interaction and collaboration, we can learn how to complement each other in the best possible way.

As Laetitia puts it, “I think we must be transparent about our internal kitchen when it comes to AI, as the end consumer will probably ask companies, or won’t buy from brands unless they are transparent going forward… So I think there is no choice but to be part of it, and it will even become a competitive advantage going forward.”

Set up a responsible AI governance model now. Questions may well remain after reading this article. You know where to find us!

This article is partly based on an interview by World Summit AI with Laetitia Cailleteau. You can find the full interview on World Summit AI's blog.

Author: Accenture the Netherlands