Artificial intelligence will empower mankind, enabling innovative technologies and solutions we can only dream of today. But that power must be balanced by responsibility, transparency, and fairness. We now have a unique opportunity to determine what the human side of the equation should look like.

The development of AI is accelerating. In just a few short years, breakthroughs that were decades in the making have come and gone, vanishing in the rearview mirror of a technology that seems destined to determine the course of the 21st century. Blazing past major milestones has become almost routine, and we’ve still only scratched the surface. We have no idea what the full extent of artificial intelligence will look like. Only by experimenting will we learn the limits of its potential. But we must also take the time to teach.

For all its sophistication and promise, AI is still just a program: a very advanced set of algorithms, trained by humans to handle and make sense of enormous datasets – but a program nonetheless.

It doesn’t have a conscience. It won’t spontaneously generate a code of ethics or adopt human values. No, for artificial intelligence to know right from wrong, it needs to be taught. And it’s extremely important that we do so proactively, especially considering the vital role AI will play in supporting future economic and social growth.

Creating artificial intelligence everybody can trust

Trust gives human interactions meaning. It helps us separate truth from fiction and distinguish good intentions from bad. Where do you go when you need solid advice? You go to somebody you can trust.

Now, this is by no means a trivial matter. As businesses continue to expand their use of artificial intelligence, consumers will increasingly interact with digital agents. They will need to be able to put their trust in these AI systems when they apply for health insurance, student loans or a mortgage. But establishing that trust is easier said than done. There are significant challenges that your business will have to address on the way to creating trustworthy, responsible AI.

1. Eliminating unconscious bias in data

Artificial intelligence is great at handling massive datasets. But recognizing bias in the data? Not so much. When researchers at the University of Virginia tested new image recognition software, they noticed that it identified people pictured in kitchens as women – even when they were male. Worried that they’d contaminated the algorithm with their own unconscious bias, they decided to investigate their datasets first. The results were illuminating: the data already displayed a predictable gender bias all on its own, and the process of training the software only amplified it further.

This clearly demonstrates that hidden biases can sow the seeds of systematic discrimination, leading autonomous systems to make decisions that unfairly disadvantage certain groups. The only way to combat this is to identify and eliminate unconscious bias in your data before your AI interacts with the public.

We've developed a tool to help pinpoint and avoid sources of bias in datasets, effectively making it easier to raise your AI well. For the same reasons, we test our solutions extensively and change our own datasets and algorithms wherever necessary.
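To make that concrete, here’s a minimal sketch in Python of what a dataset bias audit can look like – an illustration with invented annotation data, not our actual tool. It measures how strongly a scene label co-occurs with a protected attribute and flags labels where one group dominates.

```python
# A minimal sketch of a dataset bias audit -- not a production tool.
# It measures how often a scene label co-occurs with a protected
# attribute and flags labels where one group dominates.
import pandas as pd

# Hypothetical annotation records; in practice, load your labeled data.
df = pd.DataFrame({
    "scene":  ["kitchen", "kitchen", "kitchen", "office", "office", "kitchen"],
    "gender": ["female", "female", "male", "male", "female", "female"],
})

# Co-occurrence table: rows are scene labels, columns the protected attribute.
counts = pd.crosstab(df["scene"], df["gender"])
rates = counts.div(counts.sum(axis=1), axis=0)

# Flag any scene label where one group exceeds a chosen tolerance.
TOLERANCE = 0.65  # an arbitrary threshold for this sketch
skewed = rates[rates.max(axis=1) > TOLERANCE]
print(skewed)  # "kitchen" is 75% female in this toy data -> flagged
```

Even a check this simple would have surfaced the kitchen skew before training began; in practice, you’d sweep every label against every protected attribute and set thresholds suited to your domain.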

Image: Raising responsible AI - Robot reading code - by Accenture

2. Delivering transparency by design

Another inherently tricky aspect of artificial intelligence is the challenge of ensuring transparency. Compared to AI, the algorithms of the past were relatively simple. In most cases, researchers and data scientists were still able to determine how their models produced the answers they came up with.

With the advent of deep learning and neural networks, however, taking a peek under the hood has gotten much more difficult. This lack of transparency can have far-reaching consequences of its own: in the US, black-box risk assessment tools are being used in the justice system to make sentencing recommendations. In more than one case, this has led to defendants receiving longer sentences based on the outputs of algorithms they were not allowed to review.

Naturally, this doesn’t play well with the public. As consumers and citizens, we want businesses that use AI to deliver products, services, and information to provide us with an appropriate level of transparency.
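So what does lifting the hood look like in practice? One common starting point is permutation importance: shuffle one input feature at a time and see how much the model’s accuracy suffers. The sketch below uses scikit-learn on invented loan-application features – illustrative only, not an endorsement of any particular risk model.

```python
# A minimal sketch of permutation importance: shuffle one feature at a
# time and measure how much the model's accuracy drops. The features
# below are invented stand-ins for a hypothetical loan-approval model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 3))  # columns: income, debt_ratio, age (synthetic)
# The synthetic "approval" outcome depends on income and debt_ratio only.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large accuracy drop when a feature is shuffled means the model
# leans heavily on it -- a first, rough answer to "why this decision?".
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in zip(["income", "debt_ratio", "age"],
                       result.importances_mean):
    print(f"{name}: {score:+.3f}")
```

Techniques like this won’t make a neural network fully transparent, but they give you – and your customers – a defensible first answer to the question of what drove a decision.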

Image: Transparency in raising your AI responsibly - by Accenture

3. Clarifying responsibility for autonomous decisions

Artificial intelligence also muddies the waters of corporate responsibility and legal liability. When an algorithm makes a decision based on a certain set of inputs, who is responsible for the outcome? Take self-driving cars, for example. Autonomous vehicles need to react quickly and decisively in complex situations, balancing the safety of the vehicle’s occupants against the safety of other drivers and bystanders. But if one of those decisions leads to an accident, who should be held responsible?

Last year, Audi sent a strong signal to consumers by assuming full responsibility for any accidents that occur while their autopilot system is in use. This not only proves that Audi believes in their AI-based system but also demonstrates their willingness to put their money where their mouth is – a double shot of trust.

4. Defining ethics for artificial intelligence

Unlike humans, artificial intelligence doesn’t have the gift of intuition. When faced with a situation it hasn’t encountered before or doesn’t have the know-how or expertise to handle, AI can’t fall back on common sense, street smarts or anything quite so organic. It simply follows the rules we’ve defined for it. That means it’s up to us to instill our values into the smart systems we surround ourselves with. If you want your AI to follow a code of ethics, you’ll have to provide it yourself.


Not harming humans is an easy one, of course, although Asimov’s three laws are not without loopholes. That said, what other ethical principles should an artificial intelligence in your employ follow? How will you translate the values that drive your business into the values that guide your AI? And how will you make sure it follows them effectively? These are not easy questions to answer, but they should certainly be on your mind as you design and develop your AI solutions.
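One way to begin is to make those values executable. Here’s a minimal sketch – with entirely hypothetical rules – of how a code of ethics can be encoded as guardrails that vet an AI system’s decisions before anyone acts on them. A real policy set would be written with your legal and ethics teams and reviewed regularly.

```python
# A minimal sketch of executable guardrails. Every rule here is
# hypothetical; real policies would come from your legal and ethics teams.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    action: str
    confidence: float
    affects_human: bool

def first_violation(decision: Decision) -> Optional[str]:
    """Return the first violated rule, or None if the decision complies."""
    if decision.affects_human and decision.confidence < 0.90:
        return "decisions affecting people need high confidence or human review"
    if decision.action == "deny_application" and decision.confidence < 0.95:
        return "adverse decisions require an even higher evidence bar"
    return None

# Vet a model output before acting on it.
decision = Decision(action="deny_application", confidence=0.80,
                    affects_human=True)
violation = first_violation(decision)
if violation:
    print(f"Escalating to a human reviewer: {violation}")
else:
    print(f"Proceeding with: {decision.action}")
```

The point isn’t these particular rules; it’s that a value system your AI can actually follow has to live in code, with an explicit escalation path to a human whenever a rule is violated.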

5. Engaging with legislators and establishing strong values

Innovation always precedes legislation. AI is no different. Advances in the field are moving faster than any government can respond to, which means you can’t rely on the law to set clear boundaries. When it comes to ethical considerations, you’ll have to rely on the strength of your corporate values instead.

Image: Raising responsible AI - AI brain - by Accenture

This doesn’t mean you should hold off on experimenting. On the contrary: learning requires doing. That hasn’t changed. A lack of regulatory oversight simply means you must tread carefully and be mindful of your ethical obligations. But it also represents an opportunity to engage with legislators and become a guiding force in your industry. In the UK and elsewhere, Accenture has entered into talks with governments for precisely this reason. We believe that pragmatism and ethics go hand in hand, and we are committed to adding our voice to the conversation. Your business should be part of it as well.

6. Safeguarding privacy and security with responsible AI

When you use artificial intelligence to further your business goals, you can’t afford to be in the dark as far as your data is concerned. Accidentally using another organization’s intellectual property because proprietary information got mixed into a dataset could spell disaster for your company. And using restricted information – like your private user data – could be even worse, especially when you take the GDPR into account.

That means you’ll need to find ways to keep these contaminants out, while simultaneously keeping your own sensitive data securely inside your organization. Simply put: you need to know exactly where your information is coming from and where it’s going. If you want to design an AI that consumers will be able to trust, privacy and security need to be front and center in the design process.
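As a simple illustration, a provenance gate can enforce both concerns at once: records enter the training set only when their source is on an approved list and sensitive personal fields have been stripped. The source and field names below are hypothetical – map them to your own data catalog.

```python
# A minimal sketch of a provenance gate. Source and field names are
# hypothetical; in practice, tie them to your own data catalog.
from typing import Optional

RESTRICTED_FIELDS = {"name", "email", "national_id"}    # e.g. GDPR-sensitive
APPROVED_SOURCES = {"internal_crm", "licensed_vendor"}  # known, licensed data

def admit(record: dict) -> Optional[dict]:
    """Return a sanitized copy of the record, or None if it must stay out."""
    if record.get("source") not in APPROVED_SOURCES:
        return None  # unknown provenance: never let it near the training set
    return {k: v for k, v in record.items() if k not in RESTRICTED_FIELDS}

raw_records = [
    {"source": "internal_crm", "name": "J. Doe", "income": 52000},
    {"source": "scraped_web", "income": 61000},  # provenance unknown
]
training_set = [r for r in map(admit, raw_records) if r is not None]
print(training_set)  # [{'source': 'internal_crm', 'income': 52000}]
```

A gate like this won’t make you GDPR-compliant by itself, but it makes the question of where your data came from answerable by construction.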

It’s up to you to determine the way forward

Defining, designing and building responsible AI is one of the most important challenges facing today’s businesses. These are the systems that will shape our future. It is in everybody’s best interest that they be as impartial, as fair and as unbiased as possible.

Now, we’ve discussed the major challenges as far as trust is concerned. There are other issues as well, of course. The EU, for instance, is seriously lagging behind in AI investment. Where the Chinese government invests $50 million on average in each AI startup, the EU invests less than $3 million. 

But that’s not the message we should focus on. Nor, for that matter, should we focus on how you set up your AI development process. We could tell you to make sure that the design, development, and testing phases are kept separate. That you need ironclad governance for each, that you should consider how to keep things safe, transparent, reliable and fair before writing a single line of code, and that you need to think about a solid value system for your artificial intelligence.

We could even remind you that we’ve developed the tools you need to gauge unconscious bias and instill fairness in your AI solutions. And to be fair, we just did. But that’s not the point.

Because ultimately, it’s up to you. How we deal with these ethical challenges, how we build AI that is both effective and responsible, that’s up to companies like yours. Your business is unique. The same goes for the AI you use. How will you make sure that there are proper rules to govern it? How will you make sure it’s fair? We’d love to hear your thoughts and we’re more than happy to share some of our own.