Can AI ethics boards be successful?



by Patrick Smith, Field CTO for EMEA, Pure Storage

Artificial Intelligence (AI) has the potential to deliver widespread societal impact: from finding cures for diseases to predicting crop shortages and improving business productivity, the possibilities are endless.

However, technology with such huge potential is never without risk, and the rewards and positive change that AI could deliver will only be possible if ethics is at the forefront of any programme or application.

Biased data, deepfake videos and electoral misuse are just some of the ways most of us will have heard of AI being used with malicious intent. These examples do nothing to alleviate public scepticism, and that undoubtedly affects businesses in the long term. Beyond these obvious forms of misuse, there are other risks in the day-to-day management of any AI platform or programme, including:

· Transparency: Or rather the lack of it, around what goes into AI algorithms

· Accountability: Who is responsible if AI makes a mistake?

· Discrimination: Could AI unfairly discriminate due to biased data? (A brief illustration follows this list.)

· Data privacy: For AI to develop, the volume of data needed will only increase. This calls for greater caution and awareness of how people’s data is collected, stored and used, but will this limit capabilities?
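To make the biased-data risk concrete, the sketch below shows one common first check: comparing positive-outcome rates between groups in historical training data (sometimes called the demographic parity gap). It is a minimal illustration in Python; the records, group labels and outcomes are hypothetical, not drawn from any real system.

```python
# Minimal sketch: surfacing potential bias in historical training data
# by comparing positive-outcome rates across groups (demographic parity).
# All records below are toy, hypothetical data.
from collections import defaultdict

# Toy historical decisions: (group, positive_outcome)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    positives[group] += int(outcome)

# Positive-outcome rate per group, and the gap between the groups.
rates = {g: positives[g] / totals[g] for g in sorted(totals)}
gap = abs(rates["group_a"] - rates["group_b"])

print(f"positive-outcome rates: {rates}")
print(f"demographic parity gap: {gap:.2f}")
# A large gap suggests a model trained on this data may simply
# reproduce the historical skew rather than correct it.
```

A check like this is only a starting point: a zero gap does not prove fairness, and deciding which metric matters is itself an ethical judgement, which is exactly the kind of question an ethics board exists to answer.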

Could AI ethics boards be the answer?

Many tech giants have experimented with assembling AI ethics boards, but 2019 saw some notable failures, including the closure of Google’s AI ethics board little more than a week after it was assembled.

Just as AI can only produce results as good as the data that feeds it, an ethics board is only as credible as its members, and Google faced a public backlash over the chosen membership of its council.

This led many to question whether AI ethics boards are truly the route to responsible use of AI. After all, if a global tech giant can’t get its ethics board off the ground, shouldn’t the rest of the industry take note?


If nothing else, there were lessons to be learned from the closure of Google’s AI ethics board: notably, that companies must ensure their ethics panels truly reflect a diverse society and are wholly transparent and impartial.

In contrast, there are some notable successes: organisations and assemblies that are introducing actionable guidelines and forming ethics boards that follow a well-defined set of principles to help establish ethical AI solutions.

The European Union is one example of a public body tackling AI ethics, with its High-Level Expert Group on Artificial Intelligence. In the US, meanwhile, MIT launched its new Schwarzman College of Computing, a $1 billion endeavour to create a central hub of AI research.

Although these initiatives are in their infancy, their intentions and approach appear to be on the right path.

Retrospective responsibility

As new laws and regulations come into effect, it is likely we will see organisations put under the microscope for actions taken in the past that would today be deemed unethical.

As such, companies are increasingly conscious of how today’s actions could be perceived in the future, leading to one of two outcomes:

I. Companies taking no action for fear of getting it wrong

II. Companies rushing to take action so they can say “at least we tried”

If Google’s case is anything to go by, it’s important to lay the right foundations before pursuing big ambitions.

Over-promising

Microsoft’s mantra is a prime example of how companies publicly declare their efforts: “Our aim is to facilitate computational techniques that are both innovative and ethical while drawing on the deeper context surrounding these issues from sociology, history and science and technology studies”. Is it possible for any company to make good on such promises? We will have to wait and see.

A societal, not a technological, problem

Even if we overcome some of the above issues facing companies in their quest for ethical AI, will we see a utopian state of artificial intelligence? It’s highly unlikely.



A compelling argument was made at Pure Storage’s AI ethics panel by Garry Kasparov, a vocal advocate for human rights in the technological age.

He believes that bias is a reflection of societal problems that vastly pre-date AI. He says: “AI looks at the proportions in society and it will build patterns and algorithms based on that which are not politically correct. If you don’t like what you see as an outcome, it’s because of problems that are deeply built into society.”

As with climate change, if the problem is ignored, we may only realise the impact of unethical AI at a point where there is little or no ability to influence it.

Companies, governments and individuals must see AI ethics as a duty, and create a set of principles and best practices to protect our society.

There is no hard-and-fast approach to getting this right, but transparency, a strong moral compass and a willingness to try different approaches, accepting that they might not always work, are a good starting point.

There has never been a more exciting time to be working in the field of AI and data, but as with any emerging and disruptive technology, there’s a lot of work to be done to ensure we are all using it responsibly.

A safe and ethical future requires collaboration across industries and among experts, and an absolute focus on fairness, inclusion, privacy and human rights. AI’s potential is vast, but its responsible application rests squarely on the industry’s shoulders.
