Transparency, Trust and AI



by Imam Hoque, Chief Product Officer at Quantexa

It has been estimated that artificial intelligence (AI) could contribute up to $15.7 trillion to the global economy by 2030. Technological evolution has allowed AI to take on increasingly complicated tasks, and in doing so it has fast become a part of our daily fabric.

Undoubtedly, AI helps us to work smarter and more efficiently, improving the quality of life for millions globally. However, it can raise other issues. As the technology becomes more sophisticated, it can become harder to explain the outcomes it produces – a conundrum known as the black box problem. This makes it harder to trust the decisions a black box solution makes, and the knock-on effect could be to hinder AI’s further development and uptake.

The answer to this problem is explainable and transparent AI, but what does that look like and why is it so important?

Pitfalls of AI: The Black Box Problem

Trust is integral to all forms of software with a real-world application. Increasingly important decisions like criminal sentencing, credit applications and university admissions are all being determined in some part by automated software programs. As such, it’s important that we trust the software making the choices.

For a ‘traditional’ program that does not use AI, this trust stems from its design. Guided by the deterministic rules of computer programming, most software is inherently predictable. As a result, any outcome it arrives at will be entirely explainable, with this transparency protecting against unfair decision making.

Artificial intelligence programs – being far more complex in their design – do not all fit this pattern. Contemporary AI systems seek to replicate human-like behaviour through a process called ‘deep learning’. Here, a program is fed large swathes of data so that it can ‘learn’ what the appropriate action would be in a given situation, much as humans learn from prior experience.

For example, a hypothetical AI program designed to detect cases of money laundering would be fed ‘training data’ – which could be the historical transaction records of known financial criminals. This data is then used to create a model, which is then applied to future cases.
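To make this concrete, here is a minimal, purely illustrative sketch of that training step in Python. The feature names, the tiny dataset and the choice of classifier are all invented for the example; they are not Quantexa’s method or any real institution’s data.

```python
# Illustrative sketch only: a hypothetical money-laundering classifier trained on
# labelled historical transactions. Features, labels and data are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical training data: one row per transaction, labelled from past investigations.
transactions = pd.DataFrame({
    "amount":           [120.0, 9800.0, 45.0, 9950.0, 300.0, 9990.0],
    "cross_border":     [0, 1, 0, 1, 0, 1],
    "new_counterparty": [0, 1, 0, 1, 1, 1],
    "is_laundering":    [0, 1, 0, 1, 0, 1],   # label taken from known cases
})

X = transactions.drop(columns="is_laundering")
y = transactions["is_laundering"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# The "model" the article refers to: learned automatically from the training data.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict(X_test))  # predicted flags for transactions the model has not seen
```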

However, because this procedure is completely automated, it’s incredibly difficult to determine what the program has used to create its model. With the decision-making process so opaque, it’s understandable that many are cautious about trusting the final outcomes.

This matters in the real world: how could an underwriter explain why an insurance claim was rejected if they don’t know themselves? Furthermore, if the data used in the training stage reflects human biases, there is a danger that the program will end up mirroring and reinforcing those discriminatory outcomes.


A program that is transparent in its processes is more trustworthy and less likely to make adverse decisions – but how do we get there?

Using attention mechanisms

Attention mechanisms, when visualised, allow humans to see which parts of an input a piece of software is focusing on. For example, if the input were an image of a tree, the program might highlight the leaves and trunk as the most important regions in determining that the picture was indeed of a tree.

This would allow a human analyst to provide an explanation for the program’s final output, vastly improving its transparency. By looking at the attention masks, we can not only protect against unfair AI models but also justify the ‘thinking’ of the AI program, improving overall levels of trust.
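As a rough illustration of what ‘looking at the attention mask’ could involve, the sketch below overlays a patch-level attention map on an image. The image and the attention weights are randomly generated stand-ins; in a real system the weights would come from the model’s attention layer, and the 16×16 patch layout is an assumption.

```python
# Minimal sketch of inspecting an attention mask; all values here are dummy data.
import numpy as np
import matplotlib.pyplot as plt

image = np.random.rand(224, 224, 3)   # stand-in for the tree photo
attn = np.random.rand(14, 14)         # one weight per assumed 16x16 patch
attn = attn / attn.sum()              # normalise, like a softmax output

# Upsample the patch-level weights to pixel resolution and overlay them on the image.
mask = np.kron(attn, np.ones((16, 16)))   # 14 * 16 = 224, matches the image size
plt.imshow(image)
plt.imshow(mask, cmap="jet", alpha=0.4)   # bright regions = where the model "looked"
plt.axis("off")
plt.savefig("attention_overlay.png")
```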

Modification of model inputs

At the most fundamental level, AI decision making has three parts: an input, an algorithmic model and the resulting output. The input is fed into the machine, it is referenced against the model, and an output is produced.

This method essentially reverse engineers that process to infer how the software came to its decision. Specifically, we modify the input and then check whether the output changes. To use the tree example again, if we fed a program the same image of the tree but blocked out the leaves, there would be a good chance it would instead be recognised as a log.

If changing part of an image or blacking out a word changes the final output of a program, then that element is very likely important to its decision. Again, this means a human interpreter can justify the actions of the program, building trust.
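A minimal sketch of this occlusion-style test is shown below. It assumes a generic classifier exposing a `predict_proba`-style call that returns class probabilities for a batch of images; the function name, patch size and fill value are all illustrative rather than any specific product’s API.

```python
# Sketch of the input-modification idea: occlude one region at a time and record how
# much the model's top score changes. `model` is assumed to accept a batch of images.
import numpy as np

def occlusion_importance(model, image, patch=32, fill=0.0):
    """Return a grid showing how much each occluded patch shifts the top-class score."""
    base = model.predict_proba(image[np.newaxis])[0].max()
    h, w = image.shape[:2]
    rows, cols = (h + patch - 1) // patch, (w + patch - 1) // patch
    heatmap = np.zeros((rows, cols))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill      # e.g. black out the leaves
            score = model.predict_proba(occluded[np.newaxis])[0].max()
            heatmap[i // patch, j // patch] = base - score  # big drop = important region
    return heatmap
```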

Improving the quality of input data and testing

One of the most effective ways to improve trust is to ensure that the data being fed into an AI program is of high quality. This could mean that, within certain parameters, the dataset is as varied as possible. For example, if we had a program designed to pick out suspicious financial activity from a bank’s internal data, it would be important to base the input data on the judgements of several different compliance officers. If only a single perspective were used, the machine would reflect that person’s subjective viewpoint, and there’s a chance that some criminals could slip through the machine’s grasp.

By auditing datasets to ensure they are of high quality, we can help protect against discriminatory outcomes and improve trust in AI systems. Additionally, it is crucial to provide guidance to the AI learning process, much as parents set boundaries when teaching their children.
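One simple, hypothetical audit of this kind is to check how far different reviewers agree before their labels are used for training. The sketch below uses invented labels from three imaginary compliance officers and scikit-learn’s Cohen’s kappa score to flag low agreement.

```python
# Illustrative audit step: compare labels from several reviewers before training.
# Officer names and labels are invented; cohen_kappa_score comes from scikit-learn.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

labels = {
    "officer_a": [1, 0, 1, 1, 0, 0, 1, 0],
    "officer_b": [1, 0, 1, 0, 0, 0, 1, 0],
    "officer_c": [1, 1, 1, 1, 0, 1, 1, 0],
}

# Low agreement between reviewers is a warning that the training data encodes one
# person's subjective view rather than a shared standard.
for a, b in combinations(labels, 2):
    kappa = cohen_kappa_score(labels[a], labels[b])
    print(f"{a} vs {b}: kappa = {kappa:.2f}")
```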

AI can analyse large volumes of data, making accurate decisions at a rate unthinkable for a human. By improving the transparency of the software, it becomes easier for businesses to understand. When seeking out an AI solution, businesses should ensure the software is ‘white box’ in nature.

By making sure that the software is designed with transparency in mind, they’ll be able to benefit from the speed, accuracy and recall of automated decision making while remaining able to justify the resulting outcomes. Only by doing this will businesses be able to utilise AI-driven software and see its long-term benefits.
