Have you ever bought something from Amazon? Used Google Maps to get to your favourite restaurant? Watched a movie Netflix recommended? If so, you have interacted with Artificial Intelligence (AI). Yes, that’s right: AI isn’t just robots and lousy chatbots.
Simply put, AI is a ‘technology that exhibits human-like intelligence’. Humans can see, hear, read, learn, and solve problems, among many other things. All of these abilities are part of our human intelligence. When a computer mimics them, it is being artificially intelligent.
What is often lost, though, is that just because something is artificial does not mean it is better - only that it is not natural or human.
Just like humans, though, AI is not perfect. Think back to the Netflix recommendations mentioned earlier: there are undoubtedly a few movies in there that you wouldn’t waste your time or your popcorn on. Yet AI is becoming more and more intertwined with our lives and with the technology we use and create. Given that it can be wrong (just like a human), how do we know when to trust it?
Can we trust AI to get it right when it really matters? For example, can we trust it to make decisions that could impact the health services available to us?
Human Behaviour-Change Project
As part of our work on the Human Behaviour-Change Project (HBCP), we are looking at ways that AI can help change more than just your movie nights.
We have been using AI to help understand behaviour change research. The HBCP is a Wellcome Trust-funded collaboration between University College London, the University of Aberdeen, the University of Cambridge and IBM, led in Aberdeen by Emeritus Professor of Health Psychology, Marie Johnston.
Not to downplay the importance of film recommendations, but we are interested in how AI can be used to help us better understand issues around behaviour change, like how best to encourage someone to stop smoking. We want to know more about what the evidence suggests will work for some individuals, and why.
Currently there is so much research out there that it would be nearly impossible to read all the evidence and keep up with new research as it happens. That’s where AI can help. Computers are a lot quicker than us at ‘reading’, so we have been training them to read academic research papers in the same way we do. This will help us spend less time and money on research that has already been done, and will also help us evaluate and understand the existing research better and more quickly.
How can we trust AI?
Alongside developing this AI system, we recognise that it is important to explore how we can be critical of AI. To help us understand what it means to trust AI, we worked with members of the public to develop a toolkit which can be used to critically question the ways AI might be used in public health. If we are making a system that can influence decisions made about our health – it’s important that we have the tools to know if we can trust it.
Over the course of six workshops, we explored what influences individuals’ trust in AI. One common theme was that AI can be really complicated, which makes it hard to know where to start when trying to learn more.
Over a slice of pizza one night, we came up with a slightly ‘cheesy’ analogy…
Understanding AI (using pizza)
Before we could talk to members of the public about AI, we had to make sure that we were all on the same page and had a basic understanding of what AI is. This is when we realised the best way to explain how AI works might be to talk about it in terms of pizza - after all, who doesn’t know and like pizza?
In order to know what pizza we like, we need to try lots of different pizzas with lots of different types of toppings and crusts. Once we have eaten enough pizza, we might have an idea of what we like.
Let’s say we like a thin crust, tomatoes, mushrooms and lots of cheese. Based on that, we can predict whether we are going to like a new pizza. We might say: “If a pizza has lots of cheese, tomatoes and mushrooms on it, then I will like it.” However, when our new pizza arrives with all our favourite ingredients, it also has pineapple on it! We try it, and it turns out we don’t like pineapple on our pizza. So we need to update our idea of what we like: “I like pizza with a thin crust, tomatoes, mushrooms, lots of cheese and NO pineapple.” In future, when there’s pineapple, we know we won’t like that pizza. We repeat that process with other toppings, and with each experience we learn and can update our predictions.
How does that apply to AI? Well, it’s almost the same, but instead of toppings, machines look at images, text or sounds. Collectively, this is known as data. When they get new data (like a pizza arriving with pineapple on it), they can update their ideas and predictions.
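For readers curious what that update loop looks like in practice, here is a toy sketch in Python. This is not the HBCP system or any real AI, and all the names in it are made up for illustration; it just mirrors the pizza story: predict, taste, update, predict again.

```python
# Toy "pizza learner": predict we'll like a pizza unless it contains
# a topping we already know we dislike, and update after each taste.

def predict(disliked, toppings):
    """Return True if no topping is on the known-disliked list."""
    return not (set(toppings) & disliked)

def update(liked, disliked, toppings, enjoyed):
    """Record the toppings we just tried as liked or disliked."""
    for topping in toppings:
        (liked if enjoyed else disliked).add(topping)

liked, disliked = set(), set()

# First pizza: cheese, tomatoes, mushrooms -- we enjoy it.
update(liked, disliked, ["cheese", "tomatoes", "mushrooms"], enjoyed=True)

# A new pizza arrives with pineapple; our current rule predicts we'll like it.
print(predict(disliked, ["cheese", "tomatoes", "pineapple"]))  # True

# We try it and learn that pineapple is the problem -- update our idea.
update(liked, disliked, ["pineapple"], enjoyed=False)

# Next time, the prediction reflects what we learned.
print(predict(disliked, ["cheese", "tomatoes", "pineapple"]))  # False
```

Real machine learning systems use far richer models than a list of disliked toppings, but the basic rhythm - make a prediction, get new data, revise - is the same.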
The group realised that there is another similarity between pizza and AI: to be good, it needs to be made from quality ingredients. The ingredients of AI are data, and data can be too old, incomplete (a pizza without cheese), or the wrong type (like a pizza made only with vegetables…).
In the real world, issues with the data that AI is trained on can lead to it being biased against certain groups of people. This is something that we as a group discussed and found concerning.
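To make the data-quality worry concrete, here is a hypothetical sketch (again, not the HBCP system, and the groups and outcomes are invented) of a very simple "majority vote" model trained on data that under-represents one group:

```python
# Toy model: for each group, predict the most common outcome seen in training.
from collections import Counter

def train(records):
    """Learn the most frequent outcome per group from (group, outcome) pairs."""
    votes = {}
    for group, outcome in records:
        votes.setdefault(group, Counter())[outcome] += 1
    return {group: counts.most_common(1)[0][0]
            for group, counts in votes.items()}

# Imbalanced training data: 50 examples for group A, a single one for group B.
data = [("A", "treatment works")] * 50 + [("B", "treatment fails")]

model = train(data)
print(model["A"])  # rests on 50 examples
print(model["B"])  # rests on a single data point, yet looks just as confident
```

The model happily produces a prediction for group B, but it is built on one data point - exactly the kind of gap you would want to uncover by asking where the "ingredients" came from.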
What to do next?
After understanding what AI actually is and how it can go wrong, we considered the different types of questions we might want to ask about a system that uses AI. We wanted to know the equivalent of asking where the ingredients of our pizza came from. Together with a graphic designer, we turned all the concerns we discussed into a toolkit of resources.
The aim is that this toolkit can be used to support other people in their understanding of AI. Understanding AI even just a little better might help us better assess when (and when not) to trust it. As well as a ‘roadmap’ that outlines the steps you might want to take to understand an AI system, the toolkit provides a series of questions about the different things you might want to know about the data that was used to train the system. These resources can help us decide if we want other people to use an AI system to make decisions that might affect us. They are tools to help decide if we want to trust it.
Just because AI is complicated doesn’t mean we shouldn’t try to understand it. These resources are an important step in giving us the tools to be critical. It also helps that we can discuss our favourite pizza along the way!
About the authors:
Professor Marie Johnston, Emeritus Professor at the University of Aberdeen and a founding figure in UK Health Psychology.
Eva Jermutus works on the Human Behaviour-Change Project at UCL as a PhD student and investigates trust in healthcare AI.
Ella Howes works on the Human Behaviour-Change Project at UCL as a research assistant in translational behaviour science.