Universities receive £1.1m for AI accountability project

The Universities of Aberdeen, Oxford and Cambridge have been awarded £1.1 million to develop auditing systems akin to 'black box' flight recorders for artificial intelligence (AI) systems.

The award, from the Engineering and Physical Sciences Research Council (EPSRC), will fund the Realising Accountable Intelligent Systems (RAInS) project, a research collaboration between the three universities.  

Working with the public, the legal profession and technology companies, the project will aim to develop prototype solutions to allow developers to provide secure, tamper-proof records of intelligent systems’ characteristics and behaviours.

These records can be shared with relevant authorities and further analysed in the event of incidents or complaints, ensuring that future AI systems are transparent and accountable.

Professor Pete Edwards, from the University of Aberdeen, leads the multi-disciplinary team working on the project, along with Professor Rebecca Williams from the University of Oxford and Dr Jat Singh from the University of Cambridge.

Professor Edwards said: “AI technologies are being utilised in more and more scenarios including autonomous vehicles, smart home appliances, public services, retail and manufacturing. But what happens when such systems fail, as in the case of recent high-profile accidents involving autonomous vehicles?

“How can we hold systems and developers to account if they are found to be making biased or unfair decisions?  These are all real and timely challenges, given that AIs will increasingly affect many aspects of everyday life.”

The RAInS project aims to develop solutions that will support auditing of AI systems, ensuring a level of accountability.

Dr Singh said: “Our work will increase the transparency of AI systems not only after the fact, but also in a manner which allows for early interrogation and audit, which in turn may help prevent or mitigate harm.”

Professor Williams added: “From a legal perspective the transparency and accountability of these systems is vital and is inherent in any concept we might have of fairness.  The law can only regulate and control what it can see.

“Ultimately our ambition is to create a means by which the developer of an intelligent system can provide a secure, tamper-proof record of the system's characteristics and behaviours that can be shared - under controlled circumstances - with relevant authorities in the event of an incident or complaint.”

Author: Wendy Davidson
