Black Box Artificial Intelligence and the Right to a Fair Trial: Challenges and Solution



Artificial intelligence (AI) brings great benefits to all sectors of society and strengthens progress, social well-being and economic competitiveness through the automation of human activities. At the same time, however, it poses risks to a variety of human rights and fundamental freedoms, whether due to intrinsic technological processes, human input or abusive or malicious use in practice. An erroneous facial recognition system at an airport might lead to wrongful detention, impacting the right to liberty; a biased AI used in content moderation might interfere with freedom of expression; an Internet of Things appliance, such as a voice-controlled home shopping device, might be hacked and turned into an eavesdropping device, violating the right to privacy. One of the basic human rights placed at stake, notably by black box AI across various technologies, is the right to a fair trial. Non-transparent and unexplainable AI models cannot provide the information on which an AI decision was based or clarify the algorithms used. This affects the right of potential victims to defend themselves against the AI decision and to contest the arguments and evidence presented by the other party. For example, if a predictive justice system estimates a high risk of recidivism for a defendant, the denial of access to such evidence thwarts the effective and efficient exercise of the individual’s right to a fair trial. Similarly, if a biased AI-based financial service denies a loan to a claimant, the absence of access to the information supporting that decision undermines the person’s possible defence.

Is a ban on black box AI systems the solution? How can access to the information underlying an AI decision be ensured? Are all black box AI systems incompatible with human rights?

The research seminar will address these questions and open a discussion.

***

Dr. Martina Šmuclerová is a practitioner and academic in Public International Law with expertise in international law and new technologies (space law, artificial intelligence, cyber law), the law of international organizations, the law of international security, and human rights.

She received her PhD and MA in Public International Law from the Sorbonne University in Paris, France. She has been a Senior Lecturer at the Institut d’études politiques de Paris (Sciences Po) since 2011, where she teaches various courses in Public International Law. She is also a Research Fellow in Public International Law at Ambis University in Prague, where she leads the interdisciplinary grant research project “Artificial Intelligence and Human Rights: Risks, Opportunities and Regulation”. From 2012 to 2019, Dr. Šmuclerová served as an international law adviser and diplomat at the Ministry of Foreign Affairs of the Czech Republic and as a representative to the United Nations, the EU and other international fora. She initiated and led international law projects such as the UN Space Debris Compendium and the UN HRC Universal Periodic Review best practice exchange. She is a recipient of French Government scholarships and other awards.

Speaker
Martina Šmuclerová
Hosted by
School of Law
Venue
Hybrid event
Contact

For online access, please email georgi.chichkov@abdn.ac.uk