- Email Address: a.gado.25@abdn.ac.uk
- School/Department: School of Natural and Computing Sciences
Biography
Arpita is a PhD student at the University of Aberdeen's Interdisciplinary Institute. Her academic journey began with a B.E. in Computer Science and Engineering, during which she grew interested in Data Science and Machine Learning and in applying these technologies to the idea of Technology for Good. This interest led her to pursue an MSc in Data Science at King's College London, where her work included developing predictive models on real-world datasets such as the Santander bike-share system.
Her current doctoral research explores the long-term, socio-technical impacts of Artificial Intelligence (AI) in healthcare. She is particularly interested in designing and improving provenance-based systems that keep human-in-the-loop (HITL) oversight effective. This focus addresses a key challenge: preventing automation bias and skill erosion among clinicians who interact with AI, so that the "human" in HITL systems remains capable and engaged. Her work aims to keep these intelligent systems safe, responsible, and genuinely beneficial within complex healthcare settings.
Outside her studies, Arpita enjoys volunteering with NGOs. She teaches computer science to children from underprivileged communities and works to raise awareness about the importance of education in these areas.
Qualifications
- MSc Data Science, 2024 - King's College London
- B.E. Computer Science and Engineering, 2023 - Visvesvaraya Technological University
Research Overview
My PhD research sits at the intersection of technology, law, and healthcare, exploring the socio-technical, long-term impacts of Artificial Intelligence (AI) within healthcare systems. A central focus of my work is understanding how prolonged AI use affects clinicians and their skills.
Laws and regulations increasingly require human oversight wherever AI is deployed in healthcare. However, automation bias and potential skill erosion among clinicians can undermine the effectiveness of these "human-in-the-loop" systems: if clinicians rely too heavily on AI, their critical judgment may diminish, putting at risk the very oversight meant to ensure patient safety.
My aim is to design and evaluate monitoring systems that can detect early signs of skill erosion or automation bias. By identifying these subtle changes, my research seeks to enable timely interventions, ensuring that human oversight remains effective and AI integration truly benefits healthcare without unintended consequences.