Safety in conversational AI: Meaning is massively influenced by context



While the NLP community has traditionally explored the ethical issues of text-based models (such as hate speech detection, inherent biases of the system, etc.), real-world conversations and dialogues differ significantly from structured, written documents, and this brings its own unique set of safety challenges. From an understanding perspective, I will present research on how robust such models are to input transcripts arising from dialogues, given that they are pre-trained on massive amounts of written text. I will also present work on contexts where models must be robust to variability, and on what steps can be taken to provide such guarantees. Additionally, in real-world interactions, uniquely human ways of communicating may be co-opted by designers of these systems to drive up user engagement. From a generation perspective, I will present research on anthropomorphism, i.e. the implications of encouraging humans to relate to such systems in human-like ways.

Speaker
Tanvi Dinkar
Venue
Meston G05