Research in multi-agent systems has a long history of borrowing social concepts from human society and developing computational counterparts, with the aim of promoting orderly social interactions in societies of autonomous agents. As human communication and coordination become increasingly mediated by software, the question arises of whether we can apply these techniques to create socially aware agents that help us maintain awareness of our social state and advise us on the social consequences of our actions, both actual and hypothetical.
Furthermore, in the last few years it has been observed that large language models (LLMs) can be used to generate socially realistic human behaviour. This has motivated me to explore how well LLMs can reason about norms and trust using representations in natural language, with its advantages of generality and convenience for human communication, but also its potential for ambiguity. I will present some preliminary results on LLM-based reasoning about social norms and trust.
- Speaker: Stephen Cranefield
- Venue: Meston G05