Intention Progression in Multi-Agent Settings

This is a past event

A key problem for rational agents is 'what to do next': which goal the agent should be trying to achieve, and which means it should use to achieve it. In the Belief-Desire-Intention approach to agents, this is termed the 'intention progression problem', and it has largely been studied in a single-agent setting. In this talk, I will briefly present some recent work on techniques for progressing the intentions of agents in a multi-agent setting, where each agent is aware (or partially aware) of the intentions of other agents. The approach uses online learning (Monte Carlo Tree Search) to infer the likely actions of other agents, and can be applied in cooperative, neutral (selfish) and adversarial settings.

Prof Brian Logan
Meston 2 and MS Teams

Contact Ehud Reiter for more information