CS Seminar: Evaluating Natural Language Generation

A key challenge in Natural Language Generation (NLG) is evaluating the quality of texts produced by an NLG system. Robust and reliable evaluation is essential for determining whether a new NLG model or system advances the state of the art, understanding the weaknesses of current NLG systems, and giving users a clear idea of what they can expect from an NLG system.

In this talk, I will summarise research at Aberdeen on evaluating NLG, focusing on:

  • Techniques for evaluating the accuracy of generated texts
  • Techniques for evaluating the usefulness of generated texts in real-world settings
  • Our new EPSRC project (not yet started) on making evaluations more reproducible

Speaker: Ehud Reiter
Location: Meston 4 (physical) and MS Teams (virtual)

Please contact Ehud Reiter for more information.