This is a past event
Internal Event
As part of our Interdisciplinary Human-Centred AI network and Turing University Network activities, we invite you to participate in a meeting focusing on Human-Centred AI research and its implications across disciplines (including Engineering, Psychology, Law, Health Sciences, Language, Literature, Music, Computing Sciences, Business, Visual Culture and Philosophy).
This meeting will focus on Humans and Large Language Models, specifically on understanding Language Model capabilities (and their limitations) in capturing multiple languages, creative language use in minority languages, and human patterns of language comprehension. Identifying and discussing LM use in these contexts has implications for how the models can be used and for how their widespread use will affect individuals and societies. We warmly invite staff and students to come with interdisciplinary questions and look forward to an insightful discussion. Tea and coffee will be provided.
Talks and Speakers:
- 'LLMs and low-resource languages' - Professor Eneko Agirre, Professor of Informatics and Head of HiTZ Basque Center of Language Technology at the University of the Basque Country, UPV/EHU, in San Sebastian, Spain
- ‘Scots… ish? Generating new creative content in Scots using ChatGPT 4.0’ - Dr Dawn Leslie, Lecturer in Scottish Language & Linguistics, University of Aberdeen
- ‘Are Language Models good models of human linguistic behaviour? An example from Structural Priming’ - Dr Arabella Sinclair, Lecturer in Computing Science, University of Aberdeen
Professor Eneko Agirre, Professor of Informatics and Head of HiTZ Basque Center of Language Technology at the University of the Basque Country, UPV/EHU, in San Sebastian, Spain
'LLMs and low-resource languages'
Generative AI models are now multilingual, raising new questions about their relative performance across languages and local cultures, especially for communities with fewer speakers. In this talk I will explore some of those questions and the lessons we learned along the way. Is it possible to build high-performing LLMs for low-resource languages? We have built a high-performing open model for Basque, accompanied by a fully reproducible end-to-end evaluation suite. Do LLMs think better in English than in the local language? Our experiments show that LLMs do not fully exploit their multilingual potential when prompted in non-English languages. Do LLMs know about local culture? We probed the complex interaction between language and global/local knowledge, showing for the first time that local knowledge is transferred from the low-resource to the high-resource language, a sign that prior findings may not hold when evaluated on local topics. The evaluation suite was recognised with a best resource paper award at ACL 2024.
Biography: Eneko Agirre is Full Professor of Informatics and Head of the HiTZ Basque Center of Language Technology at the University of the Basque Country, UPV/EHU, in San Sebastian, Spain. He has been a visiting researcher or professor at New Mexico State, Melbourne, Southern California, Stanford and New York Universities, and has been active in Natural Language Processing and Computational Linguistics since his undergraduate days. He received the Spanish Informatics Research Award in 2021 and is one of the 74 fellows of the Association for Computational Linguistics (ACL). He was President of the ACL's SIGLEX, a member of the editorial boards of Computational Linguistics and the Journal of Artificial Intelligence Research, and an Action Editor for the Transactions of the ACL. He is co-founder of the Joint Conference on Lexical and Computational Semantics (*SEM), and a recipient of three Google Research Awards and six best paper awards and nominations, most recently at ACL 2024. Dissertations under his supervision received best PhD awards from EurAI, the Spanish NLP society and the Spanish Informatics Scientific Association. He has over 200 publications across a wide range of NLP and AI topics and has given more than 20 invited talks, mostly international.
Dr Dawn Leslie - Lecturer in Scottish Language & Linguistics, University of Aberdeen
‘Scots… ish? Generating new creative content in Scots using ChatGPT 4.0’
In this talk I will present the preliminary findings of a research project studying the accuracy and authenticity of Scots creative content generated by ChatGPT 4.0. Scots is a minority language spoken in Scotland and Ulster which has close linguistic kinship with English and is often mischaracterised as a ‘dialect’ of its linguistic sibling. This study compares a corpus of ‘real life’ North-East Scots (Doric) poetry with a corpus of generative AI poetry created using ChatGPT 4.0. Through analysing the linguistic features present in each text, it can be observed that the ChatGPT 4.0-generated Scots content is highly anglicised in comparison to authentic works from contemporary North-East Scots poets: anglicised representations of words, a general lack of lexical diversity, and the presence of non-contemporary orthographic features. I will present these findings over the course of the talk and explore their implications for Scots as a ‘low-resource’ language in this age of technological advances.
Biography: Dr Dawn Leslie is a Lecturer in Scottish Language & Linguistics at the University of Aberdeen. Her research focuses mainly on the Scots language, exploring attitudes towards language variation and change, as well as investigating broader minority language issues surrounding the protection and promotion of marginalised varieties.
Dr Arabella Sinclair - Lecturer in Computing Science, University of Aberdeen
‘Are Language Models good models of human linguistic behaviour? An example from Structural Priming’
Language Models (LMs), like humans, are exposed to examples of language and learn to comprehend and produce language from these examples. Under certain assumptions about the context in which these examples are processed (e.g. that LMs only observe written or transcribed language, with no additional modalities and no differentiation across language producers), LMs can serve as cognitive models of human language processing, able to predict comprehension and production behaviour. Structural Priming is one such paradigm for evaluating comprehension and production behaviour in humans, whereby listeners more readily comprehend a target sentence after recent exposure to a prime sentence of the same structure. This talk will present work investigating structural priming in LMs and compare LM and human behaviour when primed. I will first present work which finds evidence of structural priming behaviour in a large suite of LMs (Sinclair et al. 2022). In follow-up work we explore what factors predict their priming behaviour and whether these factors are similar to those predicting human priming (Jumelet et al. 2024). I will end by discussing some recent work which directly compares LM and human responses to the same stimuli. These findings have implications for understanding the in-context learning capabilities of LMs and the extent to which LMs can serve as models of comprehension or production, as well as highlighting macro- vs micro-level behavioural differences between LM and human responses to the same stimuli.
Biography: Arabella Sinclair is a Lecturer in the Department of Computing Science at the University of Aberdeen. Her research centres on understanding and modelling human linguistic behaviour and falls at the intersection of AI, NLP, Computational Linguistics and Cognitive Science. Particular topics of interest are Dialogue Modelling, Education, and Creativity. She has analysed patterns of human behaviour using computational techniques since her undergraduate thesis at the University of Aberdeen, and has specialised in Natural Language Processing and AI since her MPhil in the Department of Computer Science and Technology at the University of Cambridge. She received her PhD in Computational Linguistics from the School of Informatics at the University of Edinburgh, and worked as a postdoctoral researcher in the Dialogue Modelling Group at the Institute for Logic, Language and Computation, University of Amsterdam.
About the Network: The interdisciplinary Human-Centred AI (HCAI) network involves a wide range of colleagues from across the University of Aberdeen who share an interest in the intersection of AI technologies and the roles played by humans in their development, as decision-makers, end-users, affected parties, collaborators, and designers. The network considers aspects related to linguistics, psychology, human creativity and culture, policy, the bias, discrimination and wider ramifications of generative AI, social and legal implications, philosophical elements of AI, AI-to-AI interactions, and more. The network’s aim is to enhance interdisciplinarity in the above areas and to help develop interdisciplinary projects and funding proposals, supporting engagement activities that will enhance the external profile of the University.
- Hosted by
- Interdisciplinary Human-Centred AI Network
- Venue
- New King's 1 (NK1)
RSVP: If you are interested in attending, please email interdisciplinary@abdn.ac.uk.