Does your LLM know what it is talking about?
This task will test and verify that a system based on a generative language model can handle material from a given topical domain of interest, by having systems automatically generate tests of domain knowledge.
Is it true? Or make-believe?
This task will test how the truthfulness or veracity of automatically generated text can be assessed.
Will it respond with the same content to all of us?
This task will test the capability of a model to handle input variation -- e.g. dialectal, sociolectal, and cross-cultural -- as represented by human-generated varieties of input prompts. Results will be assessed by examining how variation in the output is conditioned on variation across equivalent but non-identical input prompts.
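As a rough illustration of what such an assessment might involve, the sketch below scores how consistent a model's responses are across equivalent prompt variants. The pairwise string-similarity metric (difflib's `SequenceMatcher`) is an assumption for illustration only; an actual evaluation would likely use a semantic similarity measure, and the `output_consistency` helper is hypothetical, not part of the task specification.

```python
from difflib import SequenceMatcher

def output_consistency(responses: list[str]) -> float:
    """Mean pairwise similarity of a model's responses to equivalent,
    non-identical input prompts. 1.0 means all responses are identical;
    lower values indicate the output varies with the prompt variant."""
    pairs = [(a, b) for i, a in enumerate(responses) for b in responses[i + 1:]]
    if not pairs:
        return 1.0  # a single response is trivially self-consistent
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)
```

For example, collecting one response per dialectal variant of the same question and passing the list to `output_consistency` yields a single score that can be compared across models.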
Has a machine written this? Or has a human author put together these words?
This task will explore whether automatically-generated text can be distinguished from human-authored text. This task will be organised in collaboration with the PAN lab at CLEF.
The first ELOQUENT Workshop will be in Grenoble, September 9-12, 2024.
The workshop program will include overview presentations, an invited keynote, and a selection of participant presentations.
To participate in the discussion about task details, sign up to join the conversation through the CLEF registration form.
- Fall 2023: discussion and task formulation
- February 2024: tasks open and public announcement of tasks on mailing lists
- Last week of March 2024: ECIR presentation of ELOQUENT
- 22 April 2024: registration for participation closes
- May 2024: submission deadline of experimental runs from participants
- June 2024: participant report submission deadline
- July 2024: camera ready report submission deadline
- September 2024: workshop at CLEF