OffensEval 2023 : Offensive language identification in the age of Large Language Models

Zampieri, Marcos and Rosenthal, Sara and Nakov, Preslav and Dmonte, Alphaeus and Ranasinghe, Tharindu (2023) OffensEval 2023 : Offensive language identification in the age of Large Language Models. Natural Language Engineering, 29 (6). pp. 1416-1435.

Full text not available from this repository.


The OffensEval shared tasks organized as part of SemEval-2019–2020 were very popular, attracting over 1300 participating teams. The two editions of the shared task helped advance the state of the art in offensive language identification by providing the community with benchmark datasets in Arabic, Danish, English, Greek, and Turkish. The datasets were annotated using the OLID hierarchical taxonomy, which has since become the de facto standard in general offensive language identification research and has been widely used beyond OffensEval. We present a survey of OffensEval and related competitions, and we discuss the main lessons learned. We further evaluate the performance of Large Language Models (LLMs), which have recently revolutionized the field of Natural Language Processing. We use zero-shot prompting with six popular LLMs and zero-shot learning with two task-specific fine-tuned BERT models, and we compare the results against those of the top-performing teams at the OffensEval competitions. Our results show that while some LLMs such as Flan-T5 achieve competitive performance, in general LLMs lag behind the best OffensEval systems.
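The zero-shot setup described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual prompt: the prompt wording, the `build_prompt` and `parse_label` helpers, and the label-parsing heuristic are all assumptions; only the OLID Level A label set (OFF vs. NOT) comes from the task itself.

```python
def build_prompt(text: str) -> str:
    """Hypothetical zero-shot prompt asking an LLM to assign an
    OLID Level A label to a social media post (not the paper's prompt)."""
    return (
        "Classify the following social media post as OFF (offensive) "
        "or NOT (not offensive). Answer with a single label.\n\n"
        f"Post: {text}\nLabel:"
    )


def parse_label(response: str) -> str:
    """Map a free-form model response onto the OLID Level A label set.
    Heuristic: inspect only the first token of the reply."""
    first = response.strip().split()[0].upper().strip(".,:")
    return "OFF" if first.startswith("OFF") else "NOT"


# The prompt would be sent to each LLM under evaluation; the parsed
# labels could then be scored (e.g. macro-F1) against the gold OLID
# annotations, as the shared tasks did for participating systems.
prompt = build_prompt("what a lovely day")
predicted = parse_label("NOT offensive at all")
```

In practice, the response format varies across models, which is why some normalization step like `parse_label` is typically needed before computing evaluation metrics.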

Item Type: Journal Article
Journal or Publication Title: Natural Language Engineering
Deposited On: 18 Jun 2024 10:40
Last Modified: 16 Jul 2024 01:19