fBERT: A Neural Transformer for Identifying Offensive Content

Sarkar, Diptanu and Zampieri, Marcos and Ranasinghe, Tharindu and Ororbia, Alex (2021) fBERT: A Neural Transformer for Identifying Offensive Content. In: Findings of the Association for Computational Linguistics: EMNLP 2021. Association for Computational Linguistics, Dominican Republic, pp. 1792-1798. ISBN 9781955917100

Full text not available from this repository.

Abstract

Transformer-based models such as BERT, XLNet, and XLM-R have achieved state-of-the-art performance across various NLP tasks, including the identification of offensive language and hate speech, an important problem in social media. In this paper, we present fBERT, a BERT model retrained on SOLID, the largest English offensive language identification corpus available, with over 1.4 million offensive instances. We evaluate fBERT's performance on identifying offensive content on multiple English datasets, and we test several thresholds for selecting instances from SOLID. The fBERT model will be made freely available to the community.
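Since the abstract describes fBERT as a retrained BERT checkpoint intended for offensive-language identification, the sketch below shows how such a checkpoint could be loaded and fine-tuned for binary classification with the Hugging Face transformers library. This is a minimal illustration, not the authors' release script: the model identifier "your-org/fBERT" is a hypothetical placeholder, and a freshly attached classification head would still need fine-tuning on a labeled offensive-language dataset before its predictions are meaningful.

```python
# Minimal sketch: loading a retrained BERT-style checkpoint for offensive
# language classification. "your-org/fBERT" is a placeholder identifier,
# not the official model name.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "your-org/fBERT"  # hypothetical Hub ID; substitute the real one

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=2 assumes a binary offensive / not-offensive setup; the new
# classification head is randomly initialized and must be fine-tuned.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()


def classify(text: str) -> int:
    """Return 1 if the text is predicted offensive, else 0."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(torch.argmax(logits, dim=-1).item())


print(classify("Have a great day!"))
```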

Item Type: Contribution in Book/Report/Proceedings
ID Code: 221495
Deposited By:
Deposited On: 11 Nov 2024 16:35
Refereed?: Yes
Published?: Published
Last Modified: 11 Nov 2024 16:35