Conference paper Open Access
Miok, Kristian; Škrlj, Blaž; Zaharie, Daniela; Robnik-Šikonja, Marko
Hate speech is an important problem in the management of user-generated content. To remove offensive content or ban misbehaving users, content moderators need reliable hate speech detectors. Recently, deep neural networks based on the transformer architecture, such as the (multilingual) BERT model, have achieved superior performance in many natural language classification tasks, including hate speech detection. So far, these methods have not been able to quantify the reliability of their outputs. We propose a Bayesian method that uses Monte Carlo Dropout within the attention layers of transformer models to provide well-calibrated reliability estimates. We evaluate the introduced approach on hate speech detection problems in several languages. Our approach not only improves the classification performance of the state-of-the-art multilingual BERT model, but the computed reliability scores also significantly reduce the workload of inspecting offending cases and of reannotation campaigns.
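The core idea described in the abstract, Monte Carlo Dropout applied at inference time, can be illustrated with a short sketch. This is not the authors' code: it assumes a Hugging Face `BertForSequenceClassification`-style model, and the helper name `mc_dropout_predict` and the sample count are hypothetical. The sketch keeps dropout layers (including those inside the attention blocks) active during prediction and averages class probabilities over several stochastic forward passes, using their spread as a rough reliability signal.

```python
import torch

def mc_dropout_predict(model, inputs, n_samples=20):
    """Hypothetical MC Dropout inference helper (a sketch, not the paper's code).

    Keeps dropout active at test time and averages class probabilities over
    n_samples stochastic forward passes; the standard deviation across passes
    serves as a simple uncertainty/reliability signal.
    """
    model.eval()
    # Re-enable every dropout module, including those inside attention layers.
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()
    samples = []
    with torch.no_grad():
        for _ in range(n_samples):
            logits = model(**inputs).logits  # assumes a transformers classifier
            samples.append(torch.softmax(logits, dim=-1))
    samples = torch.stack(samples)            # (n_samples, batch, num_classes)
    return samples.mean(dim=0), samples.std(dim=0)
```

In use, `inputs` would be a tokenized batch (e.g. from a `transformers` tokenizer); predictions with a high standard deviation can then be routed to human moderators or flagged for reannotation.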
| Name | Size |
|---|---|
| ICML_UDL_2020.pdf (md5:a4e514858849c5cfdde83e31b365c240) | 592.5 kB |
| | All versions | This version |
|---|---|---|
| Views | 183 | 165 |
| Downloads | 145 | 119 |
| Data volume | 85.6 MB | 70.5 MB |
| Unique views | 163 | 152 |
| Unique downloads | 133 | 110 |