Dataset Restricted Access

Profiling Hate Speech Spreaders on Twitter


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>FRANCISCO RANGEL</dc:creator>
  <dc:creator>BERTA CHULVI</dc:creator>
  <dc:creator>GRETEL LIZ DE LA PEÑA</dc:creator>
  <dc:creator>ELISABETTA FERSINI</dc:creator>
  <dc:creator>PAOLO ROSSO</dc:creator>

Hate speech (HS) is commonly defined as any communication that disparages a person or a group on the basis of some characteristic such as race, colour, ethnicity, gender, sexual orientation, nationality, or religion. Given the huge amount of user-generated content on Twitter, the problem of detecting, and thereby possibly countering, the diffusion of HS is becoming fundamental, for instance in the fight against misogyny and xenophobia. To this end, in this task we aim at identifying possible hate speech spreaders on Twitter as a first step towards preventing hate speech from being propagated among online users.

After having addressed several aspects of author profiling in social media from 2013 to 2020 (fake news spreaders, bot detection, age and gender, also together with personality, gender and language variety, and gender from a multimodality perspective), this year we aim at investigating whether it is possible to discriminate authors that have shared some hate speech in the past from those that, to the best of our knowledge, have never done so.

As in previous years, we propose the task from a multilingual perspective:

	English
	Spanish

NOTE: Although we recommend participating in both languages (English and Spanish), it is possible to address the problem just for one language.


We are happy to announce that the best-performing team at the 9th International Competition on Author Profiling will be awarded 300 Euro, sponsored by Symanto.



The uncompressed dataset consists of a folder per language (en, es). Each folder contains:

	An XML file per author (Twitter user) with 100 tweets. The name of the XML file corresponds to the unique author id.
	A truth.txt file with the list of authors and the ground truth.

The format of the XML files is:

    <author lang="en">
            <document>Tweet 1 textual contents</document>
            <document>Tweet 2 textual contents</document>
            ...
    </author>

The format of the truth.txt file is as follows. The first column corresponds to the author id. The second column contains the truth label.
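Assuming the layout described above, a minimal Python loader could look like the sketch below. Note that the column separator in truth.txt is not specified here; the `:::` separator used in the sketch is an assumption carried over from earlier PAN editions.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def load_dataset(lang_dir):
    """Load one language folder into ({author_id: [tweets]}, {author_id: label}).

    Assumes one <author-id>.xml file per author plus a truth.txt with one
    "author-id:::label" line per author (':::' is an assumed separator).
    """
    lang_dir = Path(lang_dir)
    tweets, labels = {}, {}
    for xml_file in lang_dir.glob("*.xml"):
        root = ET.parse(xml_file).getroot()           # <author lang="...">
        docs = [d.text or "" for d in root.iter("document")]
        tweets[xml_file.stem] = docs                  # file name == author id
    for line in (lang_dir / "truth.txt").read_text().splitlines():
        if line.strip():
            author_id, label = line.split(":::")
            labels[author_id] = label
    return tweets, labels
```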



Your software must take as input the absolute path to an unpacked dataset, and must output, for each author of the dataset, a corresponding XML file that looks like this:

    <author id="author-id"
            lang="en|es"
            type="0|1"
    />

The naming of the output files is up to you. However, we recommend using the author id as the filename and "xml" as the extension.

IMPORTANT! Languages must not be mixed. Create a folder for each language and place inside it only the prediction files for that language.
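A writer for these per-language prediction files can be sketched as follows; carrying the predicted label in a `type` attribute matches the output format shown above, though the attribute order and label encoding here are illustrative.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def write_prediction(out_dir, author_id, lang, label):
    """Write one prediction to <out_dir>/<lang>/<author_id>.xml.

    Creates one sub-folder per language so that predictions for
    different languages are never mixed, as required above.
    """
    lang_dir = Path(out_dir) / lang
    lang_dir.mkdir(parents=True, exist_ok=True)
    el = ET.Element("author", id=author_id, lang=lang, type=str(label))
    ET.ElementTree(el).write(lang_dir / f"{author_id}.xml", encoding="utf-8")
```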


The performance of your system will be ranked by accuracy. For each language, we will calculate individual accuracies in discriminating between the two classes. Finally, we will average the accuracy values per language to obtain the final ranking.
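The ranking score described above (per-language accuracy, then the average across languages) can be computed as in this small sketch; the input structure, a mapping from language code to (predicted, true) label pairs, is illustrative.

```python
def ranking_score(per_language_predictions):
    """Mean of per-language accuracies, as used for the final ranking.

    `per_language_predictions` maps a language code (e.g. "en", "es")
    to a list of (predicted_label, true_label) pairs.
    """
    accuracies = []
    for lang, pairs in per_language_predictions.items():
        correct = sum(1 for pred, true in pairs if pred == true)
        accuracies.append(correct / len(pairs))
    return sum(accuracies) / len(accuracies)
```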

Related Work

	[1] Valerio Basile, Cristina Bosco, Elisabetta Fersini, Dora Nozza, Viviana Patti, Francisco Rangel, Paolo Rosso, Manuela Sanguinetti (2019). SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. Proc. SemEval 2019
	[2] Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, Viviana Patti (2020). Resources and benchmark corpora for hate speech detection: a systematic review. Language Resources & Evaluation.
	[3] Paula Fortuna, Sérgio Nunes (2018). A survey on automatic detection of hate speech in text. ACM Computing Surveys (CSUR) 51.4
	[4] Maria Anzovino, Elisabetta Fersini, Paolo Rosso (2018). Automatic Identification and Classification of Misogynistic Language on Twitter. In: Proc. 23rd Int. Conf. on Applications of Natural Language to Information Systems, NLDB-2018, Springer-Verlag, LNCS(10859), pp. 57-64
	[5] Elisabetta Fersini, Paolo Rosso, Maria Anzovino (2018). Overview of the task on automatic misogyny identification at IberEval 2018. Proc. IberEval 2018
	[6] Elisabetta Fersini, Dora Nozza, Paolo Rosso (2018). Overview of the Evalita 2018 task on automatic misogyny identification (AMI). Proc. EVALITA 2018
	[7] Cristina Bosco, Felice Dell'Orletta, Fabio Poletto, Manuela Sanguinetti, Maurizio Tesconi (2018). Overview of the EVALITA 2018 hate speech detection task. Proc. EVALITA 2018
	[8] Samuel Caetano da Silva, Thiago Castro Ferreira, Ricelli Moreira Silva Ramos, Ivandre Paraboni (2020). Data-driven and psycholinguistics motivated approaches to hate speech detection. Computación y Sistemas, 24(3): 1179–1188
	[9] Steven Zimmerman, Udo Kruschwitz, Chris Fox (2018). Improving hate speech detection with deep learning ensembles. In Proc. of the Eleventh Int. Conf. on Language Resources and Evaluation (LREC 2018)
	[10] Francisco Rangel, Anastasia Giachanou, Bilal Ghanem, Paolo Rosso. Overview of the 8th Author Profiling Task at PAN 2020: Profiling Fake News Spreaders on Twitter. In: L. Cappellato, C. Eickhoff, N. Ferro, and A. Névéol (eds.) CLEF 2020 Labs and Workshops, Notebook Papers. CEUR Workshop, vol. 2696
	[11] Francisco Rangel and Paolo Rosso. Overview of the 7th Author Profiling Task at PAN 2019: Bots and Gender Profiling in Twitter. In: L. Cappellato, N. Ferro, D. E. Losada and H. Müller (eds.) CLEF 2019 Labs and Workshops, Notebook Papers. CEUR Workshop, vol. 2380
	[12] Francisco Rangel, Paolo Rosso, Martin Potthast, Benno Stein. Overview of the 6th Author Profiling Task at PAN 2018: Multimodal Gender Identification in Twitter. In: CLEF 2018 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings, vol. 2125
	[13] Francisco Rangel, Paolo Rosso, Martin Potthast, Benno Stein. Overview of the 5th Author Profiling Task at PAN 2017: Gender and Language Variety Identification in Twitter. In: Cappellato L., Ferro N., Goeuriot L., Mandl T. (Eds.) CLEF 2017 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings, vol. 1866
	[14] Francisco Rangel, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, Benno Stein. Overview of the 4th Author Profiling Task at PAN 2016: Cross-Genre Evaluations. In: Balog K., Cappellato L., Ferro N., Macdonald C. (Eds.) CLEF 2016 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings, vol. 1609, pp. 750-784
	[15] Francisco Rangel, Fabio Celli, Paolo Rosso, Martin Potthast, Benno Stein, Walter Daelemans. Overview of the 3rd Author Profiling Task at PAN 2015. In: Cappellato L., Ferro N., Jones G., San Juan E. (Eds.) CLEF 2015 Labs and Workshops, Notebook Papers, 8-11 September, Toulouse, France. CEUR Workshop Proceedings, ISSN 1613-0073, 2015
	[16] Francisco Rangel, Paolo Rosso, Irina Chugur, Martin Potthast, Martin Trenkmann, Benno Stein, Ben Verhoeven, Walter Daelemans. Overview of the 2nd Author Profiling Task at PAN 2014. In: Cappellato L., Ferro N., Halvey M., Kraaij W. (Eds.) CLEF 2014 Labs and Workshops, Notebook Papers, vol. 1180, pp. 898-927
	[17] Francisco Rangel, Paolo Rosso, Moshe Koppel, Efstathios Stamatatos, Giacomo Inches. Overview of the Author Profiling Task at PAN 2013. In: Forner P., Navigli R., Tufis D. (Eds.) Notebook Papers of CLEF 2013 Labs and Workshops, vol. 1179
	[18] Francisco Rangel, Paolo Rosso. On the Implications of the General Data Protection Regulation on the Organisation of Evaluation Tasks. In: Language and Law / Linguagem e Direito, Vol. 5(2), pp. 80-102
	[19] Francisco Rangel, Marc Franco-Salvador, Paolo Rosso. A Low Dimensionality Representation for Language Variety Identification. In: Postproc. 17th Int. Conf. on Comput. Linguistics and Intelligent Text Processing, CICLing-2016, Springer-Verlag, Revised Selected Papers, Part II, LNCS(9624), pp. 156-169 (arXiv:1705.10754)
  <dc:subject>author profiling</dc:subject>
  <dc:subject>hate speech spreaders</dc:subject>
  <dc:subject>hate speech</dc:subject>
  <dc:title>Profiling Hate Speech Spreaders on Twitter</dc:title>
</oai_dc:dc>
Statistics (all versions / this version):
	Views: 2,219 / 172
	Downloads: 200 / 17
	Data volume: 539.2 MB / 34.7 MB
	Unique views: 1,577 / 135
	Unique downloads: 178 / 9