This tool performs NLP analysis on Telegram chats. Chats can be exported as .json files from the official client, Telegram Desktop.
The exported files are parsed, their content is used to populate a message dataframe, and the dataframe is then anonymized.
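The parsing and anonymization steps can be sketched as follows. This is a minimal illustration, not the tool's actual code: the field names (`messages`, `date`, `from`, `text`) follow a typical Telegram Desktop export, and the `user_N` pseudonym scheme is an assumption of this sketch.

```python
import json
import pandas as pd

def load_messages(path):
    """Parse a Telegram Desktop JSON export into a dataframe.
    Field names follow the typical export layout; adjust if yours differs."""
    with open(path, encoding="utf-8") as f:
        chat = json.load(f)
    rows = []
    for msg in chat.get("messages", []):
        text = msg.get("text", "")
        if isinstance(text, list):  # rich text is exported as fragments
            text = "".join(p if isinstance(p, str) else p.get("text", "")
                           for p in text)
        rows.append({"id": msg.get("id"), "date": msg.get("date"),
                     "sender": msg.get("from"), "text": text})
    return pd.DataFrame(rows)

def anonymize(df):
    """Replace each sender with a stable pseudonym (user_1, user_2, ...)."""
    codes = {name: f"user_{i + 1}"
             for i, name in enumerate(df["sender"].dropna().unique())}
    out = df.copy()
    out["sender"] = out["sender"].map(codes)
    return out
```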
The software calculates and displays the following information:
- user count (n of users, new users per day, removed users per day);
- message count (n and relative frequency of messages, messages per day);
- autocoded messages (anonymized message dataframe with code weights assigned to each message based on a customizable set of regex rules);
- prevalence of codes (n and relative frequency);
- prevalence of lemmas (n and relative frequency);
- prevalence of lemmas segmented by autocode (n and relative frequency);
- mean sentiment per day;
- mean sentiment segmented by autocode.
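The regex-based autocoding above can be sketched like this. The rule names, patterns, and weights below are hypothetical examples of the customizable rule set, not the rules shipped with the tool.

```python
import re

# Hypothetical rule set: code name -> list of (regex pattern, weight).
RULES = {
    "health": [(r"\bvaccin\w*", 1.0), (r"\bmedic\w*", 0.5)],
    "politics": [(r"\bgovern\w*", 1.0)],
}

def autocode(text, rules=RULES):
    """Return {code: weight} for every code whose patterns match the text.
    Each match adds the pattern's weight to that code's score."""
    scores = {}
    for code, patterns in rules.items():
        s = sum(w * len(re.findall(pat, text, flags=re.IGNORECASE))
                for pat, w in patterns)
        if s > 0:
            scores[code] = s
    return scores
```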
The software outputs:
- messages_df_anon.csv - an anonymized file containing the progressive id of the message, the date, the unique pseudonym of the sender, and the text;
- usercount_df.csv - user count dataframe;
- user_activity_df.csv - user activity dataframe;
- messagecount_df.csv - message count dataframe;
- messages_df_anon_coded.csv - an anonymized file containing the progressive id of the message, the date, the unique pseudonym of the sender, the text, the codes, and the sentiment;
- autocode_freq_df.csv - general prevalence of codes;
- lemma_df.csv - lemma frequency;
- autocode_freq_df_[rule_name].csv - lemma frequency in coded messages, one file per rule;
- daily_sentiment_df.csv - daily sentiment;
- sentiment_by_code_df.csv - sentiment segmented by code;
- messages_anon.txt - anonymized text file generated from the message data frame, for easy import in other software for text mining or qualitative analysis;
- messages_anon_MaxQDA.txt - anonymized text file generated from the message data frame, formatted specifically for MaxQDA (to track speakers and codes).
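A speaker-tagged text export like messages_anon_MaxQDA.txt can be sketched as below. The one-line `PSEUDONYM: text` layout is an assumption of this sketch (many QDA tools can auto-detect speakers from it); the format the tool actually emits for MaxQDA may differ.

```python
def export_speaker_text(df, path):
    """Write one 'PSEUDONYM: text' line per message, so qualitative
    analysis software can attribute each line to a speaker.
    Assumes a dataframe with 'sender' and 'text' columns."""
    with open(path, "w", encoding="utf-8") as f:
        for _, row in df.iterrows():
            f.write(f"{row['sender']}: {row['text']}\n")
```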
The software requires the following Python packages:
- pandas (1.2.1)
- tqdm (4.62.2)
- datetime (4.3)
- matplotlib (3.4.3)
- spaCy (3.1.2) + it_core_news_md
- wordcloud (1.8.1)
- feel_it (1.0.3)
- torch (1.9.0)
- numpy (1.21.1)
- transformers (4.3.3)
This code is optimized for Italian.
Lemma analysis is based on spaCy, which provides models for many other languages ( https://spacy.io/models ), so the tool can easily be adapted.
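The lemma frequency counting can be sketched as follows. The counting helper is generic; the commented-out spaCy lines show how the lemmas would be produced with the Italian model (swap in another model from https://spacy.io/models to adapt the language).

```python
from collections import Counter

def lemma_frequencies(lemma_lists):
    """Count lemmas across messages; return {lemma: (n, relative frequency)}."""
    counts = Counter(lemma for lemmas in lemma_lists for lemma in lemmas)
    total = sum(counts.values())
    return {lemma: (n, n / total) for lemma, n in counts.items()}

# The lemma lists would come from spaCy's Italian model, e.g.:
#   import spacy
#   nlp = spacy.load("it_core_news_md")
#   lemma_lists = [[t.lemma_ for t in nlp(text) if t.is_alpha]
#                  for text in messages]
```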
Sentiment analysis is performed using FEEL-IT: Emotion and Sentiment Classification for the Italian Language (kudos to Federico Bianchi, Debora Nozza, and Dirk Hovy). Their work is specific to Italian; to perform sentiment analysis in other languages, one could consider nltk.sentiment.
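The mean-sentiment-per-day aggregation can be sketched as below. The positive/negative to +1/-1 mapping is a convention of this sketch, not something FEEL-IT prescribes; the commented-out lines show where FEEL-IT's labels would come from.

```python
import pandas as pd

def daily_mean_sentiment(df):
    """Map sentiment labels to +1/-1 and average them per calendar day.
    Assumes a dataframe with 'date' and 'sentiment' columns."""
    out = df.copy()
    out["score"] = out["sentiment"].map({"positive": 1, "negative": -1})
    out["day"] = pd.to_datetime(out["date"]).dt.date
    return out.groupby("day")["score"].mean()

# The labels would come from FEEL-IT, e.g.:
#   from feel_it import SentimentClassifier
#   df["sentiment"] = SentimentClassifier().predict(df["text"].tolist())
```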
The code is structured as a JupyterLab notebook, heavily commented for future reference.