Classifying Multilingual User Feedback using Traditional Machine Learning and Deep Learning

1. Does the paper propose a new opinion mining approach?

No

2. Which opinion mining techniques are used (list all of them, clearly stating their name/reference)?

SentiStrength

3. Which opinion mining approaches in the paper are publicly available? Write down their name and links. If no approach is publicly available, leave it blank or None.

SentiStrength: http://sentistrength.wlv.ac.uk/

4. What is the main goal of the whole study?

Classify user feedback into three distinct categories: problem reports, inquiries, and irrelevant.

5. What the researchers want to achieve by applying the technique(s) (e.g., calculate the sentiment polarity of app reviews)?

Provide the sentiment of a document as an additional training feature for the classification models they train.
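The idea of using a document-level sentiment score as one extra training feature can be sketched as below. This is a hypothetical illustration, not the authors' code: the tiny `POSITIVE`/`NEGATIVE` word lists stand in for SentiStrength's real lexicons, and `featurize` appends the polarity score to a simple bag-of-words vector.

```python
# Hypothetical sketch: append a document-level sentiment score to a
# feature vector, in the spirit of using SentiStrength output as an
# extra training feature. The word lists below are placeholders, not
# SentiStrength's actual dictionaries.

POSITIVE = {"great", "thanks", "works", "love"}
NEGATIVE = {"crash", "broken", "slow", "hate"}

def sentiment_score(text: str) -> int:
    """Crude polarity score: +1 per positive word, -1 per negative word."""
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def featurize(text: str, vocabulary: list[str]) -> list[float]:
    """Bag-of-words counts over `vocabulary`, plus the sentiment score
    as the final feature dimension."""
    tokens = text.lower().split()
    counts = [float(tokens.count(w)) for w in vocabulary]
    return counts + [float(sentiment_score(text))]

vocab = ["app", "crash", "update"]
vec = featurize("the app crash after the update", vocab)
# vec = [1.0, 1.0, 1.0, -1.0]: one count per vocabulary word,
# then the sentiment score (-1, because "crash" is in NEGATIVE)
```

In the paper's actual pipeline the sentiment value would come from SentiStrength rather than a toy lexicon, and the downstream classifier (traditional ML or deep learning) would consume the combined vector.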

6. Which dataset(s) the technique is applied on?

A corpus of 25,000 tweets (10,000 English and 15,000 Italian), sampled from tweets sent to the support accounts of telecommunication companies. The dataset was labeled with the help of the microtask platform Figure Eight.

7. Is/Are the dataset(s) publicly available online? If yes, please indicate their name and links.

Yes, via the replication package at https://mast.informatik.uni-hamburg.de/replication-packages/; the data is available upon request.

8. Is the application context (dataset or application domain) different from that for which the technique was originally designed?

Yes. SentiStrength was not originally designed for Twitter, and the authors do not mention whether they apply any domain adaptation for SentiStrength.

9. Is the performance (precision, recall, run-time, etc.) of the technique verified? If yes, how did they verify it and what are the results?

No; only the performance of the entire classification pipeline is discussed, not that of SentiStrength itself.

10. Does the paper replicate the results of previous work? If yes, leave a summary of the findings (confirm/partially confirms/contradicts).

No.

11. What success metrics are used?

-

12. Write down any other comments/notes here.

-