Sentiment Analysis in monitoring software development processes: An exploratory case study on GitHub's project issues

1. Does the paper propose a new opinion mining approach?

Yes

2. Which opinion mining techniques are used (list all of them, clearly stating their name/reference)?

A lexicon-based technique proposed by the authors

3. Which opinion mining approaches in the paper are publicly available? Write down their name and links. If no approach is publicly available, leave it blank or None.

Unfortunately, none. The lexicon they use is a combination of several existing lexicons (ANEW, OpinionFinder, SentiStrength, WN-Affect). They also use NLTK for text processing and a Snowball stemmer. Lexicons:
- ANEW: https://csea.phhp.ufl.edu/media/anewmessage.html (requires contacting the authors)
- OpinionFinder: link not found
- SentiStrength: http://sentistrength.wlv.ac.uk/ (I could not find the lexicon itself)
- WN-Affect: http://wndomains.fbk.eu/wnaffect.html
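Since neither the combined lexicon nor the authors' code is available, the general shape of a lexicon-based scorer can still be sketched. The tiny lexicon and the naive suffix-stripping stemmer below are hypothetical placeholders (the paper uses NLTK's Snowball stemmer and the four lexicons above); only the overall approach, summing polarity values of stemmed tokens, reflects the technique described.

```python
# Minimal sketch of lexicon-based sentiment scoring (illustrative only).
# TOY_LEXICON and naive_stem are stand-ins, not the paper's resources.
import re

TOY_LEXICON = {  # hypothetical term -> polarity score
    "great": 2, "love": 2, "fix": 1,
    "bug": -1, "crash": -2, "hate": -2,
}

def naive_stem(token: str) -> str:
    """Stand-in for a Snowball stemmer: strip a few common suffixes."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def issue_sentiment(text: str) -> int:
    """Sum lexicon polarities over the (stemmed) tokens of an issue text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(
        TOY_LEXICON.get(naive_stem(t), TOY_LEXICON.get(t, 0))
        for t in tokens
    )
```

For example, `issue_sentiment("Great fix, love it")` returns a positive total, while a comment full of "crash" and "bug" scores negative; the paper aggregates such scores over a project's issues to monitor emotional trends.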

4. What is the main goal of the whole study?

Advocating the monitoring of developers' emotions based on issues and tickets

5. What the researchers want to achieve by applying the technique(s) (e.g., calculate the sentiment polarity of app reviews)?

Same as #4

6. Which dataset(s) the technique is applied on?

Roughly 10K issues: all issues of nine large, well-known GitHub projects

7. Is/Are the dataset(s) publicly available online? If yes, please indicate their name and links.

No

8. Is the application context (dataset or application domain) different from that for which the technique was originally designed?

Yes; the lexicons used are not software-engineering specific

9. Is the performance (precision, recall, run-time, etc.) of the technique verified? If yes, how did they verify it and what are the results?

No

10. Does the paper replicate the results of previous work? If yes, leave a summary of the findings (confirm/partially confirms/contradicts).

No

11. What success metrics are used?

N/A

12. Write down any other comments/notes here.

-