Sentiments Analysis in GitHub Repositories: An Empirical Study

1. Does the paper propose a new opinion mining approach?

No

2. Which opinion mining techniques are used (list all of them, clearly stating their name/reference)?

The Stanford CoreNLP sentiment analysis tool, based on the Recursive Deep Models of Socher et al. (2013).

3. Which opinion mining approaches in the paper are publicly available? Write down their name and links. If no approach is publicly available, leave it blank or None.

Stanford Sentiment Analysis tool: https://nlp.stanford.edu/sentiment/index.html

4. What is the main goal of the whole study?

The paper analyzes the correlation between the sentiment of developer comments and the time required to fix bugs in open-source GitHub projects. The data are extracted from commit and pull request discussions.

5. What the researchers want to achieve by applying the technique(s) (e.g., calculate the sentiment polarity of app reviews)?

Compute the sentiment polarity of commit and pull request comments from open-source GitHub projects.
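
For illustration only, a minimal Java sketch of how sentence-level sentiment labels can be obtained with Stanford CoreNLP's sentiment annotator (the comment string below is a hypothetical example, not taken from the paper's data):

```java
import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;

public class CommentSentiment {
    public static void main(String[] args) {
        // Configure a CoreNLP pipeline with the sentiment annotator,
        // which uses the Recursive Neural Tensor Network model.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,parse,sentiment");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // Hypothetical GitHub comment (the paper's dataset is not public).
        String comment = "Thanks for the quick fix, this looks great!";

        Annotation annotation = new Annotation(comment);
        pipeline.annotate(annotation);

        // The tool assigns one label per sentence,
        // from "Very negative" to "Very positive".
        for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
            String label = sentence.get(SentimentCoreAnnotations.SentimentClass.class);
            System.out.println(label + "\t" + sentence);
        }
    }
}
```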

6. Which dataset(s) the technique is applied on?

GitHub comments from commit and pull request discussions.
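
As a sketch of how such comments could be collected, the GitHub REST API exposes an endpoint listing a repository's commit comments; the repository used here (octocat/Hello-World) is a placeholder, not one of the paper's subject projects, and this is not the authors' actual mining pipeline:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchCommitComments {
    public static void main(String[] args) throws Exception {
        // Placeholder repository; the paper's subject projects are not listed here.
        String owner = "octocat";
        String repo = "Hello-World";

        // REST endpoint that lists commit comments for a repository.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.github.com/repos/" + owner + "/" + repo + "/comments"))
                .header("Accept", "application/vnd.github+json")
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Each element of the JSON array has a "body" field, which could feed
        // the sentiment pipeline sketched above. JSON parsing is omitted for brevity.
        System.out.println(response.body());
    }
}
```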

7. Is/Are the dataset(s) publicly available online? If yes, please indicate their name and links.

No

8. Is the application context (dataset or application domain) different from that for which the technique was originally designed?

Yes. The Stanford tool was trained on movie reviews, whereas here it is applied to GitHub comments.

9. Is the performance (precision, recall, run-time, etc.) of the technique verified? If yes, how did they verify it and what are the results?

No, the tool is used off-the-shelf, without a preliminary sanity check.

10. Does the paper replicate the results of previous work? If yes, leave a summary of the findings (confirm/partially confirms/contradicts).

No

11. What success metrics are used?

NA

12. Write down any other comments/notes here.

-