Mining user rationale from software reviews
1. Does the paper propose a new opinion mining approach?
No
2. Which opinion mining techniques are used (list all of them, clearly stating their name/reference)?
The Natural Language Toolkit (NLTK) and the Stanford Parser. It is not clear which of the two was used for sentiment analysis as opposed to other tasks. The paper states that the sentiment scale is -5..5, which, as far as I know, does not match the default output of NLTK or the Stanford Parser; perhaps the tools' output has changed (see the sketch below).
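A minimal sketch of the scale mismatch, assuming the NLTK component in question is the bundled VADER analyzer (the paper does not say which one was used): VADER reports a compound polarity score in [-1, 1], not -5..5.

```python
# Minimal sketch (not the paper's pipeline): NLTK's bundled VADER analyzer
# returns a compound polarity score in [-1, 1], which is why a -5..5 scale
# does not look like raw NLTK output.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()
sentence = "The new update looks great, but it crashes on startup."
scores = analyzer.polarity_scores(sentence)
print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}; compound lies in [-1, 1]
```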
3. Which opinion mining approaches in the paper are publicly available? Write down their name and links. If no approach is publicly available, leave it blank or None.
Natural Language Toolkit (NLTK): http://www.nltk.org/; the Stanford Parser: http://nlp.stanford.edu/software/lex-parser.shtml
4. What is the main goal of the whole study?
To identify user rationale in software reviews.
5. What the researchers want to achieve by applying the technique(s) (e.g., calculate the sentiment polarity of app reviews)?
Compute a lexical sentiment score for the review body and a separate sentiment score for the review title (illustrated below).
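A minimal sketch of what such a lexical scoring step could look like, using a tiny hand-made word list with AFINN-style weights in -5..5. The mini lexicon and helper function are invented for illustration; the paper does not publish its lexicon or code.

```python
# Hypothetical illustration of a lexical (word-list based) sentiment score on a
# -5..5 per-word scale, applied separately to a review title and body.
# The mini lexicon below is invented for this example; it is NOT the paper's resource.
MINI_LEXICON = {"great": 3, "love": 3, "useful": 2, "crash": -3, "terrible": -4, "annoying": -2}

def lexical_score(text: str) -> int:
    """Sum the lexicon weights of all known words in the text."""
    tokens = text.lower().split()
    return sum(MINI_LEXICON.get(tok.strip(".,!?"), 0) for tok in tokens)

title = "Great app but annoying ads"
body = "I love the features. Sadly it started to crash after the last update."

print("title sentiment:", lexical_score(title))  # 3 + (-2) = 1
print("body sentiment:", lexical_score(body))    # 3 + (-3) = 0
```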
6. Which dataset(s) the technique is applied on?
52 apps, 32,414 reviews, and 135,395 sentences
7. Is/Are the dataset(s) publicly available online? If yes, please indicate their name and links.
Partially: https://mast.informatik.uni-hamburg.de/app-review-analysis/ provides the "Mining user rationale" coding guidelines, but not the dataset itself.
8. Is the application context (dataset or application domain) different from that for which the technique was originally designed?
Somewhat; as in related studies, the general-purpose tools are again applied to app reviews.
9. Is the performance (precision, recall, run-time, etc.) of the technique verified? If yes, how did they verify it and what are the results?
No
10. Does the paper replicate the results of previous work? If yes, leave a summary of the findings (confirm/partially confirms/contradicts).
No
11. What success metrics are used?
N/A
12. Write down any other comments/notes here.
-