Automatically identifying a software product's quality attributes through sentiment analysis of tweets

1. Does the paper propose a new opinion mining approach?

Yes

2. Which opinion mining techniques are used (list all of them, clearly stating their name/reference)?

A new approach: a logistic regression classifier trained on 2000 manually labeled tweets for sentiment classification, plus a modified document-frequency measure for keyword extraction
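The sentiment side of the pipeline can be sketched as follows. This is a minimal illustration only: the choice of scikit-learn, bag-of-words features, and the toy tweets are all assumptions, not the paper's exact setup, and the modified document-frequency keyword extractor is not reproduced because its modification is not specified in this summary.

```python
# Hedged sketch: logistic regression over bag-of-words features for
# three-way tweet sentiment (pos/neg/obj). Library and features are
# assumptions; the paper trained on 2000 manually labeled tweets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the manually labeled training tweets.
tweets = [
    "Windows 8 boots really fast, love it",
    "Windows Vista keeps crashing, terrible",
    "Windows 7 was released in 2009",
    "great update, very responsive",
    "awful driver support, so slow",
    "the launch event starts at noon",
]
labels = ["pos", "neg", "obj", "pos", "neg", "obj"]

# Vectorize the tweets and fit the classifier in one pipeline.
clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(tweets, labels)

print(clf.predict(["love how fast it boots"])[0])
```

In practice the labeled set would be far larger, and preprocessing (tokenization, handling of hashtags and mentions) would matter considerably for tweets.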

3. Which opinion mining approaches in the paper are publicly available? Write down their name and links. If no approach is publicly available, leave it blank or None.

None

4. What is the main goal of the whole study?

To extract the quality attributes of a software product from the opinions end-users have expressed in microblogs such as Twitter

5. What the researchers want to achieve by applying the technique(s) (e.g., calculate the sentiment polarity of app reviews)?

To classify the sentiment of tweets (positive, negative, or objective) and to extract keywords from them

6. Which dataset(s) the technique is applied on?

4000 tweets collected from www.tweetarchivist.com on the topics Windows 8, Windows 7, and Windows Vista: 2000 tweets for training and another 2000 for testing

7. Is/Are the dataset(s) publicly available online? If yes, please indicate their name and links.

No

8. Is the application context (dataset or application domain) different from that for which the technique was originally designed?

N/A; the approach is newly proposed in this paper

9. Is the performance (precision, recall, run-time, etc.) of the technique verified? If yes, how did they verify it and what are the results?

Sentiment classification: verified via 5-fold cross-validation on the training set.
Keyword extraction: not verified.

10. Does the paper replicate the results of previous work? If yes, leave a summary of the findings (confirm/partially confirms/contradicts).

No

11. What success metrics are used?

Sentiment classification: classification accuracy on the training set, measured via 5-fold cross-validation
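The evaluation protocol above can be sketched as follows. The model, features, and toy data are assumptions for illustration; only the metric (accuracy via 5-fold cross-validation on the labeled training set) comes from the paper.

```python
# Hedged sketch: 5-fold cross-validated accuracy of a text classifier,
# mirroring the paper's reported evaluation protocol. Model and data
# here are placeholders, not the paper's setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy labeled set: 5 tweets per class so stratified 5-fold CV is valid.
texts = [
    "love the new start screen", "boots fast, very happy",
    "great battery life now", "smooth and responsive, nice",
    "best release so far",
    "keeps crashing on startup", "hate the new interface",
    "so slow after the update", "terrible driver support",
    "worst version yet",
    "released in october 2012", "available in 32 and 64 bit",
    "the launch event is today", "ships with internet explorer",
    "sold in retail stores",
]
labels = ["pos"] * 5 + ["neg"] * 5 + ["obj"] * 5

pipe = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))

# cross_val_score defaults to the estimator's scorer (accuracy here)
# and stratified 5-fold splitting for classifiers.
scores = cross_val_score(pipe, texts, labels, cv=5)
print(scores.mean())
```

Note that cross-validating only on the training set, as the paper does, leaves the held-out 2000-tweet test set unused for this metric.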

12. Write down any other comments/notes here.

-