Extracting User-Reported Mobile Application Defects from Online Reviews

1. Does the paper propose a new opinion mining approach?

Yes

2. Which opinion mining techniques are used (list all of them, clearly stating their name/reference)?

A Support Vector Machine (SVM) classifier built by the authors.
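Below is a minimal, illustrative sketch of an SVM-based classifier for flagging defect-related review sentences. The TF-IDF features, the scikit-learn pipeline, and the toy sentences are assumptions made for illustration only; the paper's actual features and implementation may differ.

```python
# Illustrative sketch only: a generic SVM text classifier that labels review
# sentences as defect-related or not. The feature set (TF-IDF uni/bigrams) and
# the scikit-learn implementation are assumptions, not the authors' setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data (hypothetical review sentences with defect / non-defect labels).
sentences = [
    "The app crashes every time I open the camera.",
    "Love the new design, great job!",
    "Login button does nothing after the latest update.",
    "Would be nice to have a dark mode.",
]
labels = [1, 0, 1, 0]  # 1 = defect-related, 0 = not defect-related

# TF-IDF features feeding a linear-kernel SVM.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)

print(clf.predict(["The screen goes black when I rotate the phone."]))
```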

3. Which opinion mining approaches in the paper are publicly available? Write down their name and links. If no approach is publicly available, leave it blank or None.

-

4. What is the main goal of the whole study?

To develop a tool that automatically mines defect reports from app reviews.

5. What the researchers want to achieve by applying the technique(s) (e.g., calculate the sentiment polarity of app reviews)?

To extract defect-related information from app reviews.

6. Which dataset(s) the technique is applied on?

A dataset built by the authors, consisting of app reviews collected from the iOS App Store and annotated by nine graduate students.

7. Is/Are the dataset(s) publicly available online? If yes, please indicate their name and links.

-

8. Is the application context (dataset or application domain) different from that for which the technique was originally designed?

No.

9. Is the performance (precision, recall, run-time, etc.) of the technique verified? If yes, how did they verify it and what are the results?

Yes. The authors evaluated the technique at both the review level and the sentence level. At the review level the tool achieved a precision, recall, and F1 of 0.784, 0.641, and 0.705, respectively; at the sentence level it achieved 0.919, 0.898, and 0.908. They also evaluated it on the shortest sentences, reporting a precision, recall, and F1 of 0.912, 0.931, and 0.921, and on a partially annotated set, reporting 0.851, 0.846, and 0.827.
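As a rough illustration of how such a verification could be set up, the sketch below computes precision, recall, and F1 against human annotations with scikit-learn; the labels and the use of precision_recall_fscore_support are assumptions for illustration, not the authors' evaluation code.

```python
# Minimal sketch: scoring predicted defect labels against gold annotations.
from sklearn.metrics import precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical gold labels (1 = defect)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical model predictions

p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```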

10. Does the paper replicate the results of previous work? If yes, leave a summary of the findings (confirm/partially confirms/contradicts).

No.

11. What success metrics are used?

Precision, recall, and F1-score.
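For reference, these metrics are conventionally defined in terms of true positives (TP), false positives (FP), and false negatives (FN):

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```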

12. Write down any other comments/notes here.

-