Attributes that predict which features to fix: Lessons for app store mining

1. Does the paper propose a new opinion mining approach?

No

2. Which opinion mining techniques are used (list all of them, clearly stating their name/reference)?

The LIWC dictionary is used for emotion mining: Pennebaker, J. W., Francis, M. E., and Booth, R. J. (2001). Linguistic Inquiry and Word Count: LIWC 2001. Mahwah, NJ: Lawrence Erlbaum Associates.

3. Which opinion mining approaches in the paper are publicly available? Write down their name and links. If no approach is publicly available, leave it blank or None.

LIWC dictionary: Pennebaker, J. W., Francis, M. E., and Booth, R. J. (2001). Linguistic Inquiry and Word Count: LIWC 2001. Mahwah, NJ: Lawrence Erlbaum Associates. No public link is given in the paper.

4. What is the main goal of the whole study?

To support app developers by processing app reviews, with a focus on prioritizing the information and feature requests contained in those reviews.

5. What the researchers want to achieve by applying the technique(s) (e.g., calculate the sentiment polarity of app reviews)?

Calculate the emotions expressed in individual sentences of app reviews.
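
For context, the sketch below illustrates how a dictionary-based, LIWC-style emotion score could be computed for a single review sentence. It is not the authors' implementation: the actual LIWC 2001 dictionary is proprietary, and the tiny word lists, function name, and example sentence here are illustrative assumptions only.

```python
# Minimal sketch of LIWC-style, dictionary-based emotion scoring for one
# review sentence. The word lists are hypothetical placeholders standing in
# for LIWC's proprietary posemo/negemo categories.
import re
from collections import Counter

EMOTION_LEXICON = {
    "posemo": {"love", "great", "nice", "happy", "useful"},
    "negemo": {"hate", "annoying", "bad", "useless", "broken"},
}

def emotion_scores(sentence: str) -> dict:
    """Return the share of words in each emotion category for one sentence."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    if not tokens:
        return {category: 0.0 for category in EMOTION_LEXICON}
    counts = Counter()
    for token in tokens:
        for category, words in EMOTION_LEXICON.items():
            if token in words:
                counts[category] += 1
    # LIWC reports category counts normalized by the total number of words.
    return {category: counts[category] / len(tokens) for category in EMOTION_LEXICON}

if __name__ == "__main__":
    print(emotion_scores("The update is great but the ads are annoying."))
```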

6. Which dataset(s) the technique is applied on?

Two datasets scraped from the Google Play Store.

7. Is/Are the dataset(s) publicly available online? If yes, please indicate their name and links.

No.

8. Is the application context (dataset or application domain) different from that for which the technique was originally designed?

Yes; LIWC was not originally designed for app reviews.

9. Is the performance (precision, recall, run-time, etc.) of the technique verified? If yes, how did they verify it and what are the results?

No.

10. Does the paper replicate the results of previous work? If yes, leave a summary of the findings (confirm/partially confirms/contradicts).

No.

11. What success metrics are used?

-

12. Write down any other comments/notes here.

-