App reviews: Breaking the user and developer language barrier
1. Does the paper propose a new opinion mining approach?
Yes
2. Which opinion mining techniques are used (list all of them, clearly stating their name/reference)?
The authors implement two binary classifiers, one per polarity, each taking the same word as input and returning a binary result. Each classifier is backed by a list of polarised sentiment words (positive or negative) derived from the word sets developed by Hu and Liu, and simply checks whether the input word is present in its polarised word list. This allows the authors to extend the polarity labels of the initial dictionary to every word in the corpus.
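The lookup-based scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the word sets are small stand-in samples, and the function names are hypothetical (the actual Hu and Liu lexicon contains several thousand words per list).

```python
# Stand-in samples for the Hu & Liu positive/negative opinion lists
# (illustrative only; the real lexicon is far larger).
POSITIVE_WORDS = {"awesome", "great", "love", "useful"}
NEGATIVE_WORDS = {"crash", "buggy", "slow", "useless"}

def is_positive(word: str) -> bool:
    """Binary classifier: is the word in the positive list?"""
    return word.lower() in POSITIVE_WORDS

def is_negative(word: str) -> bool:
    """Binary classifier: is the word in the negative list?"""
    return word.lower() in NEGATIVE_WORDS

def polarity(word: str) -> str:
    """Combine the two binary classifiers into a single label."""
    if is_positive(word):
        return "positive"
    if is_negative(word):
        return "negative"
    return "neutral"
```

Words absent from both lists fall through to a neutral label, which is how the dictionary's polarity is extended over the whole corpus vocabulary.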
3. Which opinion mining approaches in the paper are publicly available? Write down their name and links. If no approach is publicly available, leave it blank or None.
Sentiment dictionary by Hu and Liu - Hu, M., & Liu, B. Mining and summarizing customer reviews. In: 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 168–177. ACM (2004)
4. What is the main goal of the whole study?
To develop an ontology of software attributes derived from software quality models. The ontology decomposes into roughly five thousand words that users employ when reviewing apps. Such a vocabulary can be used to extract stakeholder-actionable information from large collections of reviews.
5. What do the researchers want to achieve by applying the technique(s) (e.g., calculate the sentiment polarity of app reviews)?
Identifying the polarity of the words occurring in the reviews in order to extract all phrases that convey both a quality aspect and a sentiment polarity. For each type of quality aspect, the authors determine whether users convey positive or negative sentiment toward it.
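The extraction step above can be sketched as pairing ontology terms with polarised words inside a phrase. This is a hypothetical simplification: both word sets are illustrative stand-ins, not the paper's actual ontology or lexicon, and real phrase extraction would involve tokenisation and phrase segmentation.

```python
# Stand-in quality-aspect terms and a tiny polarity dictionary
# (illustrative only; not the paper's ontology or the Hu & Liu lexicon).
QUALITY_ASPECTS = {"performance", "battery", "interface"}
SENTIMENT = {"great": "positive", "terrible": "negative", "slow": "negative"}

def aspect_sentiment(phrase: str) -> list[tuple[str, str]]:
    """Return (aspect, polarity) pairs conveyed by a phrase, if any."""
    tokens = phrase.lower().split()
    aspects = [t for t in tokens if t in QUALITY_ASPECTS]
    polarities = [SENTIMENT[t] for t in tokens if t in SENTIMENT]
    # A phrase counts only if it contains both an aspect and a polarised word.
    return [(a, p) for a in aspects for p in polarities]
```

For example, `aspect_sentiment("terrible battery performance")` yields negative sentiment for both the battery and performance aspects, while a phrase lacking either component yields nothing.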
6. Which dataset(s) the technique is applied on?
App review datasets from previous studies.
7. Is/Are the dataset(s) publicly available online? If yes, please indicate their name and links.
Health & Fitness (H&F) reviews from previous datasets. Specifically, the authors cite the following studies when reporting the data sources:
- Hoon, L., Vasa, R., Martino, G.Y., Schneider, J.G., Mouzakis, K.: Awesome! Conveying Satisfaction on the App Store. In: 25th Australian Computer-Human Interaction Conference (2013)
- Harman, M., Jia, Y., Zhang, Y.: App Store Mining and Analysis: MSR for App Stores. In: 9th IEEE Working Conference on Mining Software Repositories, pp. 108–111. IEEE Press (2012)
- Hoon, L., Vasa, R., Schneider, J.G., Grundy, J.: An Analysis of the Mobile App Review Landscape: Trends and Implications. Faculty of Information and Communication Technologies, Swinburne University of Technology, Melbourne, Australia, Tech. Rep. http://hdl.handle.net/1959.3/352848
- Hoon, L., Vasa, R., Schneider, J.G., Mouzakis, K.: A Preliminary Analysis of Vocabulary in Mobile App User Reviews. In: 24th Australian Computer-Human Interaction Conference, pp. 245–248 (2012)
Additional dataset of 330,408 app reviews from 800 apps across all star ratings, also including the Socrates Dataset (Mouzakis, K., Hoon, L., Vasa, R.: Socrates Mobile App Review Dataset (2013))
8. Is the application context (dataset or application domain) different from that for which the technique was originally designed?
The rule-based approach used is an ad hoc methodology defined in the scope of the current study. The sentiment dictionary is a general-purpose sentiment lexicon released by previous research and fine-tuned outside the app review domain.
9. Is the performance (precision, recall, run-time, etc.) of the technique verified? If yes, how did they verify it and what are the results?
No
10. Does the paper replicate the results of previous work? If yes, leave a summary of the findings (confirm/partially confirms/contradicts).
No
11. What success metrics are used?
NA
12. Write down any other comments/notes here.
-