Automatically classifying requirements from app stores: A preliminary study

1. Does the paper propose a new opinion mining approach?

Yes

2. Which opinion mining techniques are used (list all of them, clearly stating their name/reference)?

Three semi-supervised classification (SSC) methods (Self-Training, RASCO, Rel-RASCO) combined with four base classifiers: KNN, C4.5, Naive Bayes (NB), and SMO (an SVM trained with the Sequential Minimal Optimization algorithm).
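For orientation, the sketch below shows the general self-training scheme with an NB base learner. It is a minimal illustration using scikit-learn analogues (the paper does not state which toolkit the authors used); the review texts, labels, and threshold are hypothetical.

```python
# Minimal self-training sketch (assumption: scikit-learn analogue of the
# Self-Training + NB combination; not the authors' implementation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.semi_supervised import SelfTrainingClassifier

# Hypothetical review texts and labels: 1 = functional, 0 = non-functional,
# -1 marks unlabeled reviews (scikit-learn's convention for SSC).
reviews = [
    "Please add an export to PDF button",      # labeled functional
    "The app crashes and drains my battery",   # labeled non-functional
    "Would love offline mode for long trips",  # unlabeled
    "Startup time is painfully slow lately",   # unlabeled
]
labels = [1, 0, -1, -1]

# Bag-of-words style features over the review texts.
X = TfidfVectorizer().fit_transform(reviews)

# Self-training: the base classifier is retrained iteratively, adding its
# most confident predictions on the unlabeled reviews as pseudo-labels.
self_training = SelfTrainingClassifier(MultinomialNB(), threshold=0.7)
self_training.fit(X, labels)

# Predict the previously unlabeled training reviews.
print(self_training.predict(X[2:]))
```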

3. Which opinion mining approaches in the paper are publicly available? Write down their name and links. If no approach is publicly available, leave it blank or None.

None

4. What is the main goal of the whole study?

To automate the classification of functional and non-functional requirements contained in App Store reviews.

5. What the researchers want to achieve by applying the technique(s) (e.g., calculate the sentiment polarity of app reviews)?

Same as the main goal in question 4: automatically classifying app-store reviews as functional or non-functional requirements.

6. Which dataset(s) the technique is applied on?

A ground-truth set of 300 app reviews, with 150 instances for each class (functional, non-functional).

7. Is/Are the dataset(s) publicly available online? If yes, please indicate their name and links.

No

8. Is the application context (dataset or application domain) different from that for which the technique was originally designed?

Yes; the techniques are general-purpose classifiers that were retrained on the app-review data.

9. Is the performance (precision, recall, run-time, etc.) of the technique verified? If yes, how did they verify it and what are the results?

Yes. The authors evaluated the classification performance of the three SSC methods combined with the four base classifiers at four different training ratios (10%, 30%, 50%, and 70%).

10. Does the paper replicate the results of previous work? If yes, leave a summary of the findings (confirm/partially confirms/contradicts).

No

11. What success metrics are used?

Transductive accuracy (accuracy in predicting the unlabeled instances within the training sample) and inductive accuracy (accuracy in predicting unseen test data).
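The sketch below illustrates how these two metrics could be computed for a given labeled-data ratio. It is a reconstruction under assumptions (scikit-learn conventions, -1 marking unlabeled instances, an illustrative 70/30 split), not the authors' evaluation code.

```python
# Sketch of the two reported metrics (assumed reconstruction; the function
# name, split sizes, and masking strategy are illustrative).
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def evaluate_ssc(model, X, y, labeled_ratio=0.10, test_size=0.3, seed=0):
    """Fit an SSC model and report transductive and inductive accuracy."""
    y = np.asarray(y)
    # Hold out unseen test data for the inductive measure.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y)

    # Mask all but `labeled_ratio` of the training labels (unlabeled = -1).
    rng = np.random.default_rng(seed)
    mask = rng.random(len(y_train)) >= labeled_ratio
    y_partial = np.where(mask, -1, y_train)

    model.fit(X_train, y_partial)

    # Transductive accuracy: the unlabeled instances inside the training set.
    transductive = accuracy_score(y_train[mask], model.predict(X_train[mask]))
    # Inductive accuracy: the unseen test data.
    inductive = accuracy_score(y_test, model.predict(X_test))
    return transductive, inductive
```

A caller would pass, for example, the self-training model and TF-IDF features from the earlier sketch and vary `labeled_ratio` over 0.10, 0.30, 0.50, and 0.70 to mirror the paper's training ratios.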

12. Write down any other comments/notes here.

-