The impact of human discussions on just-in-time quality assurance: An empirical study on OpenStack and Eclipse
1. Does the paper propose a new opinion mining approach?
No
2. Which opinion mining techniques are used (list all of them, clearly stating their name/reference)?
SentiStrength
3. Which opinion mining approaches in the paper are publicly available? Write down their name and links. If no approach is publicly available, leave it blank or None.
SentiStrength
4. What is the main goal of the whole study?
To build logistic regression models that study the impact of the characteristics of issue and review discussions on the defect-proneness of a patch.
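The modeling step named above can be sketched with a minimal stdlib-only logistic regression trained by gradient descent. This is an illustration under stated assumptions, not the paper's actual models: the two features (comment count, mean sentiment) and the toy data are hypothetical stand-ins for the discussion metrics the authors compute from OpenStack and Eclipse.

```python
import math

# Minimal logistic regression sketch (stdlib only). Feature names and toy
# data are hypothetical; the paper fits its models on real discussion
# metrics. Features per patch: [num_comments, mean_sentiment];
# label: 1 = defect-inducing patch.
X = [[2, 0.5], [10, -0.8], [1, 0.9], [12, -0.5], [3, 0.2], [9, -0.9]]
y = [0, 1, 0, 1, 0, 1]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit(X, y, lr=0.1, epochs=2000):
    """Stochastic gradient descent on the log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

w, b = fit(X, y)

def predict(x):
    """Predicted probability that a patch is defect-inducing."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

In the paper's setting, the fitted coefficients (rather than the predictions) are the object of study, since the goal is to explain which discussion characteristics are associated with defect-proneness.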
5. What the researchers want to achieve by applying the technique(s) (e.g., calculate the sentiment polarity of app reviews)?
To obtain sentiment scores for the pre-processed issue and review comments.
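The scoring step above can be illustrated with a tiny lexicon-based sketch that mimics SentiStrength's dual output, a positive score from 1 to 5 and a negative score from -1 to -5 per text. This is not SentiStrength itself: the word lists and the max/min scoring rule are simplifying assumptions for illustration only.

```python
# Placeholder lexicons (NOT SentiStrength's actual word lists): each word
# maps to a strength on SentiStrength's positive (1..5) or negative
# (-1..-5) scale.
POSITIVE = {"great": 3, "good": 2, "thanks": 2, "nice": 2}
NEGATIVE = {"broken": -3, "bad": -2, "ugly": -2, "fails": -3}

def score_comment(comment: str) -> tuple[int, int]:
    """Return a (positive, negative) score pair for a pre-processed comment.

    Like SentiStrength, the positive score is at least 1 and the negative
    score is at most -1, even for neutral text.
    """
    words = comment.lower().split()
    pos = max([POSITIVE.get(w, 1) for w in words] + [1])
    neg = min([NEGATIVE.get(w, -1) for w in words] + [-1])
    return pos, neg

print(score_comment("thanks the patch looks great"))       # (3, -1)
print(score_comment("this build fails and looks broken"))  # (1, -3)
```

The actual study runs the SentiStrength tool over the pre-processed comments; only the dual-scale output shape is carried over here.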
6. Which dataset(s) the technique is applied on?
Version control, issue report, and code review data from the OpenStack and Eclipse projects.
7. Is/Are the dataset(s) publicly available online? If yes, please indicate their name and links.
The paper provides a link, but it is no longer valid.
8. Is the application context (dataset or application domain) different from that for which the technique was originally designed?
Yes
9. Is the performance (precision, recall, run-time, etc.) of the technique verified? If yes, how did they verify it and what are the results?
No
10. Does the paper replicate the results of previous work? If yes, leave a summary of the findings (confirm/partially confirms/contradicts).
No
11. What success metrics are used?
N/A
12. Write down any other comments/notes here.
-