Replication Kit: "Are Unit and Integration Test Definitions Still Valid for Modern Java Projects? An Empirical Study on Open-Source Projects"
Description
Replication Kit for the Paper "Are Unit and Integration Test Definitions Still Valid for Modern Java Projects? An Empirical Study on Open-Source Projects"
This additional material enables other researchers to replicate our results. Furthermore, we want to facilitate further insights that may be gained from our data sets.
Structure
The structure of the replication kit is as follows:
- additional_visualizations: contains additional visualizations (Venn diagrams) for each project and each of the data sets that we used
- data_analysis: contains the two Python scripts that we used to analyze our raw data (one for each research question)
- data_collection_tools: contains all source code used for the data collection, including the versions of the COMFORT framework, the BugFixClassifier, and the SmartSHARK tools that we used
- mongodb_no_authors: contains an archived dump of our MongoDB that we created by executing our data collection tools. The "comfort" database can be restored via the mongorestore command (a quick sanity check is sketched below this list).
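After restoring the dump, a quick way to check that the database is available is to connect to it with mongoengine (which the analysis scripts also require) and list its collections. This is a minimal sketch that assumes a local MongoDB instance on the default port; the collection names depend on the restored dump.

```python
from mongoengine import connect
from mongoengine.connection import get_db

# Assumes a local MongoDB on the default port (27017) into which the
# "comfort" database has been restored via mongorestore.
connect('comfort')
db = get_db()

# Print the collections of the restored database as a sanity check.
print(sorted(db.list_collection_names()))
```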
Additional Visualizations
We provide two additional visualizations for each project:
1) <project_name>\_disj\_ieee\_venn (visualizations for the DISJ data set)
2) <project_name>\_all\_ieee\_venn (visualizations for the ALL data set)
For each of these data sets, there is one visualization per project that shows four Venn diagrams, one for each of the defect types. These Venn diagrams show the number of defects that were detected by unit tests only, by integration tests only, or by both.
Furthermore, we added boxplots for each of the data sets (i.e., ALL and DISJ) showing the scores of unit and integration tests for each defect type.
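To illustrate how figures of this kind can be produced, the following is a minimal sketch with made-up numbers; it is not the code that generated the shipped visualizations. The subset counts, the column names (defect_type, test_type, score), and the use of the matplotlib-venn package (which is not part of the requirements listed below) are assumptions for illustration only.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib_venn import venn2  # extra package, not in the requirements below

# Venn diagram for one project and one defect type: defects detected only by
# unit tests, only by integration tests, or by both (made-up counts).
venn2(subsets=(12, 30, 5), set_labels=('Unit tests', 'Integration tests'))
plt.title('Defects detected per test level (example data)')
plt.savefig('example_venn.png')
plt.close()

# Boxplots of (made-up) test scores per defect type and test level.
scores = pd.DataFrame({
    'defect_type': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'],
    'test_type': ['unit', 'unit', 'integration', 'integration'] * 2,
    'score': [0.4, 0.5, 0.7, 0.8, 0.2, 0.3, 0.9, 0.6],
})
sns.boxplot(data=scores, x='defect_type', y='score', hue='test_type')
plt.savefig('example_boxplots.png')
plt.close()
```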
Analysis Scripts
Requirements:
- python3.5
- tabulate
- scipy
- seaborn
- mongoengine
- pycoshark
- pandas
- matplotlib
Both Python scripts contain all of the code for the statistical analysis that we performed.
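For orientation only, the snippet below shows the kind of two-sample comparison that can be run with scipy on such scores; the sample values are made up, and the concrete statistical procedures used in the paper are the ones implemented in the two provided scripts.

```python
from scipy import stats

# Made-up score samples; the real values come from the restored "comfort"
# database and are processed by the two analysis scripts.
unit_scores = [0.20, 0.40, 0.10, 0.50, 0.30, 0.25]
integration_scores = [0.60, 0.80, 0.70, 0.90, 0.55, 0.65]

# Mann-Whitney U test as an example of a nonparametric two-sample comparison.
result = stats.mannwhitneyu(unit_scores, integration_scores, alternative='two-sided')
print('U = {:.1f}, p = {:.4f}'.format(result.statistic, result.pvalue))
```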
Data Collection Tools
We provide all data collection tools that we implemented and used for our paper. Overall, this directory contains six different projects and one Python script:
- BugFixClassifier: Used to classify our defects.
- comfort-core: Core of the COMFORT framework. Used to classify our tests into unit and integration tests and to calculate different metrics for these tests.
- comfort-jacoco-listner: Used to intercept the coverage collection process while we executed the tests of our case study projects.
- issueSHARK: Used to collect data from the ITSs of the projects.
- pycoSHARK: Library that contains the models for the ORM mapper that is used inside the SmartSHARK environment.
- vcsSHARK: Used to collect data from the VCSs of the projects.