Conference paper Open Access

On the Effectiveness of Manual and Automatic Unit Test Generation: Ten Years Later

Domenico Serra; Giovanni Grano; Fabio Palomba; Filomena Ferrucci; Harald C. Gall; Alberto Bacchelli

Good unit tests play a paramount role in fostering and evaluating software quality. However, writing effective tests is an extremely costly and time-consuming practice. To reduce this burden for developers, researchers have devised ingenious techniques to automatically generate test suites for existing code bases. Nevertheless, how automatically generated test cases fare against manually written ones remains an open research question.

In 2008, Bacchelli et al. conducted an initial case study comparing automatically and manually generated test suites. Since the last ten years have witnessed a large amount of work on novel approaches and tools for automatic test generation, in this paper we revisit their study using current tools and complement their research method by evaluating these tools' ability to find regressions.

Preprint of the publication that appeared in the proceedings of the 16th International Conference on Mining Software Repositories (MSR 2019), Montréal, Canada, 2019.
Files (163.4 kB)
serra19msr.pdf (163.4 kB, md5:59e9109a6f9fc8d766f017b97ca2d47e)
Statistics (all versions / this version)
Views: 147 / 147
Downloads: 99 / 99
Data volume: 16.2 MB / 16.2 MB
Unique views: 139 / 139
Unique downloads: 93 / 93
