Conference paper Open Access

# On the Effectiveness of Manual and Automatic Unit Test Generation: Ten Years Later

Domenico Serra; Giovanni Grano; Fabio Palomba; Filomena Ferrucci; Harald C. Gall; Alberto Bacchelli

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Domenico Serra</dc:creator>
<dc:creator>Giovanni Grano</dc:creator>
<dc:creator>Fabio Palomba</dc:creator>
<dc:creator>Filomena Ferrucci</dc:creator>
<dc:creator>Harald C. Gall</dc:creator>
<dc:creator>Alberto Bacchelli</dc:creator>
<dc:date>2019-05-26</dc:date>
<dc:description>Good unit tests play a paramount role in fostering and evaluating software quality. However, writing effective tests is an extremely costly and time-consuming practice. To reduce this burden on developers, researchers have devised ingenious techniques to automatically generate test suites for existing code bases. Nevertheless, how automatically generated test cases fare against manually written ones remains an open research question.

In 2008, Bacchelli et al. conducted an initial case study comparing automatically and manually generated test suites. Since the last ten years have witnessed a large body of work on novel approaches and tools for automatic test generation, in this paper we revisit their study using current tools and complement their research method by evaluating these tools' ability to find regressions.</dc:description>
<dc:description>Preprint of the publication appeared in the proceedings of the 16th International Conference on Mining Software Repositories (MSR 2019), Montréal, Canada, 2019.</dc:description>
<dc:identifier>https://zenodo.org/record/2595232</dc:identifier>
<dc:identifier>10.5281/zenodo.2595232</dc:identifier>
<dc:identifier>oai:zenodo.org:2595232</dc:identifier>
<dc:language>eng</dc:language>
<dc:relation>info:eu-repo/grantAgreement/SNSF/Careers/PP00P2_170529/</dc:relation>
<dc:relation>doi:10.5281/zenodo.2595231</dc:relation>
<dc:relation>url:https://zenodo.org/communities/empirical-software-engineering</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:subject>Software Testing</dc:subject>
<dc:subject>Automatic Test Case Generation</dc:subject>
<dc:subject>Empirical Studies</dc:subject>
<dc:title>On the Effectiveness of Manual and Automatic Unit Test Generation: Ten Years Later</dc:title>
<dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
<dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
