Does Co-Development with AI Assistants Lead to More Maintainable Code? Replication Package
Description
This is the replication package for the study "Does Co-Development with AI Assistants Lead to More Maintainable Code?"
Abstract for the registered report:
[Background/Context] AI assistants like GitHub Copilot are transforming software engineering, with several studies highlighting productivity improvements. However, their impact on code quality, particularly in terms of maintainability, requires further investigation.
[Objective/Aim] This study aims to examine the influence of AI assistants on software maintainability, specifically assessing how these tools affect the ability of developers to evolve code.
[Method] We will conduct a two-phased controlled experiment involving professional developers. In Phase 1, developers will add a new feature to a Java project, with or without the aid of an AI assistant. Phase 2, a randomized controlled trial, will involve a different set of developers evolving randomly assigned Phase 1 projects, working without AI assistants. We will employ Bayesian analysis to evaluate differences in completion time, perceived productivity, code quality, and test coverage.
Note: To maintain the integrity of the study, i.e., to prevent any leakage into AI assistants' training data, we have chosen not to host the code in a public git repository. Instead, all relevant documents and code are shared through a replication package on Zenodo, available as PDF documents generated by repo2pdf (https://github.com/BankkRoll/repo2pdf). We have deliberately used settings that obfuscate the code (e.g., line numbers) to ensure it will not be scraped by any large language models before the study has been completed.
Contents:
- Task 1 instructions.
- Task 2 instructions.
- The source code the participants will receive.
- A causal graph with analysis details.
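
For illustration only, below is a minimal sketch of how a Bayesian comparison of Phase 1 completion times (AI-assisted vs. control group) could be set up. It uses Python with PyMC and placeholder data; the group labels, priors, and data are assumptions for demonstration, and the actual study models follow the causal graph and analysis details included in this package.

```python
# Hypothetical sketch: Bayesian comparison of task completion times
# between an AI-assisted group and a control group. Data, priors, and
# variable names are illustrative placeholders, not study artifacts.
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)

# Placeholder completion times in minutes (simulated, not real data).
time_ai = rng.normal(55, 12, size=20)       # participants with AI assistant
time_control = rng.normal(65, 12, size=20)  # participants without

with pm.Model() as model:
    # Weakly informative priors on the group means and a shared spread.
    mu_ai = pm.Normal("mu_ai", mu=60, sigma=30)
    mu_control = pm.Normal("mu_control", mu=60, sigma=30)
    sigma = pm.HalfNormal("sigma", sigma=20)

    # Likelihoods for the observed completion times of each group.
    pm.Normal("obs_ai", mu=mu_ai, sigma=sigma, observed=time_ai)
    pm.Normal("obs_control", mu=mu_control, sigma=sigma, observed=time_control)

    # Quantity of interest: difference in mean completion time.
    diff = pm.Deterministic("diff", mu_ai - mu_control)

    trace = pm.sample(2000, tune=1000, chains=4, random_seed=42)

# Posterior probability that the AI-assisted group was faster.
p_faster = (trace.posterior["diff"].values < 0).mean()
print(f"P(AI-assisted group faster) = {p_faster:.2f}")
```

The same structure extends to the other outcomes (perceived productivity, code quality, test coverage) by swapping the likelihood and adding covariates implied by the causal graph.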