Using Large Language Models to Generate JUnit Tests: An Empirical Study
Description
Code generation models produce code from a prompt consisting of a code comment, existing code, or a combination of both. Although such models, e.g., GitHub Copilot, are increasingly being adopted to generate code, it is unclear whether they can be used successfully for unit test generation without fine-tuning. To fill this gap, we investigated how well three generative models, CodeGen, Codex, and GPT-3.5, can generate test cases in terms of coverage and quality. We used the HumanEval and EvoSuite SF110 benchmarks to understand the effect of context in the unit test generation prompt. We evaluated the models based on compilation rates, test correctness, coverage, and test smells. The Codex model achieved above 80% coverage on the HumanEval dataset, but no model achieved more than 2% coverage on the SF110 benchmark. The generated tests also suffer from test smells such as Assertion Roulette and Magic Number Test.
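For readers unfamiliar with the two smells named above, the following minimal sketch shows what they look like in a JUnit 5 test. The Calculator class, its methods, and all values are hypothetical illustrations and are not taken from the benchmark code or the generated tests in this archive.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class under test, defined inline so the sketch is self-contained.
class Calculator {
    int add(int a, int b) { return a + b; }
    int subtract(int a, int b) { return a - b; }
    double divide(double a, double b) { return a / b; }
}

// Illustrates the two test smells mentioned in the description.
class CalculatorTest {

    @Test
    void testOperations() {
        Calculator calc = new Calculator();
        // Magic Number Test: unexplained numeric literals appear directly in assertions.
        // Assertion Roulette: several assertions share one test method with no failure
        // messages, so a failing run does not indicate which check actually broke.
        assertEquals(42, calc.add(40, 2));
        assertEquals(3.14, calc.divide(6.28, 2.0), 0.001);
        assertEquals(0, calc.subtract(21, 21));
    }
}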
Files
JUnit_Tests_Generation_using_LLMs.zip (1.2 GB)
md5:0833264d2759e5541434544d051bdce1