An Empirical Study of Unit Test Generation using Large Language Models
Description
Code generation models generate code from a prompt consisting of a code comment, existing code, or a combination of both. Although such models, e.g., GitHub Copilot, are increasingly being adopted to generate code, it is unclear whether they can be used successfully for unit test generation without fine-tuning. To fill this gap, we investigated how well three generative models, CodeGen, Codex, and GPT-3.5, can generate test cases in terms of coverage and quality. We used the HumanEval and Evosuite SF110 benchmarks to understand the effect of context in the unit test generation prompt. We evaluated the models based on compilation rates, test correctness, coverage, and test smells. The Codex model achieved above 80% coverage on the HumanEval dataset, but no model achieved more than 2% coverage on the SF110 benchmark. The generated tests also suffer from test smells such as Assertion Roulette and Magic Number Test.
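For readers unfamiliar with the two test smells named above, the sketch below shows what they typically look like in a JUnit 4 test. The `Calculator` class, method names, and values are hypothetical and included only so the example compiles; they are not taken from the study's generated tests.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Minimal hypothetical class under test, included only so the example compiles.
class Calculator {
    int add(int a, int b) { return a + b; }
    int subtract(int a, int b) { return a - b; }
}

public class CalculatorTest {

    @Test
    public void testCalculator() {
        Calculator calc = new Calculator();

        // Assertion Roulette: several assertions in one test without failure
        // messages, so a failing run does not indicate which check broke.
        // Magic Number Test: unexplained numeric literals appear directly in
        // the assertions instead of named constants.
        assertEquals(42, calc.add(40, 2));
        assertEquals(7, calc.subtract(10, 3));
        assertEquals(0, calc.add(-5, 5));
    }
}
```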
Files

Name | Size | MD5 |
---|---|---|
Empirical_Study_LLM-Based_Unit_Test_Generation.zip | 1.2 GB | 987352dd45fd8f65f590d885d12b9257 |