
Published April 28, 2023 | Version v3
Software | Open Access

An Empirical Study of Unit Test Generation using Large Language Models

Creators

  • Authors

Description

Code generation models generate code from a prompt consisting of a code comment, existing code, or a combination of both. Although such models, e.g., GitHub Copilot, are increasingly adopted for code generation, it is unclear whether they can be used successfully for unit test generation without fine-tuning. To fill this gap, we investigated how well three generative models, CodeGen, Codex, and GPT-3.5, can generate test cases in terms of coverage and quality. We used the HumanEval and EvoSuite SF110 benchmarks to study the effect of context in the unit test generation prompt. We evaluated the models based on compilation rates, test correctness, coverage, and test smells. The Codex model achieved above 80% coverage on the HumanEval dataset, but no model achieved more than 2% coverage on the SF110 benchmark. The generated tests also suffer from test smells such as Assertion Roulette and Magic Number Test.
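
For illustration only, the sketch below shows how a unit test generation prompt (an instruction plus the code under test) might be assembled and sent to a chat model. It is not the artifact's actual pipeline: the example function, the prompt wording, and the use of the pre-1.0 `openai` Python client with `gpt-3.5-turbo` are assumptions made for this sketch.

```python
# Minimal sketch (not the artifact's code): build a unit-test-generation prompt
# from a function under test and query a chat model. Assumes the pre-1.0
# `openai` Python client (openai.ChatCompletion) and an API key in the
# OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical function under test, used only to illustrate the prompt layout.
FUNCTION_UNDER_TEST = '''
def add(a: int, b: int) -> int:
    """Return the sum of a and b."""
    return a + b
'''

prompt = (
    "Write pytest unit tests for the following Python function. "
    "Cover normal and edge cases.\n\n"
    f"{FUNCTION_UNDER_TEST}"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.0,  # low temperature for more reproducible output
)

# The generated test code can then be compiled/run and measured for
# coverage and test smells, as in the study.
generated_tests = response["choices"][0]["message"]["content"]
print(generated_tests)
```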

Notes

The previous version contained a config.json file that included OpenAI API keys. We have invalidated those keys and uploaded a new version without config.json.

Files (1.2 GB)

Empirical_Study_LLM-Based_Unit_Test_Generation.zip