Published September 2024 | Version v2
MLFuzz
Description
Artifacts for FSE2025 Paper #139
Table of Contents
- Background
- Install
- Usage
- Bug reports
Background
MLFuzz is a testing tool for finding bugs in SMT (Satisfiability Modulo Theories) solvers.
It uses large language models to generate SMT instances and applies differential testing to discover bugs in the solvers under test.
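The differential-testing oracle can be sketched as follows: run two solvers on the same SMT instance and flag a soundness bug when they disagree on sat vs. unsat. This is a minimal illustration with hypothetical helper names, not MLFuzz's actual implementation:

```python
import subprocess

def solve(solver_bin, smt2_file, timeout=10):
    """Run an SMT solver binary on an .smt2 file and return its first
    output line ("sat", "unsat", "unknown", ...)."""
    try:
        out = subprocess.run(
            [solver_bin, smt2_file],
            capture_output=True, text=True, timeout=timeout,
        ).stdout.strip().splitlines()
        return out[0] if out else "unknown"
    except subprocess.TimeoutExpired:
        return "timeout"

def is_discrepancy(result1, result2):
    """Two solvers disagreeing on sat vs. unsat signals a soundness bug;
    timeouts and "unknown" results are not counted as discrepancies."""
    return {result1, result2} == {"sat", "unsat"}
```

For example, `is_discrepancy(solve("/home/z3/build/z3", "test.smt2"), solve("/home/cvc5/build/bin/cvc5", "test.smt2"))` would report whether the two solvers disagree on `test.smt2`.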
Install
Before getting started, make sure you have the following dependencies installed:
1. Models: Download CodeLlama and Llama 2, and follow the official CodeLlama and Llama 2 instructions for installation.
To try the GPT-4-based variant instead, set your OpenAI API key in the environment as follows:
export OPENAI_API_KEY=your_openai_api_key
2. Python Packages: Install the necessary Python packages using the following command:
pip install torch fairscale fire sentencepiece antlr4-python3-runtime==4.9.2
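With the dependencies in place, a quick way to verify that the API key from step 1 is visible before launching the GPT-4 variant (a generic sanity check, not part of MLFuzz):

```python
import os

def require_api_key(env=os.environ):
    """Return the OpenAI API key from the environment, or fail fast."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY before running the GPT-4 variant.")
    return key
```

Calling `require_api_key()` at startup turns a missing key into an immediate, clear error instead of a failure mid-run.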
Usage
Run MLFuzz on Z3 and cvc5 with the following command:
torchrun --nproc_per_node 1 mlfuzz.py --solver1=z3 --solver2=cvc5 --solverbin1=/home/z3/build/z3 \
--solverbin2=/home/cvc5/build/bin/cvc5 --generator=/home/CodeLlama-7b-Instruct/ \
--optimizer=/home/Llama-7b
- --solver1 and --solver2: Specify the solvers under test.
- --solverbin1 and --solverbin2: Provide the paths to the solver binaries.
- --generator and --optimizer: Set the paths to the generation and optimization models.
Adjust the paths and parameters according to your setup.
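The invocation above can also be assembled programmatically, e.g. when sweeping over several solver builds. The sketch below just builds the argument list from the placeholder paths of the example command (it does not launch anything):

```python
# Build the MLFuzz invocation from a configuration dict.
# Paths are the placeholder paths from the example above; adjust to your setup.
config = {
    "solver1": "z3",
    "solver2": "cvc5",
    "solverbin1": "/home/z3/build/z3",
    "solverbin2": "/home/cvc5/build/bin/cvc5",
    "generator": "/home/CodeLlama-7b-Instruct/",
    "optimizer": "/home/Llama-7b",
}
cmd = ["torchrun", "--nproc_per_node", "1", "mlfuzz.py"]
cmd += [f"--{key}={value}" for key, value in config.items()]
# subprocess.run(cmd) would then launch the fuzzing session.
```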
To use MLFuzz with GPT-4:
python mlfuzz.py --solver1=z3 --solver2=cvc5 --solverbin1=/home/z3/build/z3 \
--solverbin2=/home/cvc5/build/bin/cvc5
Bug reports
Z3
cvc5
Bitwuzla
- https://github.com/bitwuzla/bitwuzla/issues/92
- https://github.com/bitwuzla/bitwuzla/issues/93
- https://github.com/bitwuzla/bitwuzla/issues/94
- https://github.com/bitwuzla/bitwuzla/issues/95
Files
| Name | Size |
|---|---|
| mlfuzz-fse.zip (md5:4b59ed67949d9b66e1b955a6e78fb918) | 3.3 MB |