
Published May 17, 2024 | Version ASE-V1
Publication | Open

Artifact of Large Language Model Based Mutations in Genetic Improvement (Journal Version)

Description

Context

This is the artifact of the paper Large Language Model Based Mutations in Genetic Improvement (Journal Version), which follows up on our preliminary SSBSE 2023 paper with additional data and functionality.

In our preliminary work, we explored the feasibility of combining the Gin Java GI toolkit with OpenAI LLMs to generate an edit for the JCodec tool. In the journal version, we extend this investigation to three LLMs, three types of prompts, and five real-world software projects. We sample the edits at random as well as via local search. We share here the code, the data, and all the logs collected during our experiments.
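For illustration only, the two sampling strategies can be sketched as below. Here `ask_llm` and `fitness` are hypothetical stand-ins for the LLM edit generator and the test-based fitness; Gin itself is written in Java, and its actual edit representation and evaluation differ.

```python
def random_search(ask_llm, fitness, steps):
    """Sample LLM-suggested edits independently; keep the best one seen."""
    best_patch, best_fit = None, float("-inf")
    for _ in range(steps):
        patch = ask_llm(None)          # each edit is sampled from scratch
        fit = fitness(patch)
        if fit > best_fit:
            best_patch, best_fit = patch, fit
    return best_patch, best_fit

def local_search(ask_llm, fitness, steps, start):
    """Hill climbing: ask the LLM to mutate the current patch, keep non-worsening edits."""
    current, cur_fit = start, fitness(start)
    for _ in range(steps):
        neighbour = ask_llm(current)   # edit derived from the current patch
        fit = fitness(neighbour)
        if fit >= cur_fit:
            current, cur_fit = neighbour, fit
    return current, cur_fit
```

The difference between the two is only where the LLM prompt is anchored: random search ignores the incumbent patch, while local search always mutates it.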

Files

  • The code of Gin with support for LLM queries: gin-llm-llm.zip.
  • A script to restart the Ollama service, following our investigation of why some prompts failed in Section 5.3: restart-ollama.py.
  • The logs recording the output of the edits during `gin` runs on each of the projects: gin_executions_allstderrs.zip.
    • We further investigated these logs manually to pinpoint possible reasons why some prompts failed to suggest a good patch, in Section 5.3.
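A minimal sketch of what a restart helper like restart-ollama.py might do, assuming the Ollama service is managed by systemd; the actual script in this artifact may work differently. The command runner is injectable so the logic can be tested without touching the system.

```python
import subprocess

def restart_command(service="ollama"):
    """Build the (hypothetical) systemctl command used to restart the service."""
    return ["systemctl", "restart", service]

def restart_ollama(runner=subprocess.run):
    """Restart the Ollama service; `runner` defaults to subprocess.run."""
    return runner(restart_command(), check=True)
```

In practice such a helper would be invoked between experiment batches whenever prompts start failing, before retrying the affected queries.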

Note: The code, LLM prompts and experimental infrastructure, data from the evaluation, and results are available as open source here. The code is also available under the ‘llm’ branch of github.com/gintool/gin (commit f2f6e10; branched from master commit 2359f57, pending full integration with Gin).

 

Experimental Setups

We evaluated our idea on the following targets:

Project     | URL                           | Branch/Tag
JCodec      | github.com/jcodec/jcodec      | master (7e52834)
JUnit4      | github.com/junit-team/junit4  | r4.13.2
GSON        | github.com/google/gson        | gson-parent-2.10.1
commons-net | github.com/apache/commons-net | rel/commons-net-3.10.0
karate      | github.com/karatelabs/karate  | v1.4.1

 

Tested on the following machines/specifications (Java, Maven, and Ollama versions, e.g. 17.0.9 & 3.9.6 & 0.1.27, are listed per experiment):

Project, Search (LLM)        | Machine Specification                              | Java Version | Maven Version | Ollama Version
JCodec, RS (OpenAI, Mistral) | AMD Threadripper 3990x, 64C/128T, 128GB, Titan RTX | 17.0.10      | 3.9.6         | 0.1.28
JCodec, RS (tinydolphin)     | Intel Xeon W-2245, 8C/16T, 128GB, RTX 2080 Ti      | 17.0.8       | 3.9.6         | 0.1.27
JCodec, LS (*)               | Intel Xeon 2620v3, 12C/24T, 32GB, Titan X          | 17.0.9       | 3.9.0         | 0.1.24
GSON, RS (OpenAI, Mistral)   | AMD Threadripper 3990x, 64C/128T, 128GB, Titan RTX | 17.0.10      | 3.9.6         | 0.1.28
GSON, RS (tinydolphin)       | Intel Xeon W-2245, 8C/16T, 128GB, RTX 2080 Ti      | 17.0.8       | 3.9.6         | 0.1.27
GSON, LS (*)                 | Intel Xeon W-2245, 8C/16T, 128GB, RTX 2080 Ti      | 17.0.8       | 3.9.6         | 0.1.27
JUnit4, RS & LS (*)          | Intel Xeon 2620v3, 12C/24T, 32GB, Titan X          | 17.0.9       | 3.9.0         | 0.1.24
commons-net, RS & LS (*)     | Intel Xeon 2620v4, 16C/32T, 32GB                   | 17.0.9       | 3.9.6         | 0.1.27
karate, RS & LS (*)          | Intel Xeon 2620v3, 12C/24T, 32GB, Titan X          | 17.0.9       | 3.9.0         | 0.1.24

 

The table gives each machine's specification for each experiment. An experiment is described by the target project, the search type (local search (LS) or random search (RS)), and the LLM used. An asterisk means the specification applies to all LLMs used.

 

We employed Ollama in its CPU mode to prompt TinyDolphin (0f9dd11f824c, 637MB) and Mistral (61e88e884507, 4.1GB). We also used ChatGPT 3.5.
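A local Ollama instance of the kind used here is typically prompted over its HTTP API. The sketch below shows one way to do that with only the standard library, assuming Ollama's default port (11434) and its `/api/generate` endpoint; it is an illustration, not the exact query code from this artifact.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_request(model, prompt):
    """Non-streaming payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def query_ollama(model, prompt, url=OLLAMA_URL):
    """POST the prompt to a local Ollama server and return the generated text."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = request.Request(url, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to `False` the server returns a single JSON object whose `response` field holds the full completion, which is convenient when each prompt produces one candidate edit.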

Files (298.5 MB)

  • md5:0b5bcf9952a6d8711c27f44a50810ebb (29.7 MB)
  • md5:e50948b0a7f66e34cc6c5e57c32ae4ab (268.8 MB)
  • md5:98f169c2a0e39203a8c666372df363fc (412 Bytes)