LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning
Description
Resources related to the research work "LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning"
1. Dependencies
The dependencies are listed in `requirements.txt`; run `pip install -r requirements.txt` to install the required libraries.
2. Brief Introduction
LLaMA-Reviewer is an innovative framework that leverages the capabilities of LLaMA, a popular LLM, in the realm of code review. Mindful of resource constraints, it employs parameter-efficient fine-tuning (PEFT) methods, delivering high performance while training less than 1% of the model's parameters.
An extensive evaluation of LLaMA-Reviewer is conducted on two diverse, publicly available datasets. Notably, even with the smallest LLaMA base model consisting of 6.7B parameters and a limited number of tuning epochs, LLaMA-Reviewer equals the performance of existing code-review-focused models.
3. Project Structure
- For baselines that do not have immediate results, testing code is provided. The necessary datasets and models can be acquired from their respective repositories.
- For LoRA, the `xturing-base-model` weights need to be added to the corresponding weights folder.
- For prefix-tuning, the `lit-llama-base-model` should be placed at `prefix(lit-llama)/lit-llama-main/checkpoints/lit-llama/7B`.
- The data has been omitted, but it can be regenerated using the provided code and the original data from the repositories of the two papers referenced in the study.
- The results and outputs are preserved in the provided 7-zip files.
- The base models in this deposit are split into multi-volume archives; they can also be obtained by following the guidelines of the corresponding framework (see the extraction sketch after the directory tree below).
```
.
├── baselines              # Baselines that don't have immediate results
│   ├── AUGER
│   ├── CommentFinder
│   └── Tufano
├── LoRA(xturing)          # Low-rank adaptation experiments, with LoRA rank of 8 or 16
│   ├── r=16
│   └── r=8
├── prefix(lit-llama)      # Prefix-tuning experiments
│   └── lit-llama-main
├── lit-llama-base-model   # Base model for prefix-tuning
│   └── lit-llama.pth
├── xturing-base-model     # Base model for LoRA
│   └── pytorch_model.bin
└── requirements.txt       # Requirements
```
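The base-model archives are provided as multi-volume 7-zip files (see the file list below). The following is a minimal extraction sketch, assuming the 7-Zip CLI (`7z`) is installed and that the volumes follow the usual `.7z.001`, `.7z.002`, ... naming; the archive name is a placeholder for whichever archive set you downloaded. Pointing `7z` at the first volume extracts the entire set:

```python
# Minimal sketch: extract a multi-volume 7-zip archive with the 7z CLI.
# "xturing-base-model.7z.001" is a placeholder; substitute the first volume
# of whichever archive set you downloaded from this deposit.
import subprocess

subprocess.run(
    ["7z", "x", "xturing-base-model.7z.001", "-oxturing-base-model"],
    check=True,  # raise if extraction fails
)
```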
Note: in the `r=8` folder, "code alpaca" means "PL+NL" data, while "code alpaca only code" means "only PL"
Note: in the `r=16` folder, "code alpaca" means "only PL"
Note (new): PEFT and the fine-tuning process are now well supported by Hugging Face's `transformers` library (together with the `peft` library), so it is more convenient to use them directly; see the sketch below.
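For reference, here is a minimal sketch of that route using `transformers` and `peft`. It is not the authors' exact training script: the base checkpoint name and hyperparameters are placeholders, and only the LoRA ranks (8/16) mirror the `r=8` / `r=16` folders in this deposit.

```python
# Minimal LoRA setup with transformers + peft (not the authors' exact script).
# The checkpoint name and hyperparameters are placeholders; the LoRA rank
# matches the r=8 / r=16 experiments in this deposit.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "decapoda-research/llama-7b-hf"  # placeholder: any LLaMA checkpoint in HF format
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                  # 8 or 16, matching the r=8 / r=16 experiments
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common LoRA target
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()        # typically reports well under 1% trainable parameters
```

The resulting PEFT-wrapped model can then be fine-tuned on the code-review datasets with the standard `transformers` `Trainer`.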
Files (24.7 GB)
| MD5 checksum | Size |
|---|---|
| cfe05fc559c2dd64db85a65906d3a68b | 7.1 MB |
| 003afdab54e618490d436e7812874b9e | 1.0 GB |
| d5b823b9cb185e968887393ae6900f20 | 1.0 GB |
| 3c418413f0e6df6021b6a12bc8f565bf | 1.0 GB |
| 37bf5129c1e5a5d7932ba214f24b03f8 | 1.0 GB |
| f99745ffb99a6e4f4fb7bd6132e4f841 | 1.0 GB |
| f145a673b0e69ac4e39b454699eee6ca | 1.0 GB |
| 8f083533e52c919e0b479cca1084559d | 1.0 GB |
| f7b4e0d552d57aa424c761eef4966a72 | 1.0 GB |
| 1e8396b555043ddc0fb8aa862b6ffbf0 | 1.0 GB |
| 91e5ad1b117f3a4a4a3d891dc94739ae | 1.0 GB |
| 2d97e7c5c79f68d09616985fe6bf29d9 | 1.0 GB |
| 2b556b210c6f22ac53b012aa0acb999d | 776.8 MB |
| 5cb31a3b854041d19f672089fc051608 | 234.0 MB |
| debe4b844cdf3371d61995c541afe066 | 65.8 MB |
| 46b60ddfa515f4d80a07d40875e1ff53 | 2.4 kB |
| 3cb7151d1223a723c9331f04f9a6bd09 | 3.2 kB |
| 3964ad9f56cc04992e2472a5030f6207 | 1.0 GB |
| 77f36f65af6f1753b34042d4ba87e9f7 | 1.0 GB |
| 2c2b8f51b106000b836b19e985b4ed6f | 1.0 GB |
| 634746a8dc2f896f859a08b2ce0bc814 | 1.0 GB |
| c8e4fe407cd2aa41eb7ac39b3268b056 | 1.0 GB |
| de0eae9df0a00aa370a316f787e85174 | 1.0 GB |
| c4dd485c4f45477c3fb55b90bb51b39b | 1.0 GB |
| 306194f764dfd6018d38309f84b97b8a | 1.0 GB |
| 014b4a9db0fb9413adcd6444a23b79f6 | 1.0 GB |
| 5fdada9910c434e43ba900e16c1635e5 | 1.0 GB |
| 7051b9464403b5150a00b3ca06f16a0e | 1.0 GB |
| ae440e0076119d3bba705c3b969cf698 | 572.4 MB |
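To check that a downloaded file is intact, its MD5 can be compared against the table above. A minimal sketch (the file name below is a placeholder for whichever volume you fetched):

```python
# Minimal sketch: verify a downloaded file against an MD5 checksum from the
# table above. "downloaded_volume.7z.001" is a placeholder file name.
import hashlib

def md5sum(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "cfe05fc559c2dd64db85a65906d3a68b"  # first entry in the table above
print(md5sum("downloaded_volume.7z.001") == expected)
```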