Published August 4, 2024 | Version v4.0.0

CURLoRA: Leveraging CUR Matrix Decomposition for Stable LLM Continual Fine-Tuning and Catastrophic Forgetting Mitigation

Description

This repository contains the code for the CURLoRA research paper. CURLoRA is a novel approach to fine-tuning large language models (LLMs) that leverages CUR matrix decomposition in the context of Low-Rank Adaptation (LoRA). The method addresses two critical challenges in LLM fine-tuning: mitigating catastrophic forgetting during continual learning and reducing the number of trainable parameters. We propose a unique modification to the CUR decomposition process that enables a more efficient and stable way to adapt LLMs to new tasks without compromising existing knowledge. Experiments on multiple datasets demonstrate that CURLoRA outperforms standard LoRA in mitigating catastrophic forgetting: it maintains model stability and performance across tasks while significantly reducing the number of trainable parameters. Our results show that CURLoRA achieves superior accuracy and perplexity scores compared to LoRA, particularly in scenarios with limited data.
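For intuition, the core mechanism can be sketched in a few lines of PyTorch. This is a hypothetical, minimal sketch (the class name CURLoRALinearSketch and the exact sampling details are illustrative; see code/curlora.py for the actual implementation): columns C and rows R are sampled from the frozen pretrained weight W with inverted probabilities, and only a small mixing matrix U, initialized to zeros, is trained, so the adapted weight W + CUR starts exactly at the pretrained behavior.

    import torch
    import torch.nn as nn

    class CURLoRALinearSketch(nn.Module):
        """Illustrative CURLoRA-style adapter around a frozen nn.Linear."""
        def __init__(self, base: nn.Linear, rank: int = 8):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # pretrained weights stay frozen

            W = self.base.weight.data  # shape: (out_features, in_features)
            # Inverted-probability sampling: prefer columns/rows with
            # SMALL norms, keeping the adaptation path low-magnitude.
            col_p = 1.0 / (W.norm(dim=0) ** 2 + 1e-8)
            row_p = 1.0 / (W.norm(dim=1) ** 2 + 1e-8)
            cols = torch.multinomial(col_p / col_p.sum(), rank)
            rows = torch.multinomial(row_p / row_p.sum(), rank)

            self.register_buffer("C", W[:, cols])           # (out, rank), fixed
            self.register_buffer("R", W[rows, :])           # (rank, in), fixed
            self.U = nn.Parameter(torch.zeros(rank, rank))  # only trainable part

        def forward(self, x):
            # U is zero at initialization, so the adapted layer initially
            # reproduces the pretrained output exactly: y = xW^T + x(CUR)^T.
            return self.base(x) + x @ (self.C @ self.U @ self.R).T

For a 1024x1024 projection at rank 16, only the 16x16 matrix U (256 parameters) is trained against roughly a million frozen weights, and because C and R are fixed slices of W rather than free parameters, the adaptation stays anchored to the pretrained model across tasks.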

What's New per Release:

  • v1.0.0: Initial release with a stable code implementation.
  • v2.0.0: Conducted additional experiments with more ranks and added the results to the paper; expanded the description and explanation of the core idea for clarity.
  • v3.0.0: Conducted additional experiments with the GPT2 model, integrated the results into the paper, and added the experiment code.
  • v4.0.0: Added the code for fine-tuning GPT2-Large for Q&A with CURLoRA and SFTTrainer on the SQuAD dataset.

Key Features:

  • Implementation of the CURLoRA approach.
  • Modified CUR matrix decomposition for stable LLM continual fine-tuning and catastrophic forgetting mitigation.
  • Comparison of LoRA vs. CURLoRA in LLM continual learning and catastrophic forgetting across multiple tasks/datasets.

Contents:

  • CURLoRA.pdf: Research paper detailing the methodology, math, theoretical analysis, and experimental results of CURLoRA.
  • code/: Directory containing the implementation of CURLoRA and the experiments.
    • code/curlora.py: The CURLoRA classes.
    • code/utils.py: Helper functions.
    • code/lora.py: The LoRA classes.
    • code/curlora_experiment.ipynb: CURLoRA experiment with Mistral 7B (fine-tuning on MRPC, SST-2, and Sentiment140).
    • code/curlora_experiment-gpt.ipynb: CURLoRA experiment with GPT2-Large (fine-tuning on MRPC, SST-2, and Sentiment140).
    • code/squad_gpt-curlora.ipynb: Fine-tuning GPT2-Large for Q&A with CURLoRA and SFTTrainer on the SQuAD dataset (a rough sketch of this setup follows below).

The same notebooks are available for LoRA.
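
For orientation, the SQuAD fine-tuning in code/squad_gpt-curlora.ipynb can be sketched roughly as follows. This is a hypothetical sketch, not the notebook's exact code: it assumes a trl version in which SFTTrainer still accepts the tokenizer and dataset_text_field arguments directly, and the prompt format shown is illustrative.

    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
    from trl import SFTTrainer

    model = AutoModelForCausalLM.from_pretrained("gpt2-large")
    tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token

    # Flatten each SQuAD record into a single training text.
    squad = load_dataset("squad", split="train[:1%]")

    def to_text(ex):
        return {"text": f"Context: {ex['context']}\n"
                        f"Question: {ex['question']}\n"
                        f"Answer: {ex['answers']['text'][0]}"}

    squad = squad.map(to_text)

    # In the actual notebook, CURLoRA adapters are applied to the model
    # before training so that only the U matrices receive gradients.
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=squad,
        dataset_text_field="text",
        max_seq_length=512,
        args=TrainingArguments(
            output_dir="curlora-squad",
            per_device_train_batch_size=2,
            num_train_epochs=1,
        ),
    )
    trainer.train()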

Contributors:

Muhammad Fawi

Citation:

If you find the CURLoRA research or code helpful, please consider citing them.

  • Code:
Fawi, M. (2024). CURLoRA: Leveraging CUR Matrix Decomposition for Stable LLM Continual Fine-Tuning and Catastrophic Forgetting Mitigation (v4.0.0) [Computer software]. Zenodo. https://doi.org/10.5281/zenodo.12729738
  • Research:
Fawi, M. (2024). CURLoRA: Leveraging CUR Matrix Decomposition for Stable LLM Continual Fine-Tuning and Catastrophic Forgetting Mitigation. Zenodo. https://doi.org/10.5281/zenodo.12730055

Files (207.8 kB)

  • MNoorFawi/curlora-v4.0.0.zip (md5:8dbab44ebaf7d937f1bb812f45b7a58c)

Additional details

Software

  • Repository URL: https://github.com/MNoorFawi/curlora/
  • Programming language: Python