What's different between this and the original:
- Flash Attention support.
- DeepSpeed support.
- Many bug fixes.
Notes:
- DeepSpeed ZeRO stage 3 doesn't support bitsandbytes 4-bit and 8-bit quantization, so this script automatically sets the bits to 16 when doing ZeRO stage 3 training.
- DeepSpeed ZeRO++ doesn't support BF16; you need to use FP16.
- You need a near-complete DeepSpeed config for ZeRO++ to work, or the script will throw a KeyError, because Accelerate doesn't automatically generate a config when you manually pass in a DeepSpeed config.
- FSDP is not supported.
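The ZeRO stage 3 fallback described above can be sketched as a small helper. This is an illustrative sketch, not the script's actual code: `resolve_bits` is a hypothetical name, and the assumption is that the stage is read from the `zero_optimization` section of the DeepSpeed config.

```python
def resolve_bits(bits, deepspeed_config):
    """Return the quantization bit-width to actually use.

    ZeRO stage 3 is incompatible with bitsandbytes 4/8-bit
    quantization, so fall back to 16-bit in that case.
    """
    stage = (deepspeed_config or {}).get("zero_optimization", {}).get("stage", 0)
    if stage == 3 and bits in (4, 8):
        return 16
    return bits
```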
PEFT training tested with:
- DDP, bitsandbytes 4-bit with compute dtype BF16: LLaMA 2 7B, 13B, 70B, and Mistral 7B on 8x A100 80GB (1 node).
- DeepSpeed ZeRO stage 2, bitsandbytes 4-bit with compute dtype BF16: Mistral 7B on 8x A100 80GB (1 node).
- DeepSpeed ZeRO stage 2, bitsandbytes 4-bit with compute dtype BF16: Mistral 7B on 16x A100 80GB (2 nodes).
- DeepSpeed ZeRO stage 3, BF16 (stage 3 doesn't support bitsandbytes 4-bit and 8-bit quantization): Falcon 180B on 8x A100 80GB (1 node).
- DeepSpeed ZeRO++, FP16 (ZeRO++ qwZ doesn't support BF16): Mistral 7B on 16x A100 80GB (2 nodes).
This repo supports the paper "QLoRA: Efficient Finetuning of Quantized LLMs", an effort to democratize access to LLM research.
QLoRA uses bitsandbytes for quantization and is integrated with Hugging Face's PEFT and transformers libraries. QLoRA was developed by members of the University of Washington's UW NLP group.
- 7/19/2023 - Added LLaMA 2 example script and updated version requirements.
- 7/18/2023 - Fixed non-frozen embeddings when adding new tokens.
We present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance. QLoRA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU. QLoRA introduces a number of innovations to save memory without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is information theoretically optimal for normally distributed weights, (b) Double Quantization to reduce the average memory footprint by quantizing the quantization constants, and (c) Paged Optimizers to manage memory spikes. We use QLoRA to finetune more than 1,000 models, providing a detailed analysis of instruction following and chatbot performance across 8 instruction datasets, multiple model types (LLaMA, T5), and model scales that would be infeasible to run with regular finetuning (e.g. 33B and 65B parameter models). Our results show that QLoRA finetuning on a small high-quality dataset leads to state-of-the-art results, even when using smaller models than the previous SoTA. We provide a detailed analysis of chatbot performance based on both human and GPT-4 evaluations showing that GPT-4 evaluations are a cheap and reasonable alternative to human evaluation. Furthermore, we find that current chatbot benchmarks are not trustworthy to accurately evaluate the performance levels of chatbots. We release all of our models and code, including CUDA kernels for 4-bit training.
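To illustrate the idea behind NF4 (this is a sketch of the principle, not the exact codebook bitsandbytes ships, which is built more carefully with asymmetric halves and an exact zero): pick 16 levels that are roughly equally probable under a standard normal distribution, then normalize them to [-1, 1].

```python
from statistics import NormalDist

def normal_float_levels(k=16):
    """Sketch of NF4-style levels: k quantiles of N(0, 1),
    taken at equally spaced probabilities and scaled to [-1, 1].
    Equal probability mass per level is what makes such a code
    well suited to normally distributed weights."""
    nd = NormalDist()
    probs = [(i + 0.5) / k for i in range(k)]
    levels = [nd.inv_cdf(p) for p in probs]
    scale = max(abs(x) for x in levels)
    return [x / scale for x in levels]

levels = normal_float_levels()
```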
We release the resources associated with QLoRA finetuning in this repository under MIT license. In addition, we release the Guanaco model family for base LLaMA model sizes of 7B, 13B, 33B, and 65B. These models are intended for purposes in line with the LLaMA license and require access to the LLaMA models.
Guanaco is a system purely intended for research purposes and could produce problematic outputs.
- Access the live demo here. Note this is the 33B model; the 65B model demo will come later.
- Or host your own Guanaco gradio demo directly in Colab with this notebook. Works with free GPUs for 7B and 13B models.
- Alternatively, can you distinguish ChatGPT from Guanaco? Give it a try! You can access the model response Colab here comparing ChatGPT and Guanaco 65B on Vicuna prompts.
To load models in 4-bit with transformers and bitsandbytes, you have to install accelerate and transformers from source and make sure you have the latest version of the bitsandbytes library. After installing PyTorch (follow the instructions here), you can achieve the above with the following command:

```bash
pip install -U -r requirements.txt
```
The `qlora.py` code is a starting point for finetuning and inference on various datasets.
Basic command for finetuning a baseline model on the Alpaca dataset:

```bash
python qlora.py --model_name_or_path <path_or_name>
```
For models larger than 13B, we recommend adjusting the learning rate:

```bash
python qlora.py --learning_rate 0.0001 --model_name_or_path <path_or_name>
```
To replicate our Guanaco models, see below.
Here is a blog discussing 4-bit quantization, QLoRA, and how they are integrated into transformers.
You can host your own gradio Guanaco demo directly in Colab following this notebook. In addition, here are Colab notebooks with examples for inference and finetuning using QLoRA:
Other examples are found under the `examples/` folder. We include a getting-started example for generation with Guanaco at `examples/guanaco_generate.py`.
Quantization parameters are controlled from the `BitsAndBytesConfig` (see the HF documentation) as follows:
- Loading in 4 bits is activated through `load_in_4bit`.
- The datatype used for the linear layer computations is set with `bnb_4bit_compute_dtype`.
- Nested quantization is activated through `bnb_4bit_use_double_quant`.
- The datatype used for quantization is specified with `bnb_4bit_quant_type`. Note that there are two supported quantization datatypes: `fp4` (four-bit float) and `nf4` (normal four-bit float). The latter is theoretically optimal for normally distributed weights, so we recommend using `nf4`.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    '/name/or/path/to/your/model',
    load_in_4bit=True,
    device_map='auto',
    max_memory=max_memory,  # per-device memory dict, as defined below
    torch_dtype=torch.bfloat16,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type='nf4',
    ),
)
```
You can access the paged optimizer with the argument `--optim paged_adamw_32bit`.
You can select `--dataset oasst1` to load the OpenAssistant dataset that was used to train Guanaco. You can also find it on HF at timdettmers/openassistant-guanaco.
We include scripts to reproduce the hyperparameters of Guanaco model training for various sizes at `./scripts/finetune_guanaco*.sh`. Make sure to adjust `per_device_train_batch_size` and `gradient_accumulation_steps` so that their product is 16 and training fits on your GPUs.
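The batch-size constraint above can be expressed as a tiny helper; the function name is illustrative, not part of the repo:

```python
def gradient_accumulation_steps(per_device_train_batch_size, target_batch=16):
    """Pick gradient_accumulation_steps so that
    per_device_train_batch_size * gradient_accumulation_steps == target_batch,
    matching the product of 16 used by the Guanaco training scripts."""
    if target_batch % per_device_train_batch_size != 0:
        raise ValueError("per_device_train_batch_size must divide the target batch size")
    return target_batch // per_device_train_batch_size
```

For example, with a per-device batch size of 4 you would pass `--gradient_accumulation_steps 4`.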
You can specify the path to your dataset using the `--dataset` argument. If the `--dataset_format` argument is not set, it will default to the Alpaca format. Here are a few examples:
- Training with an Alpaca-format dataset: `python qlora.py --dataset="path/to/your/dataset"`
- Training with a self-instruct-format dataset: `python qlora.py --dataset="path/to/your/dataset" --dataset_format="self-instruct"`
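For reference, a single training example in the Alpaca format uses `instruction`, `input`, and `output` fields; the contents below are made up for illustration:

```python
# A hypothetical single record in the Alpaca format.
# "input" may be an empty string when the instruction needs no extra context.
record = {
    "instruction": "Summarize the following sentence.",
    "input": "QLoRA finetunes quantized language models through LoRA adapters.",
    "output": "QLoRA enables efficient finetuning of quantized LLMs.",
}
```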
Multi-GPU training and inference work out of the box with Hugging Face's Accelerate. Note that the `per_device_train_batch_size` and `per_device_eval_batch_size` arguments are global batch sizes, unlike what their names suggest.
When loading a model for training or inference on multiple GPUs you should pass something like the following to `AutoModelForCausalLM.from_pretrained()`:
```python
device_map = "auto"
max_memory = {i: '46000MB' for i in range(torch.cuda.device_count())}
```
Here is a list of known issues and bugs. If your issue is not reported here, please open a new issue and describe the problem.
- 4-bit inference is slow. Currently, our 4-bit inference implementation is not yet integrated with the 4-bit matrix multiplication.
- Resuming a LoRA training run with the Trainer is currently not supported by HF.
- Currently, using `bnb_4bit_compute_type='fp16'` can lead to instabilities. For 7B LLaMA, only 80% of finetuning runs complete without error. We have solutions, but they are not yet integrated into bitsandbytes.
- Make sure that `tokenizer.bos_token_id = 1` to avoid generation issues.
- If you get an "illegal memory access" error (see this issue), you should use a newer HF LLaMA conversion or downgrade your PyTorch version.
- Problems with adding new tokens are outlined in #214. Embeddings need to be updated and stored/reloaded if you are adding new tokens.
```bibtex
@article{dettmers2023qlora,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}
```
We thank the Hugging Face team, in particular Younes Belkada, for their support integrating QLoRA with the PEFT and transformers libraries. We also thank Meta for releasing the LLaMA models, without which this work would not have been possible.
This repo builds on the Stanford Alpaca and LMSYS FastChat repos.