
Commit

Fix a warning (tloen#186)
Avoids the warning:

"Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning."
AngainorDev authored Mar 27, 2023
1 parent dbd04f3 commit 69b9d9e
Showing 1 changed file with 1 addition and 0 deletions.
1 change: 1 addition & 0 deletions finetune.py
@@ -108,6 +108,7 @@ def train(
     model = LlamaForCausalLM.from_pretrained(
         base_model,
         load_in_8bit=True,
+        torch_dtype=torch.float16,
         device_map=device_map,
     )
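For context, the patched call looks roughly like the sketch below. This is illustrative, not a verbatim excerpt: the `base_model` value and `device_map` setting are hypothetical placeholders, and 8-bit loading additionally requires the `bitsandbytes` package and a CUDA device. Passing `torch_dtype=torch.float16` explicitly matches the dtype that `bitsandbytes` would otherwise force, so `transformers` no longer emits the override warning.

```python
import torch
from transformers import LlamaForCausalLM

# Hypothetical values; in finetune.py these are function parameters.
base_model = "path/to/llama-base-model"
device_map = "auto"

model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    # Explicit float16 matches what bitsandbytes requires for mixed
    # int8 loading, avoiding the "Overriding torch_dtype=None" warning.
    torch_dtype=torch.float16,
    device_map=device_map,
)
```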

