
Commit

Update learning rate
zRzRzRzRzRzRzR committed Mar 28, 2024
1 parent 0a76162 commit b54a62d
Showing 3 changed files with 4 additions and 4 deletions.
4 changes: 2 additions & 2 deletions finetune_demo/configs/lora.yaml
@@ -10,7 +10,7 @@ training_args:
   output_dir: ./output
   max_steps: 3000
   # needed to be fit for the dataset
-  learning_rate: 3e-4
+  learning_rate: 5e-5
   # settings for data loading
   per_device_train_batch_size: 4
   dataloader_num_workers: 16
@@ -41,6 +41,6 @@ training_args:
 peft_config:
   peft_type: LORA
   task_type: CAUSAL_LM
-  r: 32
+  r: 8
   lora_alpha: 32
   lora_dropout: 0.1
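
The smaller rank maps directly onto the peft-style config schema this YAML block follows. A minimal sketch of the equivalent config object, assuming Hugging Face's peft library and an already-loaded causal LM (the model variable is hypothetical):

import torch
from peft import LoraConfig, TaskType, get_peft_model

# Mirrors the updated peft_config block in lora.yaml.
# Dropping r from 32 to 8 cuts each adapter pair from 32*(d_in + d_out)
# to 8*(d_in + d_out) trainable parameters; with lora_alpha fixed at 32,
# the effective scaling factor lora_alpha / r rises from 1 to 4.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)

# model = get_peft_model(model, lora_config)  # wrap the loaded base model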
2 changes: 1 addition & 1 deletion finetune_demo/configs/ptuning_v2.yaml
@@ -10,7 +10,7 @@ training_args:
   output_dir: ./output
   max_steps: 3000
   # needed to be fit for the dataset
-  learning_rate: 3e-4
+  learning_rate: 5e-5
   # settings for data loading
   per_device_train_batch_size: 4
   dataloader_num_workers: 16
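
The ptuning_v2.yaml hunk is the same one-line learning-rate change. A quick way to sanity-check the value after pulling the commit; a minimal sketch, assuming PyYAML and the repository root as the working directory:

import yaml

with open("finetune_demo/configs/ptuning_v2.yaml") as f:
    cfg = yaml.safe_load(f)

# PyYAML reads exponent notation without a decimal point ("5e-5")
# as a string, so cast before using the value numerically.
lr = float(cfg["training_args"]["learning_rate"])
print(lr)  # 5e-05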
2 changes: 1 addition & 1 deletion finetune_demo/configs/sft.yaml
@@ -10,7 +10,7 @@ training_args:
   output_dir: ./output
   max_steps: 3000
   # needed to be fit for the dataset
-  learning_rate: 3e-4
+  learning_rate: 5e-5
   # settings for data loading
   per_device_train_batch_size: 4
   dataloader_num_workers: 16
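
All three configs make the same substantive change: the learning rate drops from 3e-4 to 5e-5, a common starting point for fine-tuning a pretrained checkpoint (and the transformers default), while the rest of training_args is untouched. For orientation, a minimal sketch of the corresponding transformers.TrainingArguments, assuming the demo maps these YAML keys onto it one-to-one and showing only keys visible in the diff:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",
    max_steps=3000,
    learning_rate=5e-5,  # lowered from 3e-4 in this commit
    per_device_train_batch_size=4,
    dataloader_num_workers=16,
)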
