forked from modelscope/ms-swift
Commit
support qwen1.5-moe model (modelscope#627)
Showing 10 changed files with 164 additions and 1 deletion.
examples/pytorch/llm/scripts/qwen1half_moe_a2_7b/lora/infer.sh (14 additions, 0 deletions)
@@ -0,0 +1,14 @@
# Experimental environment: A100
# 36GB GPU memory
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_infer.py \
    --ckpt_dir "output/qwen1half-moe-a2_7b/vx-xxx/checkpoint-xxx" \
    --load_dataset_config true \
    --use_flash_attn true \
    --max_new_tokens 2048 \
    --temperature 0.1 \
    --top_p 0.7 \
    --repetition_penalty 1. \
    --do_sample true \
    --merge_lora false \
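
The script above keeps --merge_lora false, so the LoRA adapter is loaded on top of the base weights at runtime. A minimal sketch of the merged-weights variant, using only flags already present in the script; "vx-xxx/checkpoint-xxx" stays a placeholder:

# Sketch: merge the LoRA adapter into the base weights before inference,
# producing a standalone checkpoint instead of adapter-on-top loading.
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_infer.py \
    --ckpt_dir "output/qwen1half-moe-a2_7b/vx-xxx/checkpoint-xxx" \
    --load_dataset_config true \
    --merge_lora true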
examples/pytorch/llm/scripts/qwen1half_moe_a2_7b/lora/sft.sh (31 additions, 0 deletions)
@@ -0,0 +1,31 @@
# Experimental environment: A100
# 42GB GPU memory
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_sft.py \
    --model_type qwen1half-moe-a2_7b \
    --sft_type lora \
    --tuner_backend swift \
    --dtype AUTO \
    --output_dir output \
    --dataset dureader-robust-zh \
    --train_dataset_sample -1 \
    --num_train_epochs 1 \
    --max_length 1024 \
    --check_dataset_strategy warning \
    --lora_rank 8 \
    --lora_alpha 32 \
    --lora_dropout_p 0.05 \
    --lora_target_modules ALL \
    --gradient_checkpointing true \
    --batch_size 1 \
    --weight_decay 0.1 \
    --learning_rate 1e-4 \
    --gradient_accumulation_steps 16 \
    --max_grad_norm 0.5 \
    --warmup_ratio 0.03 \
    --eval_steps 100 \
    --save_steps 100 \
    --save_total_limit 2 \
    --logging_steps 10 \
    --use_flash_attn true \
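
The effective batch size here is batch_size 1 x gradient_accumulation_steps 16 = 16. A hedged variant for a GPU with headroom beyond the 42GB noted above, trading accumulation for per-device batch size so the effective batch stays 16 (other flags as in the script):

# Sketch (assumption: memory allows a per-device batch of 2); 2 x 8 = 16 as before.
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_sft.py \
    --model_type qwen1half-moe-a2_7b \
    --sft_type lora \
    --dataset dureader-robust-zh \
    --lora_target_modules ALL \
    --batch_size 2 \
    --gradient_accumulation_steps 8 \
    --output_dir output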
examples/pytorch/llm/scripts/qwen1half_moe_a2_7b_chat/lora/infer.sh (14 additions, 0 deletions)
@@ -0,0 +1,14 @@
# Experimental environment: A100
# 36GB GPU memory
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_infer.py \
    --ckpt_dir "output/qwen1half-moe-a2_7b-chat/vx-xxx/checkpoint-xxx" \
    --load_dataset_config true \
    --use_flash_attn true \
    --max_new_tokens 2048 \
    --temperature 0.1 \
    --top_p 0.7 \
    --repetition_penalty 1. \
    --do_sample true \
    --merge_lora false \
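
With do_sample true, temperature 0.1 and top_p 0.7 already keep generation close to greedy. A sketch of fully deterministic decoding instead, assuming standard HF generate semantics where sampling parameters are ignored once sampling is disabled:

# Sketch: greedy decoding; temperature/top_p have no effect with sampling off.
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_infer.py \
    --ckpt_dir "output/qwen1half-moe-a2_7b-chat/vx-xxx/checkpoint-xxx" \
    --load_dataset_config true \
    --max_new_tokens 2048 \
    --do_sample false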
examples/pytorch/llm/scripts/qwen1half_moe_a2_7b_chat/lora/sft.sh (31 additions, 0 deletions)
@@ -0,0 +1,31 @@
# Experimental environment: A100
# 42GB GPU memory
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_sft.py \
    --model_type qwen1half-moe-a2_7b-chat \
    --sft_type lora \
    --tuner_backend swift \
    --dtype AUTO \
    --output_dir output \
    --dataset blossom-math-zh \
    --train_dataset_sample -1 \
    --num_train_epochs 1 \
    --max_length 1024 \
    --check_dataset_strategy warning \
    --lora_rank 8 \
    --lora_alpha 32 \
    --lora_dropout_p 0.05 \
    --lora_target_modules ALL \
    --gradient_checkpointing true \
    --batch_size 1 \
    --weight_decay 0.1 \
    --learning_rate 1e-4 \
    --gradient_accumulation_steps 16 \
    --max_grad_norm 0.5 \
    --warmup_ratio 0.03 \
    --eval_steps 100 \
    --save_steps 100 \
    --save_total_limit 2 \
    --logging_steps 10 \
    --use_flash_attn true \
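
With --save_steps 100 and --save_total_limit 2, a recent checkpoint is usually on disk if a run dies. A sketch of resuming, under the assumption that llm_sft.py passes --resume_from_checkpoint through to the underlying HF Trainer; neither the flag nor the path is confirmed by this diff:

# Sketch: resume an interrupted run from the last saved step.
# --resume_from_checkpoint is assumed here; the path is a placeholder.
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_sft.py \
    --model_type qwen1half-moe-a2_7b-chat \
    --sft_type lora \
    --dataset blossom-math-zh \
    --resume_from_checkpoint "output/qwen1half-moe-a2_7b-chat/vx-xxx/checkpoint-xxx" \
    --output_dir output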
examples/pytorch/llm/scripts/qwen1half_moe_a2_7b_chat_int4/qlora/infer.sh (12 additions, 0 deletions)
@@ -0,0 +1,12 @@
# Experimental environment: A100
CUDA_VISIBLE_DEVICES=0 \
swift infer \
    --ckpt_dir "output/qwen1half-moe-a2_7b-chat-int4/vx-xxx/checkpoint-xxx" \
    --load_dataset_config true \
    --use_flash_attn true \
    --max_new_tokens 2048 \
    --temperature 0.1 \
    --top_p 0.7 \
    --repetition_penalty 1. \
    --do_sample true \
    --merge_lora false \
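
merge_lora stays false here for a reason: merging an adapter back into GPTQ int4 weights is generally not supported, so the adapter is kept separate at inference. To sanity-check the base quantized model before any fine-tuning, the CLI can presumably be pointed at a model type rather than a checkpoint; a sketch under that assumption:

# Sketch: inference on the untuned int4 base model.
# Using --model_type in place of --ckpt_dir is an assumed interface here.
CUDA_VISIBLE_DEVICES=0 \
swift infer \
    --model_type qwen1half-moe-a2_7b-chat-int4 \
    --use_flash_attn true \
    --max_new_tokens 2048 \
    --temperature 0.1 \
    --top_p 0.7 \
    --do_sample true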
examples/pytorch/llm/scripts/qwen1half_moe_a2_7b_chat_int4/qlora/sft.sh (28 additions, 0 deletions)
@@ -0,0 +1,28 @@
# Experimental environment: A100
# 17GB GPU memory

CUDA_VISIBLE_DEVICES=0 \
swift sft \
    --model_type qwen1half-moe-a2_7b-chat-int4 \
    --sft_type lora \
    --output_dir output \
    --dataset blossom-math-zh \
    --train_dataset_sample -1 \
    --num_train_epochs 3 \
    --max_length 2048 \
    --lora_rank 8 \
    --lora_alpha 32 \
    --lora_dropout_p 0.05 \
    --lora_target_modules ALL \
    --gradient_checkpointing true \
    --batch_size 1 \
    --weight_decay 0.1 \
    --learning_rate 1e-4 \
    --gradient_accumulation_steps 16 \
    --max_grad_norm 0.5 \
    --warmup_ratio 0.03 \
    --eval_steps 100 \
    --save_steps 100 \
    --save_total_limit 2 \
    --logging_steps 10 \
    --use_flash_attn true \
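
At 17GB this QLoRA run fits comfortably on a single A100, so scaling out is mainly about wall-clock time. A sketch of a two-GPU data-parallel variant, assuming swift's launcher honors NPROC_PER_NODE for DDP; accumulation is halved so the effective batch stays at 2 x 1 x 8 = 16:

# Sketch (assumption: NPROC_PER_NODE spawns one DDP worker per listed GPU).
# Other flags as in the script above.
NPROC_PER_NODE=2 \
CUDA_VISIBLE_DEVICES=0,1 \
swift sft \
    --model_type qwen1half-moe-a2_7b-chat-int4 \
    --sft_type lora \
    --dataset blossom-math-zh \
    --batch_size 1 \
    --gradient_accumulation_steps 8 \
    --output_dir output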