Commit 57567ad: add citation
boyuZh committed Aug 22, 2023 (1 parent: f04561a)
Showing 6 changed files with 130 additions and 2 deletions.
8 changes: 8 additions & 0 deletions README.md
@@ -212,6 +212,14 @@ on Reasoning, Hallucination, and Interactivity](https://arxiv.org/pdf/2302.04023
journal={FinLLM Symposium at IJCAI 2023},
year={2023}
}
+@misc{zhang2023instructfingpt,
+      title={Instruct-FinGPT: Financial Sentiment Analysis by Instruction Tuning of General-Purpose Large Language Models},
+      author={Boyu Zhang and Hongyang Yang and Xiao-Yang Liu},
+      year={2023},
+      eprint={2306.12659},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
```

## Links
89 changes: 89 additions & 0 deletions fingpt/FinGPT_sentiment/instruct-FinGPT/nohup.out
@@ -0,0 +1,89 @@
usage: train.py [-h]
[--actor-model {1.3b,6.7b,13b,66b,llama-13b,sent-1.3b,sent-llama-7b,sent-llama2-7b}]
[--actor-zero-stage {,0,1,2,3}] [--output-dir OUTPUT_DIR]
[--deployment-type {single_gpu,single_node,multi_node}]
train.py: error: argument --actor-model: invalid choice: 'facebook/sent-opt-1.3b' (choose from '1.3b', '6.7b', '13b', '66b', 'llama-13b', 'sent-1.3b', 'sent-llama-7b', 'sent-llama2-7b')
---=== Running Step Instruction Tuning ===---
bash /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_node/run_sent-1.3b.sh /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b
Running:
bash /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_node/run_sent-1.3b.sh /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b
Traceback (most recent call last):
File "/hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/train.py", line 158, in <module>
main(args)
File "/hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/train.py", line 146, in main
launch_cmd(args, step_num, cmd)
File "/hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/train.py", line 123, in launch_cmd
raise RuntimeError('\n\n'.join((
RuntimeError: Step 1 exited with non-zero status 1

Launch command: bash /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_node/run_sent-1.3b.sh /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b

Log output: /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b/training.log

Please see our tutorial at https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat/training/supervised_finetuning

Please check that you have installed our requirements: `pip install -r requirements.txt`

If you are seeing an OOM error, try modifying /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_node/run_sent-1.3b.sh:

- Reduce `--per_device_*_batch_size`

- Increase `--zero_stage {0,1,2,3}` on multi-gpu setups

- Enable `--gradient_checkpointing` or `--only_optimizer_lora`
---=== Running Step Instruction Tuning ===---
bash /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_gpu/run_sent-1.3b.sh /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b
Running:
bash /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_gpu/run_sent-1.3b.sh /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b
Traceback (most recent call last):
File "/hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/train.py", line 158, in <module>
main(args)
File "/hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/train.py", line 146, in main
launch_cmd(args, step_num, cmd)
File "/hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/train.py", line 123, in launch_cmd
raise RuntimeError('\n\n'.join((
RuntimeError: Step 1 exited with non-zero status 2

Launch command: bash /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_gpu/run_sent-1.3b.sh /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b

Log output: /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b/training.log

Please see our tutorial at https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat/training/supervised_finetuning

Please check that you have installed our requirements: `pip install -r requirements.txt`

If you are seeing an OOM error, try modifying /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_gpu/run_sent-1.3b.sh:

- Reduce `--per_device_*_batch_size`

- Increase `--zero_stage {0,1,2,3}` on multi-gpu setups

- Enable `--gradient_checkpointing` or `--only_optimizer_lora`
---=== Running Step Instruction Tuning ===---
bash /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_gpu/run_sent-1.3b.sh /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b
Running:
bash /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_gpu/run_sent-1.3b.sh /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b
Traceback (most recent call last):
File "/hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/train.py", line 158, in <module>
main(args)
File "/hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/train.py", line 146, in main
launch_cmd(args, step_num, cmd)
File "/hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/train.py", line 123, in launch_cmd
raise RuntimeError('\n\n'.join((
RuntimeError: Step 1 exited with non-zero status 1

Launch command: bash /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_gpu/run_sent-1.3b.sh /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b

Log output: /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/checkpoints/actor-models/sent-1.3b/training.log

Please see our tutorial at https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat/training/supervised_finetuning

Please check that you have installed our requirements: `pip install -r requirements.txt`

If you are seeing an OOM error, try modifying /hpcfs/users/a1232991/FinGPT/fingpt/FinGPT_sentiment/instruct-FinGPT/training/supervised_finetuning/training_scripts/single_gpu/run_sent-1.3b.sh:

- Reduce `--per_device_*_batch_size`

- Increase `--zero_stage {0,1,2,3}` on multi-gpu setups

- Enable `--gradient_checkpointing` or `--only_optimizer_lora`
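Note on the log above: the first failure is pure argument parsing. `--actor-model` only accepts the short names listed in the usage message, so the full Hub id `facebook/sent-opt-1.3b` is rejected before anything runs; a launch such as `python train.py --actor-model sent-1.3b --deployment-type single_gpu --output-dir checkpoints` should get past the parser. The three later attempts fail inside the launched run script itself (exit statuses 1, 2, 1), with the actual errors captured in the referenced training.log. A minimal reconstruction of the parsing behavior, assuming train.py defines the flag with an argparse `choices` list matching the usage message (the real definition is not part of this commit):

```python
import argparse

# Hypothetical reconstruction of train.py's --actor-model flag,
# using the choices printed in the usage message above.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--actor-model",
    choices=["1.3b", "6.7b", "13b", "66b", "llama-13b",
             "sent-1.3b", "sent-llama-7b", "sent-llama2-7b"],
)

# A full Hub id is not in the choices list, so argparse prints the same
# "invalid choice" error seen in the log and exits with status 2.
parser.parse_args(["--actor-model", "facebook/sent-opt-1.3b"])
```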
2 changes: 1 addition & 1 deletion fingpt/FinGPT_sentiment/instruct-FinGPT/train.sh
@@ -2,4 +2,4 @@

# export HF_HOME=/path/to/huggingface/

-python train.py --actor-model decapoda-research/sent-llama-7b-hf --deployment-type single_node --output-dir checkpoints
+python train.py --actor-model facebook/opt-sent-1.3b --deployment-type single_gpu --output-dir checkpoints
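Caution: judging from the usage message captured in nohup.out above, this new value would be rejected just as `facebook/sent-opt-1.3b` was, since `--actor-model` is restricted to short names; `--actor-model sent-1.3b` appears to be what the parser expects for this model.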
@@ -7,6 +7,7 @@
import os
import math
import sys
+from datasets import load_dataset

import torch
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
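The newly imported `load_dataset` suggests this file now reads the sentiment corpora through the Hugging Face `datasets` library. A minimal sketch of that pattern, using the two dataset ids named in the run scripts (the actual loading code sits below the shown hunk and may differ):

```python
from datasets import load_dataset

# The two corpora referenced in the run scripts, loaded by Hub id;
# this requires network access to the Hugging Face Hub.
twitter = load_dataset("zeroshot/twitter-financial-news-sentiment")
kaggle = load_dataset("chiapudding/kaggle-financial-sentiment")

print(twitter)  # DatasetDict showing the available splits
```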
@@ -13,8 +13,10 @@ if [ "$ZERO_STAGE" == "" ]; then
fi
mkdir -p $OUTPUT

+# --data_path zeroshot/twitter-financial-news-sentiment chiapudding/kaggle-financial-sentiment \
+
deepspeed main.py \
-   --data_path zeroshot/twitter-financial-news-sentiment chiapudding/kaggle-financial-sentiment \
+   --data_path /hpcfs/users/a1232991/.cache/huggingface/datasets/chiapudding___kaggle-financial-sentiment /hpcfs/users/a1232991/.cache/huggingface/datasets/zeroshot___twitter-financial-news-sentiment \
   --data_split 10,0,0 \
   --model_name_or_path decapoda-research/llama-7b-hf \
   --per_device_train_batch_size 4 \
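Swapping the Hub ids for absolute paths under `~/.cache/huggingface/datasets` points the script at dataset copies already cached on the cluster, presumably so compute nodes without outbound internet access can still train; the original ids are kept in the comment above for reference.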
@@ -0,0 +1,28 @@
#!/bin/bash
# Copyright (c) Microsoft Corporation.
# SPDX-License-Identifier: Apache-2.0

# DeepSpeed Team
OUTPUT=$1
if [ "$OUTPUT" == "" ]; then
OUTPUT=./output
fi
mkdir -p $OUTPUT

deepspeed --num_gpus 1 main.py \
   --data_path zeroshot/twitter-financial-news-sentiment chiapudding/kaggle-financial-sentiment \
   --data_split 2,4,4 \
   --model_name_or_path facebook/opt-1.3b \
   --per_device_train_batch_size 8 \
   --per_device_eval_batch_size 8 \
   --max_seq_len 512 \
   --learning_rate 9.65e-6 \
   --weight_decay 0.1 \
   --num_train_epochs 2 \
   --gradient_accumulation_steps 1 \
   --lr_scheduler_type cosine \
   --num_warmup_steps 0 \
   --seed 1234 \
   --deepspeed \
   --output_dir $OUTPUT \
   &> $OUTPUT/training.log
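As the nohup.out log shows, train.py launches this new single-GPU script (evidently the `single_gpu/run_sent-1.3b.sh` the log refers to) as `bash <script> <output-dir>`; run standalone, the output directory defaults to `./output`. Since stdout and stderr are redirected to `$OUTPUT/training.log`, failures surface in that log rather than on the console, which matches the launcher messages above.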
