We should not expect Wall Street to open-source LLMs or open up APIs, due to FinTech institutions' internal regulations and policies.
Disclaimer: We are sharing code for academic purposes under the MIT education license. Nothing herein is financial advice, and it is NOT a recommendation to trade real money. Please use common sense and always consult a professional first before trading or investing.
1). Finance is highly dynamic. BloombergGPT was trained on a mixture of finance data and general-purpose data, which took about 53 days at a cost of around $3M. It is costly to retrain an LLM like BloombergGPT every month or every week, so lightweight adaptation is highly favorable. FinGPT can be fine-tuned swiftly to incorporate new data (the cost falls significantly, to less than $300 per fine-tuning).
2). Democratizing Internet-scale financial data is critical, e.g., allowing timely model updates (monthly or weekly) using an automatic data curation pipeline. BloombergGPT has privileged data access and APIs, while FinGPT presents a more accessible alternative. It prioritizes lightweight adaptation, leveraging the best available open-source LLMs.
3). The key technology is RLHF (Reinforcement Learning from Human Feedback), which is missing in BloombergGPT. RLHF enables an LLM to learn individual preferences (risk-aversion level, investing habits, personalized robo-advisor, etc.), which is the "secret" ingredient of ChatGPT and GPT-4.
FinGPT V3 (Updated on 8/4/2023)
- The FinGPT v3 series are LLMs fine-tuned with the LoRA method on news and tweet sentiment-analysis datasets, which achieve the best scores on most financial sentiment-analysis benchmarks at low cost.
- FinGPT v3.1 uses ChatGLM2-6B as the base model; FinGPT v3.2 uses Llama2-7B as the base model.
- Benchmark Results:

Weighted F1 | BloombergGPT | ChatGLM2 | Llama2 | FinGPT v3.1 | v3.1.1 (8bit) | v3.1.2 (QLoRA) | FinGPT v3.2 |
---|---|---|---|---|---|---|---|
FPB | 0.511 | 0.381 | 0.390 | 0.855 | 0.855 | 0.777 | 0.850 |
FiQA-SA | 0.751 | 0.790 | 0.800 | 0.850 | 0.847 | 0.752 | 0.860 |
TFNS | - | 0.189 | 0.296 | 0.875 | 0.879 | 0.828 | 0.894 |
NWGI | - | 0.449 | 0.503 | 0.642 | 0.632 | 0.583 | 0.636 |
Devices | 512 × A100 | 64 × A100 | 2048 × A100 | 8 × A100 | A100 | A100 | 8 × A100 |
Time | 53 days | 2.5 days | 21 days | 2 hours | 6.47 hours | 4.15 hours | 2 hours |
Cost | $2.67 million | $14,976 | $4.23 million | $65.6 | $25.88 | $17.01 | $65.6 |
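The metric above is weighted F1: per-class F1 averaged by each class's share of the true labels. A minimal pure-Python sketch on toy sentiment labels (illustrative data, not the benchmark sets):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged by each class's support in y_true."""
    support = Counter(y_true)
    total = 0.0
    for c in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += f1 * support[c] / len(y_true)
    return total

# Toy example: one neutral sentence misclassified as positive
y_true = ["positive", "negative", "neutral", "positive"]
y_pred = ["positive", "negative", "positive", "positive"]
print(round(weighted_f1(y_true, y_pred), 3))  # 0.65
```

Weighting by support is why a model that ignores a rare class can still score well on an imbalanced dataset.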
Cost per GPU hour: for A100 GPUs, the AWS p4d.24xlarge instance, equipped with 8 A100 GPUs, is used as the benchmark to estimate costs (note that BloombergGPT also used p4d.24xlarge). As of July 11, 2023, the hourly rate for this instance stands at $32.77. Dividing by 8 gives an estimated cost of approximately $4.10 per GPU hour, which serves as the reference unit price. BloombergGPT's estimated cost is then 512 GPUs × 53 days × 24 hours = 651,264 GPU hours × $4.10 = $2,670,182.40.
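The cost arithmetic above can be reproduced directly:

```python
# Reference unit price: AWS p4d.24xlarge (8x A100) at $32.77/hour
# as of July 11, 2023, i.e. roughly $4.10 per GPU hour.
PRICE_PER_GPU_HOUR = round(32.77 / 8, 2)  # $4.10

def training_cost(num_gpus, days=0, hours=0):
    """Estimated training cost: total GPU hours times the unit price."""
    gpu_hours = num_gpus * (days * 24 + hours)
    return gpu_hours * PRICE_PER_GPU_HOUR

# BloombergGPT: 512 GPUs x 53 days x 24 h = 651,264 GPU hours
print(f"BloombergGPT: ${training_cost(512, days=53):,.2f}")  # $2,670,182.40
# FinGPT v3.1: 8 GPUs x 2 hours
print(f"FinGPT v3.1: ${training_cost(8, hours=2):,.2f}")     # $65.60
```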
- Reproduce the results by running the benchmarks; a detailed tutorial is on the way.
- Fine-tune your own FinGPT v3 model with the LoRA method on a single RTX 3090, using this notebook (8-bit) or this notebook (int4, QLoRA).
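The reason a single RTX 3090 suffices: LoRA freezes the base weights and trains only low-rank adapter factors. A back-of-the-envelope sketch (the 4096×4096 layer shape and rank 8 are illustrative, not ChatGLM2's actual configuration):

```python
def lora_trainable_params(d_in, d_out, rank):
    """LoRA learns an update A @ B with A (d_in x rank) and
    B (rank x d_out), so only rank * (d_in + d_out) weights train
    while the d_in x d_out base matrix stays frozen."""
    return rank * (d_in + d_out)

# Illustrative: one 4096 x 4096 projection matrix at rank 8
full_params = 4096 * 4096                              # 16,777,216 frozen
adapter_params = lora_trainable_params(4096, 4096, 8)  # 65,536 trainable
print(f"trainable fraction: {adapter_params / full_params:.4%}")  # 0.3906%
```

Training well under 1% of the weights per layer is what shrinks both GPU memory and fine-tuning cost from millions of dollars to tens.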
- FinGPT by fine-tuning ChatGLM2 / Llama2 with LoRA on market-labeled data for the Chinese market
[Training] Beginner’s Guide to FinGPT: Training with LoRA and ChatGLM2–6B One Notebook, $10 GPU
- FinGPT: Powering the Future of Finance with 20 Cutting-Edge Applications
- FinGPT I: Why We Built the First Open-Source Large Language Model for Finance
- FinGPT II: Cracking the Financial Sentiment Analysis Task Using Instruction Tuning of General-Purpose Large Language Models
- Data source layer: This layer assures comprehensive market coverage, addressing the temporal sensitivity of financial data through real-time information capture.
- Data engineering layer: Primed for real-time NLP data processing, this layer tackles the inherent challenges of high temporal sensitivity and low signal-to-noise ratio in financial data.
- LLMs layer: Focusing on a range of fine-tuning methodologies such as LoRA, this layer mitigates the highly dynamic nature of financial data, ensuring the model’s relevance and accuracy.
- Task layer: This layer is responsible for executing fundamental tasks, which serve as the benchmarks for performance evaluations and cross-comparisons in the realm of FinLLMs.
- Application layer: Showcasing practical applications and demos, this layer highlights the potential capability of FinGPT in the financial sector.
- FinGPT Framework: Open-Source Financial Large Language Models
- FinGPT-RAG: We present a retrieval-augmented large language model framework specifically designed for financial sentiment analysis, optimizing information depth and context through external knowledge retrieval, thereby ensuring nuanced predictions.
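As a toy illustration of the retrieval-augmented idea (a keyword-overlap retriever standing in for a real one; the prompt wording is hypothetical, not FinGPT-RAG's actual pipeline):

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query (toy retriever;
    a real system would use embeddings or a search index)."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Prepend retrieved context so the LLM sees external knowledge."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: What is the sentiment of: {query}"

news = [
    "Acme Corp shares fall after earnings miss estimates",
    "Central bank holds rates steady amid inflation concerns",
]
print(build_prompt("Acme Corp earnings", news))
```

The retrieved headline gives the model the background it needs to ground a nuanced sentiment prediction instead of guessing from the query alone.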
- FinGPT-FinNLP: FinNLP provides a playground for everyone interested in LLMs and NLP in finance. We provide full pipelines for LLM training and fine-tuning in the field of finance. The full architecture is shown in the following picture. Detailed code and introductions can be found here, or you may refer to the wiki.
- FinGPT-Benchmark: We introduce a novel Instruction Tuning paradigm optimized for open-source Large Language Models (LLMs) in finance, enhancing their adaptability to diverse financial datasets while also facilitating cost-effective, systematic benchmarking from task-specific, multi-task, and zero-shot instruction tuning tasks.
- Feel free to contribute more open-source base models tailored for various language-specific financial markets.
Base Model | Pretraining Tokens | Context Length | Model Advantages | Model Size | Experiment Results | Applications |
---|---|---|---|---|---|---|
Llama-2 | 2 Trillion | 4096 | Llama-2 excels on English-based market data | Llama-2-7b and Llama-2-13b | Llama-2 consistently shows superior fine-tuning results | Financial Sentiment Analysis, Robo-Advisor |
Falcon | 1,500B | 2048 | Maintains high-quality results while being more resource-efficient | falcon-7b | Good for English market data | Financial Sentiment Analysis |
MPT | 1T | 2048 | MPT models can be trained with high throughput efficiency and stable convergence | mpt-7b | Good for English market data | Financial Sentiment Analysis |
Bloom | 366B | 2048 | World’s largest open multilingual language model | bloom-7b1 | Good for English market data | Financial Sentiment Analysis |
ChatGLM2 | 1.4T | 32K | Exceptional capability for Chinese language expression | chatglm2-6b | Shows prowess for Chinese market data | Financial Sentiment Analysis, Financial Report Summary |
Qwen | 2.2T | 8k | Fast response and high accuracy | qwen-7b | Effective for Chinese market data | Financial Sentiment Analysis |
InternLM | 1.8T | 8k | Can flexibly and independently construct workflows | internlm-7b | Effective for Chinese market data | Financial Sentiment Analysis |
- Benchmark results for the above open-source base models on the financial sentiment analysis task, using the same instruction template for SFT (LoRA):
Weighted F1/Acc | Llama2 | Falcon | MPT | Bloom | ChatGLM2 | Qwen | InternLM |
---|---|---|---|---|---|---|---|
FPB | 0.863/0.863 | 0.846/0.849 | 0.872/0.872 | 0.810/0.810 | 0.850/0.849 | 0.854/0.854 | 0.709/0.714 |
FiQA-SA | 0.871/0.855 | 0.840/0.811 | 0.863/0.844 | 0.771/0.753 | 0.864/0.862 | 0.867/0.851 | 0.679/0.687 |
TFNS | 0.896/0.895 | 0.893/0.893 | 0.907/0.907 | 0.840/0.840 | 0.859/0.858 | 0.883/0.882 | 0.729/0.731 |
NWGI | 0.649/0.651 | 0.636/0.638 | 0.640/0.641 | 0.573/0.574 | 0.619/0.629 | 0.638/0.643 | 0.498/0.503 |
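Keeping the instruction template identical across models is what makes the comparison fair. A minimal sketch of such a sentiment-classification template (the exact wording below is an assumption, not necessarily the template used in the benchmark runs):

```python
def format_instruction(text):
    """Wrap a financial sentence in an instruction-tuning prompt.
    Template wording is illustrative only."""
    return (
        "Instruction: What is the sentiment of this news? "
        "Please choose an answer from {negative/neutral/positive}.\n"
        f"Input: {text}\n"
        "Answer: "
    )

print(format_instruction("Company X's quarterly profit beat analyst expectations."))
```

During SFT, each training example pairs such a prompt with the gold label as the completion; at evaluation time the model's completion is compared against the label to compute F1 and accuracy.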
- Columbia Perspectives on ChatGPT
- [MIT Technology Review] ChatGPT is about to revolutionize the economy. We need to decide what that looks like
- [BloombergGPT] BloombergGPT: A Large Language Model for Finance
- [Finextra] ChatGPT and Bing AI to sit as panellists at fintech conference
- [YouTube video] I Built a Trading Bot with ChatGPT, combining ChatGPT and FinRL.
- Hey, ChatGPT! Explain FinRL code to me!
- Sparks of artificial general intelligence: Early experiments with GPT-4
- [GPT-4] GPT-4 Technical Report
- [InstructGPT] Training language models to follow instructions with human feedback NeurIPS 2022.
- The Journey of OpenAI GPT Models: GPT models explained (OpenAI's GPT-1, GPT-2, GPT-3).
- [GPT-3] Language models are few-shot learners NeurIPS 2020.
- [GPT-2] Language Models are Unsupervised Multitask Learners
- [GPT-1] Improving Language Understanding by Generative Pre-Training
- [Transformer] Attention is All you Need NeurIPS 2017.
- [BloombergGPT] BloombergGPT: A Large Language Model for Finance
- WHAT'S IN MY AI? A Comprehensive Analysis of Datasets Used to Train GPT-1, GPT-2, GPT-3, GPT-NeoX-20B, Megatron-11B, MT-NLG, and Gopher
- FinRL-Meta Repo and paper: FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning. Advances in Neural Information Processing Systems, 2022.
- [AI4Finance] FinNLP: Democratizing Internet-scale financial data.
- GPT-3 Creative Fiction Creative writing by OpenAI’s GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling. Plus advice on effective GPT-3 prompt programming & avoiding common errors.
ChatGPT Trading Bot
- [YouTube video] ChatGPT Trading strategy 20097% returns
- [YouTube video] ChatGPT Coding - Make A Profitable Trading Strategy In Five Minutes!
- [YouTube video] Easy Automated Live Trading using ChatGPT (+9660.3% hands free)
- [YouTube video] ChatGPT Trading Strategy 893% Returns
- [YouTube video] ChatGPT 10 Million Trading Strategy
- [YouTube video] ChatGPT: Your Crypto Assistant
- [YouTube video] Generate Insane Trading Returns with ChatGPT and TradingView
@article{yang2023fingpt,
title={FinGPT: Open-Source Financial Large Language Models},
author={Yang, Hongyang and Liu, Xiao-Yang and Wang, Christina Dan},
journal={FinLLM Symposium at IJCAI 2023},
year={2023}
}
@article{zhang2023instructfingpt,
title={Instruct-FinGPT: Financial Sentiment Analysis by Instruction Tuning of General-Purpose Large Language Models},
author={Boyu Zhang and Hongyang Yang and Xiao-Yang Liu},
journal={FinLLM Symposium at IJCAI 2023},
year={2023}
}
@article{zhang2023fingptrag,
title={Enhancing Financial Sentiment Analysis via Retrieval Augmented Large Language Models},
author={Zhang, Boyu and Yang, Hongyang and Zhou, Tianyu and Babar, Ali and Liu, Xiao-Yang},
journal={ACM International Conference on AI in Finance (ICAIF)},
year={2023}
}