Public data sources like Yahoo are flawed: they may be missing data for delisted stocks, and the data they do have may be wrong. This can introduce survivorship bias into our training process.
The crowd-sourced data set is introduced to merge data from multiple sources and cross-validate them against each other, so that:
- We get a more complete historical record, including delisted stocks.
- We can identify anomalous data and apply corrections when necessary (see the sketch after this list).
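The actual merging and validation logic lives in the SQL and scripts linked below; purely as an illustration of the cross-validation idea, here is a minimal, hypothetical Python sketch that compares one value reported by several sources and flags outliers against the cross-source median (the source names and tolerance are made up for the example):

```python
import statistics

# Hypothetical sketch of cross-source validation (not the repo's actual
# pipeline): compare one field reported by several providers and flag
# values that deviate too far from the cross-source median.
def cross_validate(values_by_source: dict, tolerance: float = 0.01):
    """Return (consensus, anomalies) for one (stock, date, field) record."""
    consensus = statistics.median(values_by_source.values())
    anomalies = {
        source: value
        for source, value in values_by_source.items()
        if abs(value - consensus) > tolerance * abs(consensus)
    }
    return consensus, anomalies

# Example: three providers report the same close price; one is ~5% off.
consensus, anomalies = cross_validate(
    {"yahoo": 10.52, "provider_a": 10.51, "provider_b": 11.05}
)
print(consensus)  # 10.52 -> used as the consensus value
print(anomalies)  # {'provider_b': 11.05} -> flagged for correction
```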
The raw data is hosted in a DoltHub repo: https://www.dolthub.com/repositories/chenditc/investment_data
The processing scripts and SQL are hosted in a GitHub repo: https://github.com/chenditc/investment_data
The packaged Docker runtime is hosted on Docker Hub: https://hub.docker.com/repository/docker/chenditc/investment_data
Users can download the data in Qlib bin format and use it directly: https://github.com/chenditc/investment_data/releases/tag/20220720
```bash
# Create the target directory if it does not exist yet, then download
# and extract the snapshot into Qlib's default cn_data location.
mkdir -p ~/.qlib/qlib_data/cn_data
wget https://github.com/chenditc/investment_data/releases/download/20220720/qlib_bin.tar.gz
tar -zxvf qlib_bin.tar.gz -C ~/.qlib/qlib_data/cn_data --strip-components=2
```
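After extraction, the data can be loaded through Qlib's standard API. A minimal smoke test, assuming Qlib is installed and the default `~/.qlib/qlib_data/cn_data` path from the command above (the instrument `SH600000` is just an example):

```python
import qlib
from qlib.config import REG_CN
from qlib.data import D

# Point Qlib at the extracted crowd-sourced data.
qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region=REG_CN)

# Read a small slice of the data to confirm it loaded correctly.
df = D.features(
    ["SH600000"], ["$close", "$volume"],
    start_time="2020-01-01", end_time="2020-01-31",
)
print(df.head())
```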
The DoltHub data is updated daily, so users who want up-to-date data can dump the Qlib bin themselves using Docker:
```bash
# Run both the dump and the copy inside the container, so that
# qlib_bin.tar.gz lands in the mounted host output directory.
docker run -v /<some output directory>:/output -it --rm chenditc/investment_data \
  bash -c "bash dump_qlib_bin.sh && cp ./qlib_bin.tar.gz /output/"
```
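Once the container exits, the output directory contains a fresh qlib_bin.tar.gz, which can be extracted into ~/.qlib/qlib_data/cn_data with the same tar command shown above.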
See: https://github.com/chenditc/investment_data/blob/main/README.md