InternEvo

Latest News 🔥

  • 2024/08/29: InternEvo supports streaming datasets in the Hugging Face format. Detailed instructions for the data flow have been added.

  • 2024/04/17: InternEvo supports training models on NPU-910B clusters.

  • 2024/01/17: To delve deeper into the InternLM series of models, please check InternLM in our organization.

Introduction

InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencies. With a single codebase, it supports pre-training on large-scale clusters with thousands of GPUs and fine-tuning on a single GPU, while achieving remarkable performance optimizations. InternEvo achieves nearly 90% acceleration efficiency when training on 1024 GPUs.

Based on the InternEvo training framework, we are continually releasing a variety of large language models, including the InternLM-7B series and InternLM-20B series, which significantly outperform numerous renowned open-source LLMs such as LLaMA and other leading models in the field.

Installation

First, install the specified versions of torch, torchvision, torchaudio, and torch-scatter. For example:

pip install --extra-index-url https://download.pytorch.org/whl/cu118 torch==2.1.0+cu118 torchvision==0.16.0+cu118 torchaudio==2.1.0+cu118
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.1.0+cu118.html

Install InternEvo:

pip install InternEvo

Install flash-attention (v2.2.1):

If you need flash-attention to accelerate training and it is supported in your environment, install it as follows:

pip install flash-attn==2.2.1

For more detailed information about the installation environment or installing from source, please refer to the Install Tutorial.

Quick Start

Train Script

First, prepare the training script as train.py.

For a more detailed explanation, please refer to the Training Tutorial.

Data Preparation

Second, prepare the data for training or fine-tuning.

Download a dataset from Hugging Face, taking the roneneldan/TinyStories dataset as an example:

huggingface-cli download --repo-type dataset --resume-download "roneneldan/TinyStories" --local-dir "/mnt/petrelfs/hf-TinyStories"

Download the tokenizer to a local path. For example, download special_tokens_map.json, tokenizer.model, tokenizer_config.json, tokenization_internlm2.py, and tokenization_internlm2_fast.py from https://huggingface.co/internlm/internlm2-7b/tree/main to the local path /mnt/petrelfs/hf-internlm2-tokenizer.
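
If you prefer to script the download, the same files can be fetched with the huggingface_hub Python API. This is a minimal sketch, assuming the huggingface_hub package is installed and the local target directory matches the path used above:

from huggingface_hub import hf_hub_download

# Tokenizer files listed above, fetched one by one from the internlm2-7b repo.
files = [
    "special_tokens_map.json",
    "tokenizer.model",
    "tokenizer_config.json",
    "tokenization_internlm2.py",
    "tokenization_internlm2_fast.py",
]

for filename in files:
    hf_hub_download(
        repo_id="internlm/internlm2-7b",
        filename=filename,
        local_dir="/mnt/petrelfs/hf-internlm2-tokenizer",
    )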

Then modify the configuration file as follows:

TRAIN_FOLDER = "/mnt/petrelfs/hf-TinyStories"
data = dict(
    type="streaming",
    tokenizer_path="/mnt/petrelfs/hf-internlm2-tokenizer",
)
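
Before launching training, it can help to sanity-check that the downloaded tokenizer and the streaming dataset load correctly. The following is a minimal sketch, assuming the transformers and datasets packages are installed and that the dataset exposes a text field; the InternLM2 tokenizer ships custom tokenization code and therefore needs trust_remote_code=True:

from datasets import load_dataset
from transformers import AutoTokenizer

# Load the locally downloaded InternLM2 tokenizer (custom code, hence trust_remote_code).
tokenizer = AutoTokenizer.from_pretrained(
    "/mnt/petrelfs/hf-internlm2-tokenizer", trust_remote_code=True
)

# Stream the TinyStories dataset from the Hugging Face Hub and tokenize one sample.
stream = load_dataset("roneneldan/TinyStories", split="train", streaming=True)
sample = next(iter(stream))
print(tokenizer(sample["text"])["input_ids"][:16])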

For the preparation of other types of datasets, please refer to the Usage Tutorial.

Configuration File

The content of the configuration file is shown in 7B_sft.py.

For a more detailed introduction, please refer to the Usage Tutorial.

Train Start

Training can be started in either a Slurm or a torch distributed environment.

On Slurm, using 2 nodes and 16 GPUs, the command is as follows:

$ srun -p internllm -N 2 -n 16 --ntasks-per-node=8 --gpus-per-task=1 python train.py --config ./configs/7B_sft.py

On torch, using 1 node and 8 GPUs, the command is as follows:

$ torchrun --nnodes=1 --nproc_per_node=8 train.py --config ./configs/7B_sft.py --launcher "torch"

System Architecture

Please refer to the System Architecture document for architecture details.

Feature Zoo

InternEvo Feature Zoo (Data / Model / Parallel / Tool)

  • Data: Tokenized, Streaming
  • Parallel: ZeRO 1.5, 1F1B Pipeline Parallel, PyTorch FSDP Training, Megatron-LM Tensor Parallel (MTP), Megatron-LM Sequence Parallel (MSP), Flash-Attn Sequence Parallel (FSP), Intern Sequence Parallel (ISP); a configuration sketch follows this list
  • Tool: Memory Profiling
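
The parallel features above are selected through the parallel section of the configuration file. The sketch below is illustrative only: the field names are assumptions based on the 7B_sft.py example config and may differ between InternEvo versions, so check the config shipped with your installation:

# Illustrative parallel settings; field names assumed from the 7B_sft.py example config.
parallel = dict(
    zero1=dict(size=8),                               # ZeRO 1.5 partitioning group size
    tensor=dict(size=2, mode="isp"),                  # "mtp", "msp", "fsp", or "isp"
    pipeline=dict(size=1, interleaved_overlap=True),  # 1F1B pipeline parallelism
    weight=dict(size=4, overlap=True),                # weight parallel size used by ISP
)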

Common Tips

  • Parallel Computing Loss: link

Contribution

We appreciate all the contributors for their efforts to improve and enhance InternEvo. Community users are highly encouraged to participate in the project. Please refer to the contribution guidelines for instructions on how to contribute to the project.

Acknowledgements

The InternEvo codebase is an open-source project contributed by Shanghai AI Laboratory and researchers from different universities and companies. We would like to thank all the contributors for their support in adding new features to the project and the users for providing valuable feedback. We hope that this toolkit and benchmark can provide the community with flexible and efficient code tools for fine-tuning with InternEvo and developing their own models, thus continuously contributing to the open-source community. Special thanks to the two open-source projects, flash-attention and ColossalAI.

Citation

@misc{2023internlm,
    title={InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities},
    author={InternLM Team},
    howpublished = {\url{https://github.com/InternLM/InternLM}},
    year={2023}
}
