The test module supports:
- Unit testing for different backends and operation modes.
- Functional testing for multiple models, strategies, and heterogeneous hardware.
- Monitoring incremental code test coverage and viewing test reports online.
- Ensuring code style consistency.
This section introduces how to use these features.
```shell
tests/scripts/unit_tests/test_subset.sh --backend ${BACKEND} --subset ${SUBSET}
```
Please set the following variables:

- `BACKEND`: Specifies the backend for unit testing, either `megatron` or `flagscale`.
- `SUBSET`: The directory for unit tests. Check the directories within `tests/unit_tests` and `megatron/tests/unit_tests` for specific folders. Note: `./` represents the root directory of the above folders.
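As a concrete illustration of how the variables substitute into the command (the values below are just examples; pick any directory listed under `tests/unit_tests`), the invocation is composed like this:

```shell
# Illustrative values: the flagscale backend with the root subset,
# which covers everything directly under tests/unit_tests
BACKEND=flagscale
SUBSET=./
# Echo the composed command so the substitution is visible;
# drop the echo to actually run the tests
echo tests/scripts/unit_tests/test_subset.sh --backend "${BACKEND}" --subset "${SUBSET}"
```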
```shell
tests/scripts/unit_tests/test_all.sh
```
When you create a PR from your forked repository, the testing workflow triggers automatically. Find the action corresponding to your PR in All Tests to view the results.
- Adding a Single Unit Test File
  - Directly add a file named `test_${NEW_TEST}.py` in the appropriate directory. `NEW_TEST` refers to the name of the new test.
- Adding a Unit Test Directory
  - Add a test directory and files in the appropriate location. For `flagscale`, the path is `tests/unit_tests/${NEW_FOLD}`; for `megatron`, it's `megatron/tests/unit_tests/${NEW_FOLD}`. `NEW_FOLD` refers to the name of the new test folder.
  - Update the configuration file `tests/scripts/unit_tests/config.yml` to include configuration for the directory, specifying `ignore`, `type`, and `depth` as needed. Unspecified parameters default to pre-defined settings. Below is the configuration file explanation:

    ```yaml
    # backend: The backend for unit testing, either flagscale or megatron
    megatron:
      # Set the environment required before running unit tests
      set_environment: cd megatron; export PYTHONPATH=..:$PYTHONPATH
      # Specify the target folder for test coverage
      coverage: core
      # Select different tests for different test directories
      subset:
        # Use default configuration if not shown
        dist_checkpointing:
          # Files to ignore during testing
          ignore: models/test_mamba.py
        models:
          ignore: test_mamba_model.py
        transformer/moe:
          # Test mode:
          #   batch (default): Run all test files at once
          #   single: Run each test file individually
          # NOTE: Batch mode runs faster; single mode avoids environment interference among tests
          type: single
          ignore: test_upcycling.py
        transformer:
          # Test depth:
          #   all (default): All test files within the directory
          #   Integer: Test files at the specified path depth
          # NOTE: Useful for running test files within a folder, rather than in subdirectories
          depth: 1
    ...
    ```
- Online Test Configuration

  Modify the workflow configuration in `.github/workflows/all-tests.yml` to activate online testing:

  ```yaml
  ...
  # Megatron Unit Tests with Matrix
  megatron-unit-tests:
    uses: ./.github/workflows/unit-tests.yml
    strategy:
      matrix:
        subset:
          # Add your new folder if you have a new test directory
          - {NEW_FOLD}
          - data
          - dist_checkpointing
          - distributed
          - fusions
          - inference
          - models
          - pipeline_parallel
          - tensor_parallel
          - transformer/moe
          - transformer
          - ./
    name: "megatron-${{ matrix.subset == './' && 'root' || matrix.subset }}"
    with:
      backend: megatron
      subset: ${{ matrix.subset }}
  ...
  ```
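As a minimal sketch of a `config.yml` entry for a newly added directory (the folder name `my_new_fold` and the ignored file are hypothetical, and the `flagscale` section is assumed to mirror the `megatron` structure shown above), a directory that only needs an ignore rule could be registered as:

```yaml
# Hypothetical entry; unspecified keys (type, depth) fall back to defaults
flagscale:
  subset:
    my_new_fold:
      ignore: test_wip_feature.py
```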
- View Locally:

  Open the following in a browser:
  `/workspace/report/${ID}/cov-report-${BACKEND}/index.html`
  - `ID`: Use `0` when running locally.
  - `BACKEND`: `flagscale` or `megatron`.
- View Online:

  Find the corresponding action for your PR in All Tests, open any unit test under `flagscale` or `megatron`, and click the address provided under `Unit Test Coverage Online Report` to view the test report.
- View Locally:
  - Run the command:

    ```shell
    # Ensure unit tests have been run locally before executing this command
    ./tests/scripts/unit_tests/test_coverage.sh --backend ${BACKEND} --status ${STATUS}
    ```

    Please set the following variables:
    - `BACKEND`: `flagscale` or `megatron`.
    - `STATUS`: `online` or `offline`.
  - View the report:

    Open the following in a browser:
    `/workspace/report/${ID}/diff-cover-report-${BACKEND}.html`

    Use these variables:
    - `ID`: Use `0` when running locally.
    - `BACKEND`: `flagscale` or `megatron`.
- View Online:

  Find the corresponding action for your PR in All Tests, open the `flagscale-coverage-test` or `megatron-coverage-test` jobs, and click the address under `Coverage Online Report` to view the test report online.
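Putting the local steps above together with illustrative values (an offline incremental-coverage check of the `megatron` backend), the command and the report path expand as follows:

```shell
# Illustrative values for a local, offline incremental-coverage check
BACKEND=megatron
STATUS=offline
ID=0   # ID is 0 when running locally
# Echoed so the expansions are visible; drop the first echo to run the check
echo ./tests/scripts/unit_tests/test_coverage.sh --backend "${BACKEND}" --status "${STATUS}"
echo /workspace/report/${ID}/diff-cover-report-${BACKEND}.html
```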
```shell
tests/scripts/functional_tests/test_model.sh --type ${TYPE} --model ${MODEL}
```
Please set the following variables:

- `TYPE`: The type of functional testing; supports `train` or `hetero_train`.
- `MODEL`: The model used for functional testing, in conjunction with `TYPE`. Specific models can be found under the `tests/functional_tests/test_cases` directory.
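For instance, a homogeneous training test of the `aquila` model (one of the models shipped under `tests/functional_tests/test_cases/train`) combines the variables like this:

```shell
# Illustrative values: the train type with the aquila model
TYPE=train
MODEL=aquila
# Echoed so the substitution is visible; drop the echo to run the test
echo tests/scripts/functional_tests/test_model.sh --type "${TYPE}" --model "${MODEL}"
```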
```shell
tests/scripts/functional_tests/test_all.sh
```
Find the corresponding action for your PR in All Tests to view the results.
- Update the functional test configuration file `tests/scripts/functional_tests/config.yml` to include relevant experiment configurations:

  ```yaml
  ...
  # Hardware mode: homogeneous or heterogeneous
  train:
    # Models used
    aquila:
      test_cases:
        # Parallel modes
        - tp2_pp2
        - tp4_pp2
  ...
  ```
- Add configuration files and test results in the appropriate directory. Directory file structure:

  ```
  tests/functional_tests/test_cases/train/aquila
  ├── conf
  │   ├── tp2_pp2.yaml        # Environment configuration file
  │   ├── tp4_pp2.yaml
  │   └── train
  │       ├── tp2_pp2.yaml    # Model configuration file
  │       └── tp4_pp2.yaml
  ├── results_gold
  │   ├── tp2_pp2.json        # Test result file
  │   └── tp4_pp2.json
  └── results_test
  ```
Note: We have included data and model files that you can use. For more details, consult the training configuration file of the respective test case. If you need to add your own test data or model files, please contact us.
- Modify the yml configuration file in the workflow to enable online testing:

  ```yaml
  ...
  # Functional Tests with Model and Type Matrix
  functional-tests-train:
    needs:
      - megatron-unit-tests
      - flagscale-unit-tests
    uses: ./.github/workflows/functional-tests.yml
    strategy:
      matrix:
        model:
          # Add the new model if applicable
          - {NEW_MODEL}
          - aquila
          - mixtral
    name: "train-${{ matrix.model }}"
    with:
      model: ${{ matrix.model }}
      type: train
  ...
  ```
- Run Manually:

  ```shell
  ./tests/scripts/format_tests/test_format.sh
  ```
- Run via Pre-commit:

  ```shell
  pre-commit install
  ```

  Code format checks will then run automatically upon committing.
- Online Format Check:

  Find the corresponding action for your PR in the Format Check to view the results.