- New HugeCTR to ONNX Converter: We're introducing a new HugeCTR to ONNX converter in the form of a Python package. The converter takes the graph configuration file and the trained model weights as inputs, and you can specify where to save the converted ONNX model. Sparse embedding models can also be converted. For more information, refer to HugeCTR to ONNX Converter and the HugeCTR2ONNX Demo Notebook.
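  A minimal, hedged sketch of invoking the converter, assuming the `hugectr2onnx.converter.convert` entry point described in the converter documentation; all file paths are placeholders:

  ```python
  import hugectr2onnx

  # Convert a trained HugeCTR model (graph JSON + weight files) to ONNX.
  hugectr2onnx.converter.convert(
      onnx_model_path="dcn.onnx",              # where to save the converted ONNX model
      graph_config="dcn.json",                 # dumped HugeCTR graph configuration file
      dense_model="dcn_dense_2000.model",      # trained dense weights
      convert_embedding=True,                  # also convert the sparse embedding model(s)
      sparse_models=["dcn0_sparse_2000.model"])
  ```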
- New Hierarchical Storage Mechanism on the Parameter Server (POC): We've implemented a hierarchical storage mechanism between local SSDs and CPU memory. As a result, embedding tables no longer have to be stored in the local CPU memory. The distributed Redis cluster is introduced as a CPU cache to store larger embedding tables and interact with the GPU embedding cache directly. The local RocksDB serves as a query engine to back up the complete embedding table on the local SSDs and assist the Redis cluster with looking up missing embedding keys. Please find more information here.
- Parquet Format Support Within the Data Generator: The HugeCTR data generator now supports the Parquet format, which can be configured easily using the Python API. For more information, refer to Data Generator API.
- Python Interface Support for the Data Generator: The data generator has been enabled within the HugeCTR Python interface. The parameters associated with the data generator have been encapsulated into the `DataGeneratorParams` struct, which is required to initialize the `DataGenerator` instance. You can use the data generator's Python APIs to easily generate the Norm, Parquet, or Raw dataset formats with the desired distribution of sparse keys. For more information, refer to Data Generator API and Data Generator Samples.
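  A hedged sketch of generating a Parquet dataset through the Python interface; the dimensions, vocabulary sizes, and paths are illustrative:

  ```python
  import hugectr
  from hugectr.tools import DataGeneratorParams, DataGenerator

  # Describe the synthetic dataset to generate.
  params = DataGeneratorParams(
      format=hugectr.DataReaderType_t.Parquet,
      label_dim=1,
      dense_dim=13,
      num_slot=26,
      source="./parquet_data/file_list.txt",
      eval_source="./parquet_data/file_list_test.txt",
      slot_size_array=[10000] * 26,                 # per-slot vocabulary sizes
      check_type=hugectr.Check_t.Non,
      dist_type=hugectr.Distribution_t.PowerLaw,    # distribution of the sparse keys
      power_law_type=hugectr.PowerLaw_t.Short)
  DataGenerator(params).generate()
  ```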
- Improvements to the Formula of the Power Law Simulator within the Data Generator: We've modified the formula of the power law simulator within the data generator so that a positive alpha value is always produced, which is needed for most use cases. The alpha values for `Long`, `Medium`, and `Short` within the power law distribution are 0.9, 1.1, and 1.3, respectively. For more information, refer to Data Generator API.
- Support for Arbitrary Input and Output Tensors in the Concat and Slice Layers: The Concat and Slice layers now support any number of input and output tensors. Previously, these layers were limited to a maximum of four tensors.
- New Continuous Training Notebook: We've added a new notebook to demonstrate how to perform continuous training using the model oversubscription (also referred to as Embedding Training Cache) feature. For more information, refer to HugeCTR Continuous Training.
- New HugeCTR Contributor Guide: We've added a new HugeCTR Contributor Guide that explains how to contribute to HugeCTR, which may involve reporting and fixing a bug, introducing a new feature, or implementing a new or pending feature.
- Enhancements to Sparse Operation Kit (SOK): SOK now supports TensorFlow 2.5 and 2.6. We also added support for identity hashing, dynamic input, and Horovod within SOK. Lastly, we added a new SOK docs set to help you get started with SOK.
- MLPerf v1.0 Integration: We've integrated MLPerf optimizations for DLRM training and enabled them as configurable options in the Python interface. Specifically, we have incorporated the AsyncRaw data reader, HybridEmbedding, FusedReluBiasFullyConnectedLayer, the overlapped pipeline, holistic CUDA Graph, and more. The performance of 14-node DGX A100 DLRM training with the Python APIs is comparable to CLI usage. For more information, refer to HugeCTR Python Interface and DLRM Sample.
- Enhancements to the Python Interface: We've enhanced the Python interface for HugeCTR so that you no longer have to manually create a JSON configuration file. Our Python APIs can now be used to create the computation graph. They can also be used to dump the model graph as a JSON object and save the model weights as binary files so that continuous training and inference can take place. We've added an Inference API that takes Norm or Parquet datasets as input to facilitate the inference process. For more information, refer to HugeCTR Python Interface and HugeCTR Criteo Notebook.
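  A hedged sketch of the graph-dump part of this workflow, assuming `model` is an already constructed and compiled `hugectr.Model`; the argument values are illustrative:

  ```python
  # Dump the model graph as JSON and periodically save the weights as binary
  # snapshots so that continuous training and inference can reuse them.
  model.graph_to_json(graph_config_file="dcn.json")
  model.fit(max_iter=2000,
            display=200,
            eval_interval=1000,
            snapshot=1000,           # interval for writing dense/sparse weight files
            snapshot_prefix="dcn")
  ```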
- New Interface for Unified Embedding: We're introducing a new interface to simplify the use of embeddings and data readers. To help you specify the number of keys in each slot, we added `nnz_per_slot` and `is_fixed_length`. You can now directly configure how much memory to use by specifying `workspace_size_per_gpu_in_mb` instead of `max_vocabulary_size_per_gpu`. For convenience, `mean` or `sum` is used for combiners instead of 0 and 1. In cases where you don't know which embedding type you should use, you can specify `use_hash_table` and let HugeCTR automatically select the embedding type based on your configuration. For more information, refer to HugeCTR Python Interface.
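  A hedged sketch of the new interface, assuming `optimizer` was created with `hugectr.CreateOptimizer`; all sizes are illustrative:

  ```python
  # DataReaderSparseParam(sparse_name, nnz_per_slot, is_fixed_length, slot_num)
  sparse_param = hugectr.DataReaderSparseParam("data1", 2, False, 26)

  embedding = hugectr.SparseEmbedding(
      embedding_type=hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash,
      workspace_size_per_gpu_in_mb=75,     # memory budget, replaces max_vocabulary_size_per_gpu
      embedding_vec_size=16,
      combiner="sum",                      # "mean" or "sum" instead of 0 and 1
      sparse_embedding_name="sparse_embedding1",
      bottom_name="data1",
      optimizer=optimizer)
  ```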
- Multi-Node Support for Embedding Training Cache (MOS): We've enabled multi-node support for the embedding training cache. You can now train a model with a terabyte-size embedding table using one node or multiple nodes even if the entire embedding table can't fit into the GPU memory. We're also introducing the host memory (HMEM) based parameter server (PS) along with its SSD-based counterpart. If the sparse model can fit into the host memory of each training node, the optimized HMEM-based PS can provide better model loading and dumping performance with more effective bandwidth. For more information, refer to HugeCTR Python Interface.
- Enhancements to the Multi-Node TensorFlow Plugin: The multi-node TensorFlow plugin now supports multi-node synchronized training via `tf.distribute.MultiWorkerMirroredStrategy`. With minimal code changes, you can now easily scale your single-GPU training to multi-node, multi-GPU training. The plugin also supports multi-node synchronized training via Horovod. The inputs for the embedding plugins are now data parallel, so the data reader no longer needs to preprocess data for different GPUs based on concrete embedding algorithms. For more information, see our Sparse Operation Kit Demo.
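  A generic, hedged TensorFlow sketch of the multi-worker setup; the model-building helper and dataset are placeholders, and the SOK-specific embedding code is covered in the Sparse Operation Kit Demo:

  ```python
  import tensorflow as tf

  # Each worker derives its role from the TF_CONFIG environment variable.
  strategy = tf.distribute.MultiWorkerMirroredStrategy()
  with strategy.scope():
      model = build_model_with_sok_embedding()   # hypothetical helper that uses SOK embedding layers
      model.compile(optimizer="sgd", loss="binary_crossentropy")
  model.fit(train_dataset, epochs=1)             # train_dataset: a data-parallel tf.data.Dataset
  ```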
- NCF Model Support: We've added support for the NCF model, as well as the GMF and NeuMF variant models. With this enhancement, we're introducing a new element-wise multiplication layer and a HitRate evaluation metric. Sample code was added that demonstrates how to preprocess user-item interaction data and train an NCF model with it. New examples have also been added that demonstrate how to train NCF models using MovieLens datasets.
- DIN and DIEN Model Support: All of our layers support the DIN model. The following layers support the DIEN model: FusedReshapeConcat, FusedReshapeConcatGeneral, Gather, GRU, PReLUDice, ReduceMean, Scale, Softmax, and Sub. We also added sample code that demonstrates how to use the Amazon dataset to train the DIN model. See our DIN sample.
- Multi-Hot Support for Parquet Datasets: We've added multi-hot support for Parquet datasets, so you can now train models with a Parquet dataset that contains both one-hot and multi-hot slots.
- Mixed Precision (FP16) Support in More Layers: The MultiCross layer now supports mixed precision (FP16). All layers now support FP16.
- Mixed Precision (FP16) Support in Inference: We've added FP16 support for the inference pipeline. Therefore, dense layers can now adopt FP16 during inference.
- Optimizer State Enhancements for Continuous Training: You can now store optimizer states that are updated during continuous training as files, such as the Adam optimizer's first moment (m) and second moment (v). By default, the optimizer states are initialized with zeros, but you can specify a set of optimizer state files to recover their previous values. For more information about `dense_opt_states_file` and `sparse_opt_states_file`, refer to Python Interface.
- New Library File for GPU Embedding Cache Data: We've moved the header and source code of the GPU embedding cache data structure into a stand-alone folder and compiled it into a stand-alone library file. Just like HugeCTR, your application programs can now link directly against this new library file. For more information, refer to our GPU Embedding Cache ReadMe.
- Embedding Plugin Enhancements: We've moved all the embedding plugin files into a stand-alone folder. The embedding plugin can be used as a stand-alone Python module and works with TensorFlow to accelerate the embedding training process.
- Adagrad Support: Adagrad can now be used to optimize your embedding and network. To use it, change the optimizer type in the Optimizer layer and set the corresponding parameters.
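  A hedged sketch of selecting AdaGrad through the Python interface; the hyperparameter values are illustrative:

  ```python
  optimizer = hugectr.CreateOptimizer(
      optimizer_type=hugectr.Optimizer_t.AdaGrad,
      update_type=hugectr.Update_t.Global,
      initial_accu_value=0.0,    # initial value of the AdaGrad accumulator
      epsilon=1e-7)
  ```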
- New DLRM Inference Benchmark: We've added two detailed Jupyter notebooks to demonstrate how to train, deploy, and benchmark the performance of a deep learning recommendation model (DLRM) with HugeCTR. For more information, refer to our HugeCTR Inference Notebooks.
- FP16 Optimization: We've optimized the DotProduct, ELU, and Sigmoid layers based on `__half2` vectorized loads and stores, improving their device memory bandwidth utilization. MultiCross, FmOrder2, ReduceSum, and Multiply are the only layers that still need to be optimized for FP16.
- Synthetic Data Generator Enhancement: We've enhanced our synthetic data generator so that it can generate uniformly distributed datasets, as well as power-law based datasets. You can now specify the `vocabulary_size` and `max_nnz` per categorical feature instead of across all categorical features. For more information, refer to our user guide.
- Reduced Memory Allocation for Trained Model Exportation: To prevent the "Out of Memory" error message from displaying when exporting a trained model, which may include a very large embedding table, the amount of memory allocated by the related functions has been significantly reduced.
- Dropout Layer Enhancement: The Dropout layer is now compatible with CUDA Graph; it uses cuDNN by default so that it can be used with CUDA Graph.
- Inference Support: To streamline the recommender system workflow, we've implemented a custom HugeCTR backend on the NVIDIA Triton Inference Server. The HugeCTR backend leverages the embedding cache and parameter server to efficiently manage embeddings of different sizes and models in a hierarchical manner. For more information, refer to our inference repository.
- New High-Level API: You can now also construct and train your models using the Python interface with our new high-level API. For more information, refer to our preview example code to see how this new API works.
- FP16 Support in More Layers: All the layers except `MultiCross` support mixed precision mode. We've also optimized some of the FP16 layer implementations based on vectorized loads and stores.
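  A hedged sketch of enabling mixed precision through the solver in the Python interface; other solver arguments are omitted and the scaler value is illustrative:

  ```python
  solver = hugectr.CreateSolver(
      lr=0.001,
      batchsize=16384,
      vvgpu=[[0]],
      use_mixed_precision=True,   # run supported layers in FP16
      scaler=1024)                # loss-scaling factor for FP16 training
  ```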
- Enhanced TensorFlow Embedding Plugin: Our embedding plugin now supports `LocalizedSlotSparseEmbeddingHash` mode. With this enhancement, the DNN model no longer needs to be split into two parts since it now connects with the embedding op through `MirroredStrategy` within the embedding layer.
- Extended Model Oversubscription: We've extended the model oversubscription feature to support `LocalizedSlotSparseEmbeddingHash` and `LocalizedSlotSparseEmbeddingHashOneHot`.
- Epoch-Based Training Enhancement: The `num_epochs` option in the Solver clause can now be used with the `Raw` dataset format.
- Deprecation of the `eval_batches` Parameter: The `eval_batches` parameter has been deprecated and replaced with the `max_eval_batches` and `max_eval_samples` parameters. In epoch mode, these parameters control the maximum number of evaluations. An error message will appear when attempting to use the `eval_batches` parameter.
- `MultiplyLayer` Renamed: To clarify what the `MultiplyLayer` does, it was renamed to `WeightMultiplyLayer`.
- Optimized Initialization Time: HugeCTR's initialization time, which includes the GEMM algorithm search and parameter initialization, was significantly reduced.
- Sample Enhancements: Our samples now rely upon the Criteo 1TB Click Logs dataset instead of the Kaggle Display Advertising Challenge dataset. Our preprocessing scripts (Perl, Pandas, and NVTabular) have also been unified and simplified.
- Configurable DataReader Worker: You can now specify the number of data reader workers, which run in parallel, with the `num_workers` parameter. Its default value is 12. However, if you are using the Parquet data reader, you can't configure the `num_workers` parameter since it always corresponds to the number of active GPUs.
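  A hedged sketch of configuring the data reader workers through the Python interface; the paths are placeholders:

  ```python
  reader = hugectr.DataReaderParams(
      data_reader_type=hugectr.DataReaderType_t.Norm,
      source=["./file_list.txt"],
      eval_source="./file_list_test.txt",
      check_type=hugectr.Check_t.Sum,
      num_workers=12)   # parallel reader workers; not configurable for the Parquet reader
  ```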
- New Python Interface: To enhance the interoperability with NVTabular and other Python-based libraries, we're introducing a new Python interface for HugeCTR.
- HugeCTR Embedding with TensorFlow: To help users easily integrate HugeCTR's optimized embedding into their TensorFlow workflow, we now offer the HugeCTR embedding layer as a TensorFlow plugin. To better understand how to install, use, and verify it, see our Jupyter notebook tutorial. It also demonstrates how you can create a new Keras layer, `EmbeddingLayer`, based on the `hugectr.py` helper code that we provide.
- Model Oversubscription: To enable models with large embedding tables that exceed a single GPU's memory limit, we've added a new model oversubscription feature, giving you the ability to load a subset of an embedding table into the GPU in a coarse-grained, on-demand manner during the training stage.
- TF32 Support: We've added support for TensorFloat-32 (TF32), the new math mode of third-generation Tensor Cores, on Ampere GPUs. TF32 uses the same 10-bit mantissa as FP16 to ensure accuracy while providing the same range as FP32 by using an 8-bit exponent. Since TF32 is an internal data type that accelerates FP32 GEMM computations with Tensor Cores, you can simply turn it on with a newly added configuration option. For more information, refer to Solver.
- Enhanced AUC Implementation: To enhance the performance of our AUC computation on multi-node environments, we've redesigned our AUC implementation to improve how the computational load gets distributed across nodes.
- Epoch-Based Training: In addition to the `max_iter` parameter, you can now set the `num_epochs` parameter in the Solver clause within the configuration file. This mode can currently only be used with `Norm` dataset formats and their corresponding file lists. All dataset formats will be supported in the future.
- New Multi-Node Training Tutorial: To better support multi-node training use cases, we've added a new step-by-step tutorial.
- Power Law Distribution Support with Data Generator: Because of the increased need for generating a random dataset whose categorical features follow the power-law distribution, we've revised our data generation tool to support this use case. For additional information, refer to the `--long-tail` description [here](../docs/hugectr_user_guide.md#Generating Synthetic Data and Benchmarks).
- Multi-GPU Preprocessing Script for Criteo Samples: Multiple GPUs can now be used when preparing the dataset for our samples. For more information, see how `preprocess_nvt.py` is used to preprocess the Criteo dataset for the DCN, DeepFM, and W&D samples.
- Since the automatic plan file generator isn't able to handle systems that contain one GPU, you must manually create a JSON plan file with the following parameters and rename it using the name listed in the HugeCTR configuration file: `{"type": "all2all", "num_gpus": 1, "main_gpu": 0, "num_steps": 1, "num_chunks": 1, "plan": [[0, 0]], "chunks": [1]}`.
- If you are using a system that contains two GPUs with two NVLink connections, the auto plan file generator will print the following warning message: `RuntimeWarning: divide by zero encountered in true_divide`. This is an erroneous warning message and should be ignored.
- The current plan file generator doesn't support a system where the NVSwitch or a full peer-to-peer connection between all nodes is unavailable.
- Users need to set the `CUDA_DEVICE_ORDER=PCI_BUS_ID` environment variable (for example, `export CUDA_DEVICE_ORDER=PCI_BUS_ID`) to ensure that the CUDA runtime and driver have a consistent GPU numbering.
- `LocalizedSlotSparseEmbeddingOneHot` only supports a single-node machine where all the GPUs are fully connected, such as via NVSwitch.
- HugeCTR version 3.0 crashes when running the DLRM sample on DGX-2 due to a CUDA Graph issue. To run the sample on DGX-2, disable CUDA Graph by setting the `cuda_graph` parameter to false, even if it degrades performance a bit. This issue doesn't exist when using the DGX A100.
- The HugeCTR embedding TensorFlow plugin only works with single-node machines.
- The HugeCTR embedding TensorFlow plugin assumes that the input keys are in `int64` and its output is in `float`.
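  For example, keys produced as `int32` should be cast before being fed to the plugin (a generic TensorFlow sketch):

  ```python
  import tensorflow as tf

  keys = tf.constant([[1, 3], [5, 7]], dtype=tf.int32)
  keys = tf.cast(keys, tf.int64)   # the plugin expects int64 keys; its output is float32
  ```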
- If the number of samples in a dataset is not divisible by the batch size when in epoch mode and using the `num_epochs` parameter instead of `max_iter`, a few remaining samples are truncated. If the training dataset is large enough, the impact is negligible. If you want to minimize the number of wasted batches, try adjusting the number of data reader workers. For example, when using a file list source, set the `num_workers` parameter to a divisor of the number of data files in the file list.
- The `MultiCross` layer doesn't support mixed precision mode yet.