From 7d0be8e19d4a8d0a57714147d995486c941197ff Mon Sep 17 00:00:00 2001
From: Evan Li
Date: Fri, 2 May 2025 05:44:57 -0700
Subject: [PATCH] update doc

---
 docsrc/RELEASE_CHECKLIST.md             |  3 +-
 docsrc/getting_started/installation.rst | 39 ++++---------------------
 docsrc/user_guide/runtime.rst           |  2 --
 3 files changed, 6 insertions(+), 38 deletions(-)

diff --git a/docsrc/RELEASE_CHECKLIST.md b/docsrc/RELEASE_CHECKLIST.md
index 0900d0b2bd..aae506164f 100644
--- a/docsrc/RELEASE_CHECKLIST.md
+++ b/docsrc/RELEASE_CHECKLIST.md
@@ -63,9 +63,8 @@ will result in a minor version bump and significant bug fixes will result in a p
 - Paste in Milestone information and Changelog information into release notes
 - Generate libtorchtrt.tar.gz for the following platforms:
     - x86_64 cxx11-abi
-    - x86_64 pre-cxx11-abi
     - TODO: Add cxx11-abi build for aarch64 when a manylinux container for aarch64 exists
-- Generate Python packages for Python 3.6/3.7/3.8/3.9 for x86_64
+- Generate Python packages for supported Python versions for x86_64
     - TODO: Build a manylinux container for aarch64
 - `docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh` generates all wheels
     - To build container `docker build -t build_torch_tensorrt_wheel .`
diff --git a/docsrc/getting_started/installation.rst b/docsrc/getting_started/installation.rst
index ba061191ae..700d7ee74d 100644
--- a/docsrc/getting_started/installation.rst
+++ b/docsrc/getting_started/installation.rst
@@ -203,37 +203,20 @@ To build with debug symbols use the following command
 
 A tarball with the include files and library can then be found in ``bazel-bin``
 
-Pre CXX11 ABI Build
-............................
-
-To build using the pre-CXX11 ABI use the ``pre_cxx11_abi`` config
-
-.. code-block:: shell
-
-    bazel build //:libtorchtrt --config pre_cxx11_abi -c [dbg/opt]
-
-A tarball with the include files and library can then be found in ``bazel-bin``
-
-
 .. _abis:
 
 Choosing the Right ABI
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
-Likely the most complicated thing about compiling Torch-TensorRT is selecting the correct ABI. There are two options
-which are incompatible with each other, pre-cxx11-abi and the cxx11-abi. The complexity comes from the fact that while
-the most popular distribution of PyTorch (wheels downloaded from pytorch.org/pypi directly) use the pre-cxx11-abi, most
-other distributions you might encounter (e.g. ones from NVIDIA - NGC containers, and builds for Jetson as well as certain
-libtorch builds and likely if you build PyTorch from source) use the cxx11-abi. It is important you compile Torch-TensorRT
-using the correct ABI to function properly. Below is a table with general pairings of PyTorch distribution sources and the
-recommended commands:
+In older versions, there were two mutually incompatible ABI options for compiling Torch-TensorRT:
+pre-cxx11-abi and cxx11-abi. The complexity came from the fact that different PyTorch distributions
+used different ABIs. Fortunately, PyTorch has since switched to the cxx11-abi for all distributions.
+Below is a table with general pairings of PyTorch distribution sources and the recommended commands:
 
 +-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
 | PyTorch Source                                              | Recommended Python Compilation Command                   | Recommended C++ Compilation Command                                |
 +=============================================================+==========================================================+====================================================================+
-| PyTorch whl file from PyTorch.org                           | python -m pip install .                                  | bazel build //:libtorchtrt -c opt \-\-config pre_cxx11_abi         |
-+-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
-| libtorch-shared-with-deps-*.zip from PyTorch.org            | python -m pip install .                                  | bazel build //:libtorchtrt -c opt \-\-config pre_cxx11_abi         |
+| PyTorch whl file from PyTorch.org                           | python -m pip install .                                  | bazel build //:libtorchtrt -c opt                                  |
 +-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
 | libtorch-cxx11-abi-shared-with-deps-*.zip from PyTorch.org  | python setup.py bdist_wheel                              | bazel build //:libtorchtrt -c opt                                  |
 +-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
@@ -339,10 +322,6 @@ To build natively on aarch64-linux-gnu platform, configure the ``WORKSPACE`` wit
 
 In the case that you installed with ``sudo pip install`` this will be ``/usr/local/lib/python3.8/dist-packages/torch``. In the case you installed with ``pip install --user`` this will be ``$HOME/.local/lib/python3.8/site-packages/torch``.
 
-In the case you are using NVIDIA compiled pip packages, set the path for both libtorch sources to the same path. This is because unlike
-PyTorch on x86_64, NVIDIA aarch64 PyTorch uses the CXX11-ABI. If you compiled for source using the pre_cxx11_abi and only would like to
-use that library, set the paths to the same path but when you compile make sure to add the flag ``--config=pre_cxx11_abi``
-
 .. code-block:: shell
 
     new_local_repository(
@@ -351,12 +330,6 @@ use that library, set the paths to the same path but when you compile make sure
         build_file = "third_party/libtorch/BUILD"
     )
 
-    new_local_repository(
-        name = "libtorch_pre_cxx11_abi",
-        path = "/usr/local/lib/python3.8/dist-packages/torch",
-        build_file = "third_party/libtorch/BUILD"
-    )
-
 Compile C++ Library and Compiler CLI
 ........................................................
 
@@ -385,6 +358,4 @@ Compile the Python API using the following command from the ``//py`` directory:
 
     python3 setup.py install
 
-If you have a build of PyTorch that uses Pre-CXX11 ABI drop the ``--use-pre-cxx11-abi`` flag
-
 If you are building for Jetpack 4.5 add the ``--jetpack-version 5.0`` flag
diff --git a/docsrc/user_guide/runtime.rst b/docsrc/user_guide/runtime.rst
index 5ca842514e..fc73aef8ac 100644
--- a/docsrc/user_guide/runtime.rst
+++ b/docsrc/user_guide/runtime.rst
@@ -22,8 +22,6 @@ link ``libtorchtrt_runtime.so`` in your deployment programs or use ``DL_OPEN`` o
 you can load the runtime with ``torch.ops.load_library("libtorchtrt_runtime.so")``. You can then continue to use
 programs just as you would otherwise via PyTorch API.
 
-.. note:: If you are using the standard distribution of PyTorch in Python on x86, likely you will need the pre-cxx11-abi variant of ``libtorchtrt_runtime.so``, check :ref:`Installation` documentation for more details.
-
 .. note:: If you are linking ``libtorchtrt_runtime.so``, likely using the following flags will help ``-Wl,--no-as-needed -ltorchtrt -Wl,--as-needed`` as there's no direct symbol dependency to anything in the Torch-TensorRT runtime for most Torch-TensorRT runtime applications
 
 An example of how to use ``libtorchtrt_runtime.so`` can be found here: https://github.com/pytorch/TensorRT/tree/master/examples/torchtrt_runtime_example
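The runtime note kept by this patch recommends wrapping the runtime library in ``-Wl,--no-as-needed ... -Wl,--as-needed`` when linking. As a rough sketch of how those flags fit into a link line (the source file, output name, library paths, and exact library set below are hypothetical placeholders, not part of this patch):

```shell
# Hypothetical link line for a deployment program that executes Torch-TensorRT
# compiled modules; adjust paths and library names to your install layout.
#
# --no-as-needed forces the linker to record the runtime as a dependency even
# though the program references none of its symbols directly (the runtime
# registers its TorchScript ops and classes from static initializers when the
# shared object is loaded); --as-needed restores the default for what follows.
g++ deploy_app.cpp -o deploy_app \
    -L/opt/torch_tensorrt/lib -L/opt/libtorch/lib \
    -Wl,--no-as-needed -ltorchtrt_runtime -Wl,--as-needed \
    -ltorch -ltorch_cpu -lc10
```

Without the ``--no-as-needed`` guard, a linker running in ``--as-needed`` mode would drop the runtime entirely, and deserializing a Torch-TensorRT program would fail at load time with unresolved custom ops.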