The nestedtensor package prototype
If you are here because you ran into a runtime error due to a missing feature or some kind of bug, please open an issue and fill in the appropriate template. If you have general feedback about this prototype, you can use our suggested template or simply open a free-form issue. Thank you for contributing to this project!
If you are new to this project, we recommend you take a look at our whirlwind introduction to get started.
Please see the list of currently supported operators and open an issue if your project needs one that's not listed.
The nestedtensor project is built on top of a torch fork for improved interoperability, and it ships with torchvision binaries built against this fork. To use NestedTensors you need to install this version of torch, which is frequently rebased onto PyTorch's viable/strict branch (the most recent master where all tests pass).
Version | Python | CUDA | Wheels |
---|---|---|---|
0.1.1 | 3.6 | CPU-only | torch, nestedtensor, torchvision |
0.1.1 | 3.7 | CPU-only | torch, nestedtensor, torchvision |
0.1.1 | 3.8 | CPU-only | torch, nestedtensor, torchvision |
In general we batch data for efficiency, but batched kernels usually need, or at least greatly benefit from, regular, statically shaped data.
One way of dealing with dynamic shapes, then, is via padding and masking. Various projects construct masks that, together with a data Tensor, serve as a representation for lists of dynamically shaped Tensors.
This is inefficient from a memory and compute perspective if the Tensors within such a list are sufficiently diverse in shape.
You can also trace through a codebase where these masks are used and observe the kind of code this approach tends to produce. See for example universal_sentence_embedding.
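As a hedged, minimal sketch of this padding-and-masking pattern (toy data; not the project's actual code):

```python
import torch

# Three dynamically shaped Tensors, e.g. embeddings for sentences of
# different lengths (toy data for illustration).
sentences = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(7, 8)]

# Pad everything to the longest length and record which entries are real.
max_len = max(t.size(0) for t in sentences)
padded = torch.zeros(len(sentences), max_len, 8)
mask = torch.zeros(len(sentences), max_len, dtype=torch.bool)
for i, t in enumerate(sentences):
    padded[i, : t.size(0)] = t
    mask[i, : t.size(0)] = True

# Every downstream operation must thread the mask through by hand,
# e.g. a mean over the variable-length dimension:
summed = (padded * mask.unsqueeze(-1)).sum(1)
mean = summed / mask.sum(1, keepdim=True)
```

Note how the mask has to accompany the data Tensor through every subsequent computation; this bookkeeping is exactly what tends to accumulate in mask-based codebases.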
PyTorch also offers one-off operator support that aims to handle dynamic shapes via extra arguments such as a padding index. While these functions are fast and sometimes memory efficient, they don't provide a consistent interface.
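Two real examples of such one-off support in core PyTorch are nn.Embedding's padding_idx and F.cross_entropy's ignore_index (toy data below):

```python
import torch
import torch.nn.functional as F

# nn.Embedding's padding_idx: index 0 maps to a zero vector that
# never receives gradient updates.
emb = torch.nn.Embedding(num_embeddings=100, embedding_dim=16, padding_idx=0)
tokens = torch.tensor([[4, 7, 2, 0, 0],
                       [9, 2, 5, 1, 3]])  # 0 pads the shorter row
vectors = emb(tokens)  # shape (2, 5, 16); padded slots are all zeros

# F.cross_entropy's ignore_index: padded targets contribute no loss.
logits = torch.randn(6, 10)
targets = torch.tensor([3, 1, 4, -100, 5, -100])
loss = F.cross_entropy(logits, targets, ignore_index=-100)
```

Each of these arguments solves the dynamic-shape problem for exactly one operator, in its own way.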
Other users simply gave up and started writing for-loops, or discovered that batching didn't help.
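That fallback tends to look something like this minimal sketch (same toy data as above):

```python
import torch

# The for-loop fallback: process each dynamically shaped Tensor on
# its own, forgoing batched kernels entirely.
sentences = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(7, 8)]
means = torch.stack([t.mean(0) for t in sentences])
```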
We want a single abstraction that is consistent, fast, memory efficient and readable, and the nestedtensor project aims to provide it.
NestedTensors are a generalization of torch Tensors that eases working with data of varying shape and length. In a nutshell, Tensors have scalar entries (e.g. floats), while NestedTensors have Tensor entries. Note, however, that a NestedTensor is still a Tensor: it must have a single dimension, a single dtype, a single device and a single layout.
Tensor entry constraints:
- Each Tensor constituent is of the dtype, layout and device of the containing NestedTensor.
- The dimension of a constituent Tensor must be less than the dimension of the NestedTensor.
- An empty NestedTensor is of dimension zero.
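To make these constraints concrete, here is a hedged construction sketch; it assumes the package's nested_tensor constructor and Tensor-style methods such as dim() and unbind():

```python
import torch
import nestedtensor

# Two constituents of different lengths; both share the dtype, layout
# and device of the containing NestedTensor.
nt = nestedtensor.nested_tensor([
    torch.randn(5, 8),
    torch.randn(3, 8),
])

print(nt.dtype)   # torch.float32, shared by every constituent
print(nt.dim())   # 3: one nested dimension plus the constituents' two
print([t.shape for t in nt.unbind()])  # recover the constituent shapes
```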
The nestedtensor package is a prototype intended for early-stage feedback and testing. It is on the road to a beta classification, but there is no definitive timeline yet. See PyTorch feature classification for what prototype, beta and stable mean.
It is developed against a fork of PyTorch to enable cutting-edge features such as improved performance and better torch.vmap integration.
Developers will thus need to build from source, but users can rely on the binaries we will start shipping soon (see the related issue).
If you want to use the binaries, you need to run on Linux, use Python 3.8+ and have a CUDA 11 toolkit installed.
If you want to build from source you can probably get it to work on many platforms, but supporting platforms other than Linux won't take priority. However, we're happy to review community contributions that achieve this.
Dependencies
- pytorch (installed from the nestedtensor/third_party/pytorch submodule)
- torchvision (needed for examples and tests)
- ipython (needed for examples)
- notebook (needed for examples)
Get the source

```
git clone --recursive https://github.com/pytorch/nestedtensor
cd nestedtensor

# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
```
Install the build tools

```
conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests
conda install -c pytorch magma-cuda110
```
Build from scratch

```
./clean_build_with_submodule.sh
```
Incremental builds

```
./build_with_submodule.sh
```
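Once a build finishes, a quick smoke test along these lines can confirm the install (a hypothetical check, assuming the nested_tensor constructor shown earlier):

```python
import torch
import nestedtensor

# Importing the package and constructing a trivial NestedTensor is a
# reasonable sanity check for a fresh build.
print(nestedtensor.nested_tensor([torch.randn(2, 3), torch.randn(4, 3)]))
```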
The project is under active development. If you have a suggestion or found a bug, please file an issue!