From 586e0148080896970feb2dcb7483fdf29136a57e Mon Sep 17 00:00:00 2001 From: Yuxin Wu Date: Mon, 25 May 2020 15:37:08 -0700 Subject: [PATCH] Add pre-built packages with combination of (cuda, torch) versions Summary: Pull Request resolved: https://github.com/fairinternal/detectron2/pull/410 Test Plan: github {F238145512} readthedocs: {F238145513} Reviewed By: rbgirshick Differential Revision: D21710420 Pulled By: ppwwyyxx fbshipit-source-id: da055598e77d1ec095c685666c6a02f73f0fe403 --- GETTING_STARTED.md | 14 +++---- INSTALL.md | 65 ++++++++++++++++++------------ detectron2/utils/collect_env.py | 52 +++++++++++++++--------- dev/packaging/build_all_wheels.sh | 36 +++++++++-------- dev/packaging/build_wheel.sh | 15 +++---- dev/packaging/gen_install_table.py | 42 +++++++++++++++++++ dev/packaging/gen_wheel_index.sh | 26 ++++++++++-- docker/Dockerfile-circleci | 1 - docs/_static/custom.css | 19 +++++++++ docs/conf.py | 1 + docs/tutorials/datasets.md | 2 +- 11 files changed, 189 insertions(+), 84 deletions(-) create mode 100755 dev/packaging/gen_install_table.py create mode 100644 docs/_static/custom.css diff --git a/GETTING_STARTED.md b/GETTING_STARTED.md index acaf13f02c..462a8c5fb7 100644 --- a/GETTING_STARTED.md +++ b/GETTING_STARTED.md @@ -13,8 +13,8 @@ For more advanced tutorials, refer to our [documentation](https://detectron2.rea ### Inference Demo with Pre-trained Models 1. Pick a model and its config file from - [model zoo](MODEL_ZOO.md), - for example, `mask_rcnn_R_50_FPN_3x.yaml`. + [model zoo](MODEL_ZOO.md), + for example, `mask_rcnn_R_50_FPN_3x.yaml`. 2. We provide `demo.py` that is able to run builtin standard models. Run it with: ``` cd demo/ @@ -47,15 +47,15 @@ then run: ``` cd tools/ ./train_net.py --num-gpus 8 \ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml + --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml ``` The configs are made for 8-GPU training. 
To train on 1 GPU, you may need to [change some parameters](https://arxiv.org/abs/1706.02677), e.g.: ``` ./train_net.py \ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \ - --num-gpus 1 SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025 + --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \ + --num-gpus 1 SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025 ``` For most models, CPU training is not supported. @@ -63,8 +63,8 @@ For most models, CPU training is not supported. To evaluate a model's performance, use ``` ./train_net.py \ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \ - --eval-only MODEL.WEIGHTS /path/to/checkpoint_file + --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \ + --eval-only MODEL.WEIGHTS /path/to/checkpoint_file ``` For more options, see `./train_net.py -h`. diff --git a/INSTALL.md b/INSTALL.md index a4b4d406e9..c4a1ef9f6d 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -9,7 +9,7 @@ also installs detectron2 with a few simple commands. - Linux or macOS with Python ≥ 3.6 - PyTorch ≥ 1.4 - [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation. - You can install them together at [pytorch.org](https://pytorch.org) to make sure of this. + You can install them together at [pytorch.org](https://pytorch.org) to make sure of this. - OpenCV, optional, needed by demo and visualization - pycocotools: `pip install cython; pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'` @@ -34,20 +34,34 @@ To __rebuild__ detectron2 that's built from a local clone, use `rm -rf build/ ** old build first. You often need to rebuild detectron2 after reinstalling PyTorch. 
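The 1-GPU override above follows the linear scaling rule from the linked paper: the learning rate scales with the total batch size. A minimal sketch of that arithmetic (`scale_lr` is a hypothetical helper, not a detectron2 API; the 8-GPU baseline of `IMS_PER_BATCH=16`, `BASE_LR=0.02` is an assumption based on the standard configs):

```python
def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear scaling rule: LR is proportional to the total batch size."""
    return base_lr * new_batch / base_batch

# Assumed 8-GPU baseline: IMS_PER_BATCH=16, BASE_LR=0.02.
# Shrinking to IMS_PER_BATCH=2 on one GPU yields the 0.0025 used above.
print(scale_lr(0.02, 16, 2))
```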
### Install Pre-Built Detectron2 (Linux only)
-```
-# for CUDA 10.2:
-python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/index.html
-```
-For other cuda versions, replace cu102 with "cu{101,92}" or "cpu".
+
+Choose from this table:
+
+<table class="docutils"><tbody>
+<tr><th width="80"> CUDA </th><th valign="bottom" align="left" width="100">torch 1.5</th><th valign="bottom" align="left" width="100">torch 1.4</th></tr>
+<tr><td align="left">10.2</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
+  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.5/index.html
+</code></pre></details></td><td align="left"></td></tr>
+<tr><td align="left">10.1</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
+  https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.5/index.html
+</code></pre></details></td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
+  https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.4/index.html
+</code></pre></details></td></tr>
+<tr><td align="left">10.0</td><td align="left"></td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
+  https://dl.fbaipublicfiles.com/detectron2/wheels/cu100/torch1.4/index.html
+</code></pre></details></td></tr>
+<tr><td align="left">9.2</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
+  https://dl.fbaipublicfiles.com/detectron2/wheels/cu92/torch1.5/index.html
+</code></pre></details></td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
+  https://dl.fbaipublicfiles.com/detectron2/wheels/cu92/torch1.4/index.html
+</code></pre></details></td></tr>
+<tr><td align="left">cpu</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
+  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.5/index.html
+</code></pre></details></td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
+  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.4/index.html
+</code></pre></details></td></tr>
+</tbody></table>
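The (CUDA, torch) table above can also be expressed as a small lookup, which is handy when scripting the install. This is a hypothetical sketch (`wheel_index` and `BUILT` are not part of detectron2); it only covers the combinations the table lists:

```python
# Combinations with a pre-built package, taken from the install table.
BUILT = {
    ("cu102", "1.5"), ("cu101", "1.5"), ("cu92", "1.5"), ("cpu", "1.5"),
    ("cu101", "1.4"), ("cu100", "1.4"), ("cu92", "1.4"), ("cpu", "1.4"),
}

def wheel_index(cuda: str, torch: str) -> str:
    """Return the pip find-links index URL for a (CUDA, torch) pair."""
    if (cuda, torch) not in BUILT:
        raise ValueError(f"no pre-built detectron2 for {cuda}/torch{torch}")
    return (
        "https://dl.fbaipublicfiles.com/detectron2/wheels/"
        f"{cuda}/torch{torch}/index.html"
    )

print(wheel_index("cu102", "1.5"))
```

The printed URL is what you would pass to `pip install detectron2 -f ...`.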
+ Note that: -1. Such installation has to be used with certain version of official PyTorch release. - See [releases](https://github.com/facebookresearch/detectron2/releases) for requirements. +1. The pre-built package has to be used with the corresponding version of CUDA and the official PyTorch release. It will not work with a different version of PyTorch or a non-official build of PyTorch. - The CUDA version used by PyTorch and detectron2 has to match as well. 2. Such installation is out-of-date w.r.t. master branch of detectron2. It may not be - compatible with the master branch of a research project that uses detectron2 (e.g. those in - [projects](projects) or [meshrcnn](https://github.com/facebookresearch/meshrcnn/)). + compatible with the master branch of a research project that uses detectron2 (e.g. those in + [projects](projects) or [meshrcnn](https://github.com/facebookresearch/meshrcnn/)). ### Common Installation Issues @@ -119,25 +133,25 @@ Two possibilities: To check whether it is the case, use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions. - In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA" - to contain cuda libraries of the same version. + In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA" + to contain cuda libraries of the same version. - When they are inconsistent, - you need to either install a different build of PyTorch (or build by yourself) - to match your local CUDA installation, or install a different version of CUDA to match PyTorch. + When they are inconsistent, + you need to either install a different build of PyTorch (or build by yourself) + to match your local CUDA installation, or install a different version of CUDA to match PyTorch. * Detectron2 or PyTorch/torchvision is not built for the correct GPU architecture (compute compatibility).
- The GPU architecture for PyTorch/detectron2/torchvision is available in the "architecture flags" in - `python -m detectron2.utils.collect_env`. + The GPU architecture for PyTorch/detectron2/torchvision is available in the "architecture flags" in + `python -m detectron2.utils.collect_env`. - The GPU architecture flags of detectron2/torchvision by default matches the GPU model detected - during compilation. This means the compiled code may not work on a different GPU model. - To overwrite the GPU architecture for detectron2/torchvision, use `TORCH_CUDA_ARCH_LIST` environment variable during compilation. + The GPU architecture flags of detectron2/torchvision by default match the GPU model detected + during compilation. This means the compiled code may not work on a different GPU model. + To overwrite the GPU architecture for detectron2/torchvision, use the `TORCH_CUDA_ARCH_LIST` environment variable during compilation. - For example, `export TORCH_CUDA_ARCH_LIST=6.0,7.0` makes it compile for both P100s and V100s. - Visit [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus) to find out - the correct compute compatibility number for your device. + For example, `export TORCH_CUDA_ARCH_LIST="6.0;7.0"` makes it compile for both P100s and V100s. + Visit [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus) to find out + the correct compute capability number for your device.
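As a concrete illustration of the `TORCH_CUDA_ARCH_LIST` advice above, here is a hedged sketch. The `CAPABILITY` table is a small assumed subset of NVIDIA's published compute capabilities and `arch_list` is a hypothetical helper; PyTorch's extension builder accepts semicolon- or space-separated values:

```python
import os

# Compute capability per GPU model (subset, from developer.nvidia.com/cuda-gpus).
CAPABILITY = {"P100": "6.0", "V100": "7.0", "T4": "7.5"}

def arch_list(*gpus: str) -> str:
    # Build a TORCH_CUDA_ARCH_LIST value covering every listed GPU model.
    return ";".join(sorted({CAPABILITY[g] for g in gpus}))

# Compile for both P100s and V100s:
os.environ["TORCH_CUDA_ARCH_LIST"] = arch_list("P100", "V100")
print(os.environ["TORCH_CUDA_ARCH_LIST"])
```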
+ 1. NVCC version has to match the CUDA version of your PyTorch. 2. NVCC has compatibility issues with certain versions of gcc. You may need a different - version of gcc. The version used by PyTorch can be found by `print(torch.__config__.show())`. + version of gcc. The version used by PyTorch can be found by `print(torch.__config__.show())`. diff --git a/detectron2/utils/collect_env.py b/detectron2/utils/collect_env.py index c25b99cb0a..260c485e21 100644 --- a/detectron2/utils/collect_env.py +++ b/detectron2/utils/collect_env.py @@ -71,18 +71,39 @@ def collect_env_info(): ) except ImportError: data.append(("detectron2", "failed to import")) + + try: + from detectron2 import _C + except ImportError: + data.append(("detectron2._C", "failed to import")) + + # print system compilers when extension fails to build + if sys.platform != "win32": # don't know what to do for windows + try: + # this is how torch/utils/cpp_extensions.py choose compiler + cxx = os.environ.get("CXX", "c++") + cxx = subprocess.check_output("'{}' --version".format(cxx), shell=True) + cxx = cxx.decode("utf-8").strip().split("\n")[0] + except subprocess.SubprocessError: + cxx = "Not found" + data.append(("Compiler", cxx)) + + if has_cuda and CUDA_HOME is not None: + try: + nvcc = os.path.join(CUDA_HOME, "bin", "nvcc") + nvcc = subprocess.check_output("'{}' -V".format(nvcc), shell=True) + nvcc = nvcc.decode("utf-8").strip().split("\n")[-1] + except subprocess.SubprocessError: + nvcc = "Not found" + data.append(("CUDA compiler", nvcc)) else: - try: - from detectron2 import _C - except ImportError: - data.append(("detectron2._C", "failed to import")) - else: - data.append(("detectron2 compiler", _C.get_compiler_version())) - data.append(("detectron2 CUDA compiler", _C.get_cuda_version())) - if has_cuda: - data.append( - ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, _C.__file__)) - ) + # print compilers that are used to build extension + data.append(("Compiler", 
_C.get_compiler_version())) + data.append(("CUDA compiler", _C.get_cuda_version())) + if has_cuda: + data.append( + ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, _C.__file__)) + ) data.append(get_env_module()) data.append(("PyTorch", torch.__version__ + " @" + os.path.dirname(torch.__file__))) @@ -100,15 +121,6 @@ def collect_env_info(): data.append(("CUDA_HOME", str(CUDA_HOME))) - if CUDA_HOME is not None and os.path.isdir(CUDA_HOME): - try: - nvcc = os.path.join(CUDA_HOME, "bin", "nvcc") - nvcc = subprocess.check_output("'{}' -V | tail -n1".format(nvcc), shell=True) - nvcc = nvcc.decode("utf-8").strip() - except subprocess.SubprocessError: - nvcc = "Not Available" - data.append(("NVCC", nvcc)) - cuda_arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST", None) if cuda_arch_list: data.append(("TORCH_CUDA_ARCH_LIST", cuda_arch_list)) diff --git a/dev/packaging/build_all_wheels.sh b/dev/packaging/build_all_wheels.sh index eb64dea70c..a1b0a3a171 100755 --- a/dev/packaging/build_all_wheels.sh +++ b/dev/packaging/build_all_wheels.sh @@ -1,10 +1,14 @@ #!/bin/bash -e # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -PYTORCH_VERSION=1.5 +[[ -d "dev/packaging" ]] || { + echo "Please run this script at detectron2 root!" + exit 1 +} -build_for_one_cuda() { +build_one() { cu=$1 + pytorch_ver=$2 case "$cu" in cu*) @@ -29,29 +33,27 @@ build_for_one_cuda() { cat < install
python -m pip install detectron2 -f \\
+  https://dl.fbaipublicfiles.com/detectron2/wheels/{cuda}/torch{torch}/index.html
+
""" +CUDA_SUFFIX = {"10.2": "cu102", "10.1": "cu101", "10.0": "cu100", "9.2": "cu92", "cpu": "cpu"} + + +def gen_header(torch_versions): + return '' + "".join( + [ + ''.format(t) + for t in torch_versions + ] + ) + + +if __name__ == "__main__": + all_versions = [("1.4", k) for k in ["10.1", "10.0", "9.2", "cpu"]] + [ + ("1.5", k) for k in ["10.2", "10.1", "9.2", "cpu"] + ] + + torch_versions = sorted({k[0] for k in all_versions}, key=float, reverse=True) + cuda_versions = sorted( + {k[1] for k in all_versions}, key=lambda x: float(x) if x != "cpu" else 0, reverse=True + ) + + table = gen_header(torch_versions) + for cu in cuda_versions: + table += f""" """ + cu_suffix = CUDA_SUFFIX[cu] + for torch in torch_versions: + if (torch, cu) in all_versions: + cell = template.format(cuda=cu_suffix, torch=torch) + else: + cell = "" + table += f""" """ + table += "" + table += "
CUDA torch {}
{cu}{cell}
" + print(table) diff --git a/dev/packaging/gen_wheel_index.sh b/dev/packaging/gen_wheel_index.sh index 44d6041cdf..1313ab3b67 100755 --- a/dev/packaging/gen_wheel_index.sh +++ b/dev/packaging/gen_wheel_index.sh @@ -8,20 +8,38 @@ if [[ -z "$root" ]]; then exit fi +export LC_ALL=C # reproducible sort +# NOTE: all sort in this script might not work when xx.10 is released + index=$root/index.html cd "$root" for cu in cpu cu92 cu100 cu101 cu102; do - cd $cu + cd "$root/$cu" echo "Creating $PWD/index.html ..." - for whl in *.whl; do + # First sort by torch version, then stable sort by d2 version with unique. + # As a result, the latest torch version for each d2 version is kept. + for whl in $(find -type f -name '*.whl' -printf '%P\n' \ + | sort -k 1 -r | sort -t '/' -k 2 --stable -r --unique); do echo "$whl
" done > index.html - cd "$root" + + + for torch in torch*; do + cd "$root/$cu/$torch" + + # list all whl for each cuda,torch version + echo "Creating $PWD/index.html ..." + for whl in $(find . -type f -name '*.whl' -printf '%P\n' | sort -r); do + echo "$whl
" + done > index.html + done done +cd "$root" +# Just list everything: echo "Creating $index ..." -for whl in $(find . -type f -name '*.whl' -printf '%P\n' | sort); do +for whl in $(find . -type f -name '*.whl' -printf '%P\n' | sort -r); do echo "$whl
" done > "$index" diff --git a/docker/Dockerfile-circleci b/docker/Dockerfile-circleci index bc0be845ad..e0416329b6 100644 --- a/docker/Dockerfile-circleci +++ b/docker/Dockerfile-circleci @@ -11,7 +11,6 @@ RUN wget -q https://bootstrap.pypa.io/get-pip.py && \ rm get-pip.py # install dependencies -# See https://pytorch.org/ for other options if you use a different version of CUDA RUN pip install tensorboard cython RUN pip install torch==1.5+cu101 torchvision==0.6+cu101 -f https://download.pytorch.org/whl/torch_stable.html RUN pip install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI' diff --git a/docs/_static/custom.css b/docs/_static/custom.css new file mode 100644 index 0000000000..d9a375b0d2 --- /dev/null +++ b/docs/_static/custom.css @@ -0,0 +1,19 @@ +/* + * some extra css to make markdown look similar between github/sphinx + */ + +/* + * Below is for install.md: + */ +.rst-content code { + white-space: pre; + border: 0px; +} + +th { + border: 1px solid #e1e4e5; +} + +div.section > details { + padding-bottom: 1em; +} diff --git a/docs/conf.py b/docs/conf.py index 44e9f2b4db..78700fc51d 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -184,6 +184,7 @@ def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ["_static"] +html_css_files = ["css/custom.css"] # Custom sidebar templates, must be a dictionary that maps document names # to template names. diff --git a/docs/tutorials/datasets.md b/docs/tutorials/datasets.md index 8dc1c0c555..1d44d208b9 100644 --- a/docs/tutorials/datasets.md +++ b/docs/tutorials/datasets.md @@ -64,7 +64,7 @@ and the required fields vary based on what the dataloader or the task needs (see It must be a member of [structures.BoxMode](../modules/structures.html#detectron2.structures.BoxMode). 
Currently supports: `BoxMode.XYXY_ABS`, `BoxMode.XYWH_ABS`. - + `category_id` (int): an integer in the range [0, num_categories) representing the category label. + + `category_id` (int): an integer in the range [0, num_categories-1] representing the category label. The value num_categories is reserved to represent the "background" category, if applicable. + `segmentation` (list[list[float]] or dict): the segmentation mask of the instance. + If `list[list[float]]`, it represents a list of polygons, one for each connected component
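A minimal sketch of one record in the dataset format described above. Field values are invented, and the string `"XYWH_ABS"` stands in for `structures.BoxMode.XYWH_ABS` so the sketch stays stdlib-only:

```python
NUM_CATEGORIES = 80  # e.g. COCO; assumption for this sketch

record = {
    "file_name": "images/0001.jpg",
    "height": 480,
    "width": 640,
    "annotations": [
        {
            "bbox": [100.0, 120.0, 40.0, 60.0],  # x, y, width, height in pixels
            "bbox_mode": "XYWH_ABS",             # stands in for BoxMode.XYWH_ABS
            "category_id": 17,                   # must lie in [0, NUM_CATEGORIES - 1]
            # One polygon per connected component, as flat x, y coordinates:
            "segmentation": [[100.0, 120.0, 140.0, 120.0, 140.0, 180.0]],
        }
    ],
}

# The label range constraint from the text: num_categories itself is
# reserved for "background", so valid labels stop at NUM_CATEGORIES - 1.
for ann in record["annotations"]:
    assert 0 <= ann["category_id"] < NUM_CATEGORIES
print("ok")
```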