Misc. improvements to documentation/build setup for first-time builds. (apache#7840)

- Makefile
  - The "crttest" target ignored the OUTPUTDIR variable

- .gitignore
  - Added ignores for downloaded test data/models.

- docs/README.txt
  - Added missing quotes on the sphinx dep and pinned the autodocsumm version

- docs/contribute/pull_request.rst
  - Use "ci_lint" docker image
  - Updated C++ test instructions to refer to the from_source installation for gtest.
  - Updated Python test instructions with the synr package dependency

- docs/langref/relay_expr.rst
  - Updated reference for example usage of TempExpr. `src/relay/pass/alter_op_layout.cc`
    no longer exists, and `src/relay/transforms/alter_op_layout.cc` doesn't use TempExpr.
    Picked a different use case as an example.

- tests/scripts/task_cpp_unittest.sh
  - Updated "make crttest" to run only if "USE_MICRO" is enabled.  While USE_MICRO is always enabled
    in the CI builds, task_cpp_unittest.sh is also recommended for use in
    docs/install/from_source.rst, which does not mandate USE_MICRO.

- docs/install/from_source.rst
  - Added -DMAKE_SHARED_LIBS=ON to the Google Test CMake config.  By default, only static libs are
    generated for gtest, while TVM's build preferentially selects the shared libs.

- tutorials/get_started/auto_tuning_with_python.py
  - Changed the norm_img_data computation to avoid a loop and improve readability

- tutorials/get_started/relay_quick_start.py
  - The previous version passed different input data to the initial and deployed modules, then
    asserted that the results should be the same.  Modified so that the same input data is passed
    in both cases.

Co-authored-by: Eric Lunderberg <[email protected]>
Lunderberg authored Apr 14, 2021
1 parent 1e9c1bf commit ced4cee
Showing 11 changed files with 37 additions and 34 deletions.
5 changes: 5 additions & 0 deletions .gitignore
@@ -233,3 +233,8 @@ conda/pkg
# nix files
.envrc
*.nix

# Downloaded models/datasets
.tvm_test_data
.dgl
.caffe2
2 changes: 1 addition & 1 deletion Makefile
@@ -52,7 +52,7 @@ cpptest:
@mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) cpptest

crttest:
@mkdir -p build && cd build && cmake .. && $(MAKE) crttest
@mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) crttest

# EMCC; Web related scripts
EMCC_FLAGS= -std=c++11\
3 changes: 2 additions & 1 deletion docs/README.txt
@@ -3,7 +3,8 @@ TVM Documentations
This folder contains the source of TVM documents

- A hosted version of doc is at https://tvm.apache.org/docs
- pip install sphinx>=1.5.5 sphinx-gallery sphinx_rtd_theme matplotlib Image recommonmark "Pillow<7" autodocsumm tlcpack-sphinx-addon
- pip install "sphinx>=1.5.5" sphinx-gallery sphinx_rtd_theme matplotlib Image recommonmark "Pillow<7" "autodocsumm<0.2.0" tlcpack-sphinx-addon
- (Versions 0.2.0 to 0.2.2 of autodocsumm are incompatible with sphinx>=3.4, https://github.com/Chilipp/autodocsumm/pull/42 )
- Build tvm first in the root folder.
- Run the following command
```bash
23 changes: 8 additions & 15 deletions docs/contribute/pull_request.rst
@@ -41,15 +41,15 @@ This is a quick guide to submit a pull request, please also refer to the detailed
# While the lint commands used should be identical to those run in CI, this command reproduces
# the CI lint procedure exactly (typically helpful for debugging lint script errors).
docker/bash.sh tlcpack/ci-lint ./tests/scripts/task_lint.sh
docker/bash.sh ci_lint ./tests/scripts/task_lint.sh
When the clang-format lint check fails, run git-clang-format as follows to automatically reformat
your code:

.. code:: bash
# Run clang-format check for all the files that changed since upstream/main
docker/bash.sh tlcpack/ci-lint ./tests/lint/git-clang-format.sh upstream/main
docker/bash.sh ci_lint ./tests/lint/git-clang-format.sh upstream/main
- Add test-cases to cover the new features or bugfix the patch introduces.
- Document the code you wrote, see more at :ref:`doc_guide`
@@ -88,35 +88,28 @@ Here is the protocol to update CI image:

Testing
-------
Even though we have hooks to run unit tests automatically for each pull request, It's always recommended to run unit tests
Even though we have hooks to run unit tests automatically for each pull request, it's always recommended to run unit tests
locally beforehand to reduce reviewers' burden and speedup review process.

Running the C++ tests requires installation of gtest, following the instructions in
:ref:`install-from-source-cpp-tests`

C++
^^^
.. code:: bash
# assume you are in tvm source root
TVM_ROOT=`pwd`
# you need to install google test first, gtest will be installed to $TVM_ROOT/lib
apt-get install -y libgtest-dev
CACHE_PREFIX=. make -f 3rdparty/dmlc-core/scripts/packages.mk gtest
mkdir build
cd build
GTEST_LIB=$TVM_ROOT/lib cmake -DUSE_LLVM=ON ..
make cpptest -j$(nproc)
for test in *_test; do
./$test
done
./tests/scripts/task_cpp_unittest.sh
Python
^^^^^^
Necessary dependencies:

.. code:: bash
pip install --user pytest Cython
pip install --user pytest Cython synr
If you want to run all tests:

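
Note: the diff is truncated just before the doc shows the pytest invocation. As a hedged illustration of running a subset of the Python tests (the keyword and test directory below are examples, not necessarily the exact command in the doc):

```python
# Sketch: run a subset of TVM's Python unit tests via pytest's Python API,
# roughly equivalent to `python -m pytest -k "topi" tests/python/unittest`.
# Assumes pytest and the synr dependency noted above are installed.
import sys

import pytest

sys.exit(pytest.main(["-k", "topi", "tests/python/unittest"]))
```
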
4 changes: 3 additions & 1 deletion docs/install/from_source.rst
@@ -260,6 +260,8 @@ Install Contrib Libraries
nnpack


.. _install-from-source-cpp-tests:

Enable C++ Tests
----------------
We use `Google Test <https://github.com/google/googletest>`_ to drive the C++
@@ -271,7 +273,7 @@ tests in TVM. The easiest way to install GTest is from source.
cd googletest
mkdir build
cd build
cmake ..
cmake -DMAKE_SHARED_LIBS=ON ..
make
sudo make install
8 changes: 4 additions & 4 deletions docs/langref/relay_expr.rst
@@ -685,9 +685,9 @@ code but may be inserted in a pass. Any :code:`TempExpr` created in a pass
should ideally be eliminated before the pass is complete, as a
:code:`TempExpr` only stores internal state and has no semantics of its own.

For an example of :code:`TempExpr` being used in a pass,
see :code:`src/relay/pass/alter_op_layout.cc`, which uses :code:`TempExpr` nodes
to store information about operator layouts as the pass tries to rearrange operator
calls.
For an example of :code:`TempExpr` being used in a pass, see
:code:`src/relay/transforms/fold_scale_axis.cc`, which uses
:code:`TempExpr` nodes to store information about scaling parameters
as the pass tries to fold these into the weights of a convolution.

See :py:class:`~tvm.relay.expr.TempExpr` for its definition and documentation.
2 changes: 1 addition & 1 deletion python/tvm/tir/op.py
@@ -264,7 +264,7 @@ def any(*args, span=None):


def all(*args, span=None):
"""Create a new experssion of the intersection of all conditions in the
"""Create a new expression of the intersection of all conditions in the
arguments
Parameters
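
Note: only the docstring changes here; `tvm.tir.all` still folds its arguments into a single condition with logical AND (and `tvm.tir.any` is the OR counterpart). A minimal usage sketch, assuming an importable TVM build and with illustrative variable names:

```python
# Sketch: combine several bound checks into one TIR boolean expression.
import tvm
from tvm import tir

i = tir.Var("i", "int32")
j = tir.Var("j", "int32")

# tir.all AND-folds all of its arguments; tir.any would OR-fold them.
cond = tir.all(i >= 0, i < 16, j >= 0, j < 16)
print(cond)
```
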
7 changes: 6 additions & 1 deletion tests/scripts/task_cpp_unittest.sh
@@ -34,7 +34,12 @@ export OMP_NUM_THREADS=1
rm -f build/*_test

make cpptest -j2
make crttest # NOTE: don't parallelize, due to issue with build deps.

# "make crttest" requires USE_MICRO to be enabled, which is not always the case.
if grep crttest build/Makefile > /dev/null; then
make crttest # NOTE: don't parallelize, due to issue with build deps.
fi

for test in build/*_test; do
./$test
done
5 changes: 2 additions & 3 deletions tutorials/frontend/deploy_object_detection_pytorch.py
@@ -44,7 +44,7 @@
from tvm import relay
from tvm import relay
from tvm.runtime.vm import VirtualMachine
from tvm.contrib.download import download
from tvm.contrib.download import download_testdata

import numpy as np
import cv2
@@ -96,11 +96,10 @@ def forward(self, inp):
######################################################################
# Download a test image and pre-process
# -------------------------------------
img_path = "test_street_small.jpg"
img_url = (
"https://raw.githubusercontent.com/dmlc/web-data/" "master/gluoncv/detection/street_small.jpg"
)
download(img_url, img_path)
img_path = download_testdata(img_url, "test_street_small.jpg", module="data")

img = cv2.imread(img_path).astype("float32")
img = cv2.resize(img, (in_size, in_size))
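
Note: besides replacing the raw `download` helper, `download_testdata` caches the file and returns the local path, so repeated runs of the tutorial do not re-fetch the image. A hedged sketch of the call as used above (the URL and filename are the tutorial's own; the exact cache location depends on the environment):

```python
# Sketch: download_testdata returns a cached local path for the test image,
# so re-running the tutorial does not download it again.
from tvm.contrib.download import download_testdata

img_url = (
    "https://raw.githubusercontent.com/dmlc/web-data/"
    "master/gluoncv/detection/street_small.jpg"
)
img_path = download_testdata(img_url, "test_street_small.jpg", module="data")
print(img_path)  # local cached path
```
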
8 changes: 3 additions & 5 deletions tutorials/get_started/auto_tuning_with_python.py
@@ -127,11 +127,9 @@
img_data = np.transpose(img_data, (2, 0, 1))

# Normalize according to the ImageNet input specification
imagenet_mean = np.array([0.485, 0.456, 0.406])
imagenet_stddev = np.array([0.229, 0.224, 0.225])
norm_img_data = np.zeros(img_data.shape).astype("float32")
for i in range(img_data.shape[0]):
norm_img_data[i, :, :] = (img_data[i, :, :] / 255 - imagenet_mean[i]) / imagenet_stddev[i]
imagenet_mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
imagenet_stddev = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))
norm_img_data = (img_data / 255 - imagenet_mean) / imagenet_stddev

# Add the batch dimension, as we are expecting 4-dimensional input: NCHW.
img_data = np.expand_dims(norm_img_data, axis=0)
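
Note: the normalization change above is a pure refactor; broadcasting the reshaped mean/stddev over the CHW image gives the same result as the old per-channel loop. A small self-contained check (sketch only; the random image stands in for the tutorial's `img_data`):

```python
# Sketch: verify the broadcast normalization matches the old per-channel loop.
import numpy as np

img_data = np.random.randint(0, 256, size=(3, 224, 224)).astype("float32")

imagenet_mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
imagenet_stddev = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))

# New form: broadcast over the channel axis.
norm_broadcast = (img_data / 255 - imagenet_mean) / imagenet_stddev

# Old form: explicit per-channel loop.
norm_loop = np.zeros(img_data.shape, dtype="float32")
for i in range(img_data.shape[0]):
    norm_loop[i, :, :] = (img_data[i, :, :] / 255 - imagenet_mean[i]) / imagenet_stddev[i]

np.testing.assert_allclose(norm_broadcast, norm_loop, rtol=1e-6)
```
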
4 changes: 2 additions & 2 deletions tutorials/get_started/relay_quick_start.py
@@ -141,7 +141,7 @@

# load the module back.
loaded_lib = tvm.runtime.load_module(path_lib)
input_data = tvm.nd.array(np.random.uniform(size=data_shape).astype("float32"))
input_data = tvm.nd.array(data)

module = graph_executor.GraphModule(loaded_lib["default"](dev))
module.run(data=input_data)
@@ -151,4 +151,4 @@
print(out_deploy.flatten()[0:10])

# check whether the output from deployed module is consistent with original one
tvm.testing.assert_allclose(out_deploy, out, atol=1e-3)
tvm.testing.assert_allclose(out_deploy, out, atol=1e-5)
