How to load a TorchScript model in C++.
Reference: official PyTorch tutorial (YouTube)
The last step is building the application. For this, assume our example directory is laid out like this:
example-app/
  CMakeLists.txt
  example-app.cpp
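The example-app.cpp referenced above is not shown in this thread; a minimal sketch along the lines of the official tutorial, which simply deserializes the TorchScript module whose path is passed as the first command-line argument:

#include <torch/script.h> // One-stop header for TorchScript.

#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  torch::jit::script::Module module;
  try {
    // Deserialize the ScriptModule from a file using torch::jit::load().
    module = torch::jit::load(argv[1]);
  } catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }

  std::cout << "ok\n";
  return 0;
}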
We can now run the following commands to build the application from within the example-app/ folder:
mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
cmake --build . --config Release
From within the build folder, run make (or rely on the cmake --build command above) and then launch the application, passing the path to the traced model:
./example-app <path_to_model>/traced_resnet_model.pt
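Once the module loads, the natural next step is to run a forward pass on it. A sketch, assuming the traced ResNet above expects a single 1x3x224x224 float tensor as input:

// Goes inside main() in example-app.cpp, after torch::jit::load() succeeds;
// add #include <vector> at the top of the file.

// Create a dummy input: a batch of one 3x224x224 image filled with ones.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));

// Execute the model's forward method and convert the output IValue to a tensor.
at::Tensor output = module.forward(inputs).toTensor();

// Print the first five values of the output (e.g. the first five logits).
std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';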
The "compiler" is a separate package that needs to be installed. One called g++ can be installed on it's own and is also included within a bundle of packages called "build-essential".
Thus sudo apt-get install build-essential
solves the problem (and sudo apt-get install g++
should also work), allowing cmake ..
to work with no configuration necessary.
The solution was discussed here: Where to find <torch/torch.h>?
Solution: use the CMakeLists.txt from the "Installing C++ Distributions of PyTorch" guide:
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(example-app)
find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 17)
# The following code block is suggested to be used on Windows.
# According to https://github.com/pytorch/pytorch/issues/25457,
# the DLLs need to be copied to avoid memory errors.
if (MSVC)
  file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll")
  add_custom_command(TARGET example-app
                     POST_BUILD
                     COMMAND ${CMAKE_COMMAND} -E copy_if_different
                     ${TORCH_DLLS}
                     $<TARGET_FILE_DIR:example-app>)
endif (MSVC)