English | 中文文档 | MacOS | Linux | Windows
Lite.AI.ToolKit 🚀🚀🌟: A lite C++ toolkit of awesome AI models, which contains 70+ models now. It's a collection of personal interests, such as RVM, YOLOX, YOLOP, YOLOR, YoloV5, DeepLabV3, ArcFace, etc. It's not perfect yet 😞, so for now, let's regard it as a large collection of application cases for inference engines. Lite.AI.ToolKit is based on ONNXRuntime C++ by default. I do have plans to reimplement it with NCNN, MNN and TNN, and some models are already supported. Currently, I mainly focus on ease of use. Developers who need higher performance can build new optimizations on top of the C++ implementation and ONNX files provided by this repo~ Welcome to open a new PR 👏👋 if you want to add a new model to this repo.
Core Features 🚀🚀🌟
- ❤️ Simple and user-friendly. Simple and consistent syntax like lite::cv::Type::Class, see examples.
- ⚡ Minimum Dependencies. Only OpenCV and ONNXRuntime are required by default, see build.
- ❤️ Lots of Algorithm Modules. Contains 10+ modules and 70+ famous models with 300+ frozen pretrained .onnx/.mnn/.param&bin(ncnn)/.tnnmodel&tnnproto files now, such as object detection, face detection, face recognition, segmentation, matting, etc. See Model Zoo and lite.ai.toolkit.hub.onnx.md.
❤️ Star 🌟👆🏻 this repo if it helps you ~ 🙃🤪🍀
- 🔥 (20211002) Added NanoDet for object detection. ⚡ Super fast and tiny! 1.1Mb only! See demo.
- 🔥 (20210920) Added RobustVideoMatting as lite::cv::matting::RobustVideoMatting! See demo.
- 🔥 (20210915) Added YOLOP Panoptic 🚗 Perception as lite::cv::detection::YOLOP! See demo.
- ✅ (20210807) Added YoloR! Use it through the lite::cv::detection::YoloR syntax! See demo.
- ✅ (20210731) Added RetinaFace-CVPR2020 for face detection, 1.6Mb only! See demo.
- 🔥 (20210721) Added YOLOX! Use it through the lite::cv::detection::YoloX syntax! See demo.
Expand for More Notes.
- ✅ (20210815) Added EfficientDet for object detection! See demo.
- ✅ (20210808) Added ScaledYoloV4 for object detection! See demo.
- ✅ (20210807) Added TinyYoloV4VOC for object detection! See demo.
- ✅ (20210807) Added TinyYoloV4COCO for object detection! See demo.
- ✅ (20210722) Updated lite.ai.toolkit.hub.onnx.md! Lite.AI.ToolKit contains 70+ AI models with 150+ .onnx files now.
- ⚠️ (20210802) Added GPU compatibility for CUDAExecutionProvider. See issue#10.
- ⚠️ (20210801) Fixed issue#9: YOLOX inference error for non-square shapes. See yolox.cpp.
- ✅ (20210801) Added FaceBoxes for face detection! See demo.
- ✅ (20210727) Added MobileNetV2SE68 and PFLD68 for 68 facial landmarks detection! See demo.
- ✅ (20210726) Added PFLD98 for 98 facial landmarks detection! See demo.
Build the shared lib of Lite.AI.ToolKit for MacOS from sources. Note that Lite.AI.ToolKit uses onnxruntime as the default backend, because onnxruntime supports most of ONNX's operators.
⚠️ Expand for Linux and Windows notes.
- lite.ai.toolkit/opencv2
cp -r your-path-to-downloaded-or-built-opencv/include/opencv4/opencv2 lite.ai.toolkit/opencv2
- lite.ai.toolkit/onnxruntime
cp -r your-path-to-downloaded-or-built-onnxruntime/include/onnxruntime lite.ai.toolkit/onnxruntime
- lite.ai.toolkit/MNN
cp -r your-path-to-downloaded-or-built-MNN/include/MNN lite.ai.toolkit/MNN
- lite.ai.toolkit/ncnn
cp -r your-path-to-downloaded-or-built-ncnn/include/ncnn lite.ai.toolkit/ncnn
- lite.ai.toolkit/tnn
cp -r your-path-to-downloaded-or-built-TNN/include/tnn lite.ai.toolkit/tnn
and put the libs into the lite.ai.toolkit/lib directory. Please refer to the build-docs1 for third_party.
- lite.ai.toolkit/lib
cp your-path-to-downloaded-or-built-opencv/lib/*opencv* lite.ai.toolkit/lib
cp your-path-to-downloaded-or-built-onnxruntime/lib/*onnxruntime* lite.ai.toolkit/lib
cp your-path-to-downloaded-or-built-MNN/lib/*MNN* lite.ai.toolkit/lib
cp your-path-to-downloaded-or-built-ncnn/lib/*ncnn* lite.ai.toolkit/lib
cp your-path-to-downloaded-or-built-TNN/lib/*TNN* lite.ai.toolkit/lib
- Windows: You can refer to issue#6.
- Linux: The docs and Docker image for Linux will be coming soon ~ issue#2
- Happy News!!! 🚀 You can download the latest official prebuilt ONNXRuntime libs for Windows, Linux, MacOS and Arm!!! Both CPU and GPU versions are available. No more need to build it from source; download the official prebuilt libs from v1.8.1. I have used version 1.7.0 for Lite.AI.ToolKit for now; you can download it from v1.7.0, but version 1.8.1 should also work, I guess ~ 🙃🤪🍀 For OpenCV, try building from source (Linux) or download the official prebuilt package (Windows) from OpenCV 4.5.3. Then put the includes and libs into the corresponding directories of Lite.AI.ToolKit.
git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git # latest
cd lite.ai.toolkit && sh ./build.sh # On MacOS, you can use the built OpenCV, ONNXRuntime, MNN, NCNN and TNN libs in this repo.
- GPU Compatibility: See issue#10.
- To link Lite.AI.ToolKit, you can follow the CMakeLists.txt listed below.
cmake_minimum_required(VERSION 3.17)
project(lite.ai.toolkit.demo)
set(CMAKE_CXX_STANDARD 11)
# setting up lite.ai.toolkit
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITE_AI_INCLUDE_DIR ${LITE_AI_DIR}/include)
set(LITE_AI_LIBRARY_DIR ${LITE_AI_DIR}/lib)
include_directories(${LITE_AI_INCLUDE_DIR})
link_directories(${LITE_AI_LIBRARY_DIR})
set(OpenCV_LIBS
opencv_highgui
opencv_core
opencv_imgcodecs
opencv_imgproc
opencv_video
opencv_videoio
)
# add your executable
set(EXECUTABLE_OUTPUT_PATH ${CMAKE_SOURCE_DIR}/examples/build)
add_executable(lite_rvm examples/test_lite_rvm.cpp)
target_link_libraries(lite_rvm
lite.ai.toolkit
onnxruntime
MNN # need, if built lite.ai.toolkit with ENABLE_MNN=ON, default OFF
ncnn # need, if built lite.ai.toolkit with ENABLE_NCNN=ON, default OFF
TNN # need, if built lite.ai.toolkit with ENABLE_TNN=ON, default OFF
${OpenCV_LIBS}) # link lite.ai.toolkit & other libs.
Expand for more details on how to link the shared lib of Lite.AI.ToolKit.
cd ./build/lite.ai.toolkit/lib && otool -L liblite.ai.toolkit.0.0.1.dylib
liblite.ai.toolkit.0.0.1.dylib:
@rpath/liblite.ai.toolkit.0.0.1.dylib (compatibility version 0.0.1, current version 0.0.1)
@rpath/libopencv_highgui.4.5.dylib (compatibility version 4.5.0, current version 4.5.2)
@rpath/libonnxruntime.1.7.0.dylib (compatibility version 0.0.0, current version 1.7.0)
...
cd ../ && tree .
├── bin
├── include
│ ├── lite
│ │ ├── backend.h
│ │ ├── config.h
│ │ └── lite.h
│ └── ort
└── lib
└── liblite.ai.toolkit.0.0.1.dylib
- Run the built examples:
cd ./build/lite.ai.toolkit/bin && ls -lh | grep lite
-rwxr-xr-x 1 root staff 301K Jun 26 23:10 liblite.ai.toolkit.0.0.1.dylib
...
-rwxr-xr-x 1 root staff 196K Jun 26 23:10 lite_yolov4
-rwxr-xr-x 1 root staff 196K Jun 26 23:10 lite_yolov5
...
./lite_yolov5
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
...
detected num_anchors: 25200
generate_bboxes num: 66
Default Version Detected Boxes Num: 5
- To link the lite.ai.toolkit shared lib, you need to make sure that OpenCV and onnxruntime are linked correctly. Just like:
cmake_minimum_required(VERSION 3.17)
project(lite.ai.toolkit.demo)
set(CMAKE_CXX_STANDARD 11)
# setting up lite.ai.toolkit
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITE_AI_INCLUDE_DIR ${LITE_AI_DIR}/include)
set(LITE_AI_LIBRARY_DIR ${LITE_AI_DIR}/lib)
include_directories(${LITE_AI_INCLUDE_DIR})
link_directories(${LITE_AI_LIBRARY_DIR})
set(OpenCV_LIBS
opencv_highgui
opencv_core
opencv_imgcodecs
opencv_imgproc
opencv_video
opencv_videoio
)
# add your executable
set(EXECUTABLE_OUTPUT_PATH ${CMAKE_SOURCE_DIR}/examples/build)
add_executable(lite_rvm examples/test_lite_rvm.cpp)
target_link_libraries(lite_rvm
lite.ai.toolkit
onnxruntime
MNN
ncnn
${OpenCV_LIBS}) # link lite.ai.toolkit & other libs.
A minimum example to show you how to link the shared lib of Lite.AI.ToolKit correctly for your own project can be found at lite.ai.toolkit.demo.
Lite.AI.ToolKit contains 70+ AI models with 300+ frozen pretrained .onnx/.mnn/.param&bin(ncnn)/.tnnmodel&tnnproto files now. Most of the files were converted by myself. You can use them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5, as shown in the sketch below. More details can be found at Examples for Lite.AI.ToolKit.
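For instance, a minimal sketch of this syntax (the model and image paths here are placeholders, not files shipped with the repo):
#include "lite/lite.h"
int main()
{
  // Every hub model is loaded through the same Type::Class pattern.
  auto *yolov5 = new lite::cv::detection::YoloV5("./hub/onnx/cv/yolov5s.onnx");
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("./test.jpg"); // placeholder test image
  yolov5->detect(img_bgr, detected_boxes);
  delete yolov5;
  return 0;
}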
Expand Details for Namespace and Lite.AI.ToolKit modules.
Namespace | Details |
---|---|
lite::cv::detection | Object Detection. one-stage and anchor-free detectors, YoloV5, YoloV4, SSD, etc. ✅ |
lite::cv::classification | Image Classification. DenseNet, ShuffleNet, ResNet, IBNNet, GhostNet, etc. ✅ |
lite::cv::faceid | Face Recognition. ArcFace, CosFace, CurricularFace, etc. ❇️ |
lite::cv::face | Face Analysis. detect, align, pose, attr, etc. ❇️ |
lite::cv::face::detect | Face Detection. UltraFace, RetinaFace, FaceBoxes, PyramidBox, etc. ❇️ |
lite::cv::face::align | Face Alignment. PFLD(106), FaceLandmark1000(1000 landmarks), PRNet, etc. ❇️ |
lite::cv::face::pose | Head Pose Estimation. FSANet, etc. ❇️ |
lite::cv::face::attr | Face Attributes. Emotion, Age, Gender. EmotionFerPlus, VGG16Age, etc. ❇️ |
lite::cv::segmentation | Object Segmentation. Such as FCN, DeepLabV3, etc. |
lite::cv::style | Style Transfer. Contains neural style transfer now, such as FastStyleTransfer. |
lite::cv::matting | Image Matting. Object and Human matting. |
lite::cv::colorization | Colorization. Make gray images become colorful. |
lite::cv::resolution | Super Resolution. |
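The same Type::Class pattern holds across all of these modules; a short sketch mixing namespaces (the model paths are placeholders, the classes are the ones documented below):
auto *detector = new lite::cv::detection::YoloX("yolox_s.onnx"); // object detection
auto *recognizer = new lite::cv::faceid::GlintArcFace("ms1mv3_arcface_r100.onnx"); // face recognition
auto *face_det = new lite::cv::face::detect::UltraFace("ultraface-rfb-640.onnx"); // face detection
auto *matting = new lite::cv::matting::RobustVideoMatting("rvm_mobilenetv3_fp32.onnx"); // matting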
Correspondence between the classes in Lite.AI.ToolKit and pretrained model files can be found at lite.ai.toolkit.hub.onnx.md. For example, the pretrained model files for lite::cv::detection::YoloV5 and lite::cv::detection::YoloX are listed as follows.
Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size |
---|---|---|---|
lite::cv::detection::YoloV5 | yolov5l.onnx | yolov5 (🔥🔥💥↑) | 188Mb |
lite::cv::detection::YoloV5 | yolov5m.onnx | yolov5 (🔥🔥💥↑) | 85Mb |
lite::cv::detection::YoloV5 | yolov5s.onnx | yolov5 (🔥🔥💥↑) | 29Mb |
lite::cv::detection::YoloV5 | yolov5x.onnx | yolov5 (🔥🔥💥↑) | 351Mb |
lite::cv::detection::YoloX | yolox_x.onnx | YOLOX (🔥🔥!!↑) | 378Mb |
lite::cv::detection::YoloX | yolox_l.onnx | YOLOX (🔥🔥!!↑) | 207Mb |
lite::cv::detection::YoloX | yolox_m.onnx | YOLOX (🔥🔥!!↑) | 97Mb |
lite::cv::detection::YoloX | yolox_s.onnx | YOLOX (🔥🔥!!↑) | 34Mb |
lite::cv::detection::YoloX | yolox_tiny.onnx | YOLOX (🔥🔥!!↑) | 19Mb |
lite::cv::detection::YoloX | yolox_nano.onnx | YOLOX (🔥🔥!!↑) | 3.5Mb |
This means that you can load any one of the yolov5*.onnx and yolox_*.onnx files according to your application, through the same Lite.AI.ToolKit classes, such as YoloV5, YoloX, etc.
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5x.onnx"); // for server
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5l.onnx");
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5m.onnx");
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx"); // for mobile device
auto *yolox = new lite::cv::detection::YoloX("yolox_x.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_l.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_m.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_s.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_tiny.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_nano.onnx"); // 3.5Mb only !
- Downloads: Note, for Google Drive, I can not upload all the *.onnx files because of the storage limitation (15G).
- ONNX files 👉 Baidu Drive code: 8gin && Google Drive. See lite.ai.toolkit.hub.onnx.md
- MNN files 👉 Baidu Drive code: 9v63 && Google Drive(Wait). See lite.ai.toolkit.hub.mnn.md
- NCNN files 👉 Baidu Drive code: sc7f && Google Drive(Wait). See lite.ai.toolkit.hub.ncnn.md
- TNN files 👉 Baidu Drive code: 6o6k && Google Drive(Wait). See lite.ai.toolkit.hub.tnn.md
- Object Detection.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
YoloV5 | 28M | yolov5 | 🔥🔥💥↑ | detection | ✅ | demo | |
YoloV3 | 236M | onnx-models | 🔥🔥🔥↑ | detection | ✅ | demo | |
TinyYoloV3 | 33M | onnx-models | 🔥🔥🔥↑ | detection | ✅ | demo | |
YoloV4 | 176M | YOLOv4... | 🔥🔥🔥↑ | detection | ✅ | demo | |
SSD | 76M | onnx-models | 🔥🔥🔥↑ | detection | ✅ | demo | |
SSDMobileNetV1 | 27M | onnx-models | 🔥🔥🔥↑ | detection | ✅ | demo | |
YoloX | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | ✅ | demo | |
TinyYoloV4VOC | 22M | yolov4-tiny... | 🔥🔥↑ | detection | ✅ | demo | |
TinyYoloV4COCO | 22M | yolov4-tiny... | 🔥🔥↑ | detection | ✅ | demo | |
YoloR | 39M | yolor | 🔥🔥↑ | detection | ✅ | demo | |
ScaledYoloV4 | 270M | ScaledYOLOv4 | 🔥🔥🔥↑ | detection | ✅ | demo | |
EfficientDet | 15M | ...EfficientDet... | 🔥🔥🔥↑ | detection | ✅ | demo | |
EfficientDetD7 | 220M | ...EfficientDet... | 🔥🔥🔥↑ | detection | ✅ | demo | |
EfficientDetD8 | 322M | ...EfficientDet... | 🔥🔥🔥↑ | detection | ✅ | demo | |
YOLOP | 30M | YOLOP | 🔥🔥↑ | detection | ✅ | demo | |
NanoDet | 1.1M | nanodet | 🔥🔥🔥↑ | detection | ✅ | demo | |
NanoDetEfficientNetLite | 12M | nanodet | 🔥🔥🔥↑ | detection | ✅ | demo |
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
YoloX | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | ✅ | demo | |
YOLOP | 30M | YOLOP | 🔥🔥↑ | detection | ✅ | demo | |
NanoDet | 1.1M | nanodet | 🔥🔥🔥↑ | detection | ✅ | demo | |
NanoDetEfficientNetLite | 12M | nanodet | 🔥🔥🔥↑ | detection | ✅ | demo |
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
YoloX | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | ✅ | demo | |
NanoDet | 1.1M | nanodet | 🔥🔥🔥↑ | detection | ✅ | demo | |
NanoDetEfficientNetLite | 12M | nanodet | 🔥🔥🔥↑ | detection | ✅ | demo | |
NanoDetDepreciated | 1.1M | nanodet | 🔥🔥🔥↑ | detection | ✅ | demo | |
NanoDetEfficientNetLiteD... | 12M | nanodet | 🔥🔥🔥↑ | detection | ✅ | demo |
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
YoloX | 3.5M | YOLOX | 🔥🔥🔥↑ | detection | ✅ | demo | |
YOLOP | 30M | YOLOP | 🔥🔥↑ | detection | ✅ | demo | |
NanoDet | 1.1M | nanodet | 🔥🔥🔥↑ | detection | ✅ | demo | |
NanoDetEfficientNetLite | 12M | nanodet | 🔥🔥🔥↑ | detection | ✅ | demo |
- Face Recognition.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
GlintArcFace | 92M | insightface | 🔥🔥🔥↑ | faceid | ✅ | demo | |
GlintCosFace | 92M | insightface | 🔥🔥🔥↑ | faceid | ✅ | demo | |
GlintPartialFC | 170M | insightface | 🔥🔥🔥↑ | faceid | ✅ | demo | |
FaceNet | 89M | facenet... | 🔥🔥🔥↑ | faceid | ✅ | demo | |
FocalArcFace | 166M | face.evoLVe... | 🔥🔥🔥↑ | faceid | ✅ | demo | |
FocalAsiaArcFace | 166M | face.evoLVe... | 🔥🔥🔥↑ | faceid | ✅ | demo | |
TencentCurricularFace | 249M | TFace | 🔥🔥↑ | faceid | ✅ | demo | |
TencentCifpFace | 130M | TFace | 🔥🔥↑ | faceid | ✅ | demo | |
CenterLossFace | 280M | center-loss... | 🔥🔥↑ | faceid | ✅ | demo | |
SphereFace | 80M | sphere... | 🔥🔥↑ | faceid | ✅️ | demo | |
PoseRobustFace | 92M | DREAM | 🔥🔥↑ | faceid | ✅️ | demo | |
NaivePoseRobustFace | 43M | DREAM | 🔥🔥↑ | faceid | ✅️ | demo | |
MobileFaceNet | 3.8M | MobileFace... | 🔥🔥↑ | faceid | ✅ | demo | |
CavaGhostArcFace | 15M | cavaface... | 🔥🔥↑ | faceid | ✅ | demo | |
CavaCombinedFace | 250M | cavaface... | 🔥🔥↑ | faceid | ✅ | demo | |
MobileSEFocalFace | 4.5M | face_recog... | 🔥🔥↑ | faceid | ✅ | demo |
- Matting.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
RobustVideoMatting | 14M | RobustVideoMatting | 🔥🔥🔥↑ | matting | ✅ | demo |
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
RobustVideoMatting | 14M | RobustVideoMatting | 🔥🔥🔥↑ | matting | ✅ | demo |
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
RobustVideoMatting | 14M | RobustVideoMatting | 🔥🔥🔥↑ | matting | code |
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
RobustVideoMatting | 14M | RobustVideoMatting | 🔥🔥🔥↑ | matting | ✅️ | demo |
⚠️ Expand More Details for Lite.AI.ToolKit's Model Zoo.
- Face Detection.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
UltraFace | 1.1M | Ultra-Light... | 🔥🔥🔥↑ | face::detect | ✅ | demo | |
RetinaFace | 1.6M | ...Retinaface | 🔥🔥🔥↑ | face::detect | ✅ | demo | |
FaceBoxes | 3.8M | FaceBoxes | 🔥🔥↑ | face::detect | ✅ | demo |
- Face Alignment.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
PFLD | 1.0M | pfld_106_... | 🔥🔥↑ | face::align | ✅ | demo | |
PFLD98 | 4.8M | PFLD... | 🔥🔥↑ | face::align | ✅️ | demo | |
MobileNetV268 | 9.4M | ...landmark | 🔥🔥↑ | face::align | ✅️️ | demo | |
MobileNetV2SE68 | 11M | ...landmark | 🔥🔥↑ | face::align | ✅️️ | demo | |
PFLD68 | 2.8M | ...landmark | 🔥🔥↑ | face::align | ✅️ | demo | |
FaceLandmark1000 | 2.0M | FaceLandm... | 🔥↑ | face::align | ✅️ | demo |
- Head Pose Estimation.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
FSANet | 1.2M | ...fsanet... | 🔥↑ | face::pose | ✅ | demo |
- Face Attributes.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
AgeGoogleNet | 23M | onnx-models | 🔥🔥🔥↑ | face::attr | ✅ | demo | |
GenderGoogleNet | 23M | onnx-models | 🔥🔥🔥↑ | face::attr | ✅ | demo | |
EmotionFerPlus | 33M | onnx-models | 🔥🔥🔥↑ | face::attr | ✅ | demo | |
VGG16Age | 514M | onnx-models | 🔥🔥🔥↑ | face::attr | ✅ | demo | |
VGG16Gender | 512M | onnx-models | 🔥🔥🔥↑ | face::attr | ✅ | demo | |
SSRNet | 190K | SSR_Net... | 🔥↑ | face::attr | ✅ | demo | |
EfficientEmotion7 | 15M | face-emo... | 🔥↑ | face::attr | ✅️ | demo | |
EfficientEmotion8 | 15M | face-emo... | 🔥↑ | face::attr | ✅ | demo | |
MobileEmotion7 | 13M | face-emo... | 🔥↑ | face::attr | ✅ | demo | |
ReXNetEmotion7 | 30M | face-emo... | 🔥↑ | face::attr | ✅ | demo |
- Classification.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
EfficientNetLite4 | 49M | onnx-models | 🔥🔥🔥↑ | classification | ✅ | demo | |
ShuffleNetV2 | 8.7M | onnx-models | 🔥🔥🔥↑ | classification | ✅ | demo | |
DenseNet121 | 30.7M | torchvision | 🔥🔥🔥↑ | classification | ✅ | demo | |
GhostNet | 20M | torchvision | 🔥🔥🔥↑ | classification | ✅ | demo | |
HdrDNet | 13M | torchvision | 🔥🔥🔥↑ | classification | ✅ | demo | |
IBNNet | 97M | torchvision | 🔥🔥🔥↑ | classification | ✅ | demo | |
MobileNetV2 | 13M | torchvision | 🔥🔥🔥↑ | classification | ✅ | demo | |
ResNet | 44M | torchvision | 🔥🔥🔥↑ | classification | ✅ | demo | |
ResNeXt | 95M | torchvision | 🔥🔥🔥↑ | classification | ✅ | demo |
- Segmentation.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
DeepLabV3ResNet101 | 232M | torchvision | 🔥🔥🔥↑ | segmentation | ✅ | demo | |
FCNResNet101 | 207M | torchvision | 🔥🔥🔥↑ | segmentation | ✅ | demo |
- Style Transfer.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
FastStyleTransfer | 6.4M | onnx-models | 🔥🔥🔥↑ | style | ✅ | demo |
- Colorization.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
Colorizer | 123M | colorization | 🔥🔥🔥↑ | colorization | ✅ | demo |
- Super Resolution.
Class | Size | From | Awesome | File | Type | State | Usage |
---|---|---|---|---|---|---|---|
SubPixelCNN | 234K | ...PIXEL... | 🔥↑ | resolution | ✅ | demo |
More examples can be found at lite.ai.toolkit.examples.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";
auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread(test_img_path);
yolov5->detect(img_bgr, detected_boxes);
lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
cv::imwrite(save_img_path, img_bgr);
delete yolov5;
}
The output is:
Or you can use the newest 🔥🔥 YOLO-series detectors, YOLOX or YoloR; they produce similar results.
Example1: Video Matting using RobustVideoMatting2021🔥🔥🔥. Download model from Model-Zoo2.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
std::vector<lite::types::MattingContent> contents;
// 1. video matting.
rvm->detect_video(video_path, output_path, contents, false, 0.4f);
delete rvm;
}
The output is:
Example2: 1000 Facial Landmarks Detection using FaceLandmarks1000. Download model from Model-Zoo2.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);
lite::types::Landmarks landmarks;
cv::Mat img_bgr = cv::imread(test_img_path);
face_landmarks_1000->detect(img_bgr, landmarks);
lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
cv::imwrite(save_img_path, img_bgr);
delete face_landmarks_1000;
}
The output is:
Example3: Colorization using colorization. Download model from Model-Zoo2.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
cv::Mat img_bgr = cv::imread(test_img_path);
lite::types::ColorizeContent colorize_content;
colorizer->detect(img_bgr, colorize_content);
if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
delete colorizer;
}
The output is:
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";
auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);
lite::types::FaceContent face_content0, face_content1, face_content2;
cv::Mat img_bgr0 = cv::imread(test_img_path0);
cv::Mat img_bgr1 = cv::imread(test_img_path1);
cv::Mat img_bgr2 = cv::imread(test_img_path2);
glint_arcface->detect(img_bgr0, face_content0);
glint_arcface->detect(img_bgr1, face_content1);
glint_arcface->detect(img_bgr2, face_content2);
if (face_content0.flag && face_content1.flag && face_content2.flag)
{
float sim01 = lite::utils::math::cosine_similarity<float>(
face_content0.embedding, face_content1.embedding);
float sim02 = lite::utils::math::cosine_similarity<float>(
face_content0.embedding, face_content2.embedding);
std::cout << "Detected Sim01: " << sim << " Sim02: " << sim02 << std::endl;
}
delete glint_arcface;
}
The output is:
Detected Sim01: 0.721159 Sim02: -0.0626267
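For reference, lite::utils::math::cosine_similarity computes the standard cosine similarity of the two embeddings; a minimal standalone sketch of that math (assuming two non-zero vectors of equal length):
#include <cmath>
#include <vector>
// cos(a, b) = dot(a, b) / (||a|| * ||b||), in [-1, 1]; higher means more similar faces.
static float cosine_similarity(const std::vector<float> &a, const std::vector<float> &b)
{
  float dot = 0.f, norm_a = 0.f, norm_b = 0.f;
  for (std::size_t i = 0; i < a.size(); ++i)
  {
    dot += a[i] * b[i];
    norm_a += a[i] * a[i];
    norm_b += b[i] * b[i];
  }
  return dot / (std::sqrt(norm_a) * std::sqrt(norm_b));
}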
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";
auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);
std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread(test_img_path);
ultraface->detect(img_bgr, detected_boxes);
lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
cv::imwrite(save_img_path, img_bgr);
delete ultraface;
}
The output is:
⚠️ Expand All Examples for Each Topic in Lite.AI.ToolKit
3.1 Expand Examples for Object Detection.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";
auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread(test_img_path);
yolov5->detect(img_bgr, detected_boxes);
lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
cv::imwrite(save_img_path, img_bgr);
delete yolov5;
}
The output is:
Or you can use the newest 🔥🔥 YOLO-series detector YOLOX; it produces similar results.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/yolox_s.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_yolox_1.jpg";
std::string save_img_path = "../../../logs/test_lite_yolox_1.jpg";
auto *yolox = new lite::cv::detection::YoloX(onnx_path);
std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread(test_img_path);
yolox->detect(img_bgr, detected_boxes);
lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
cv::imwrite(save_img_path, img_bgr);
delete yolox;
}
The output is:
More classes for general object detection.
auto *detector = new lite::cv::detection::YoloX(onnx_path); // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path);
auto *detector = new lite::cv::detection::YoloV3(onnx_path);
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path);
auto *detector = new lite::cv::detection::SSD(onnx_path);
auto *detector = new lite::cv::detection::YoloV5(onnx_path);
auto *detector = new lite::cv::detection::YoloR(onnx_path); // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path);
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path);
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path);
auto *detector = new lite::cv::detection::EfficientDet(onnx_path);
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path);
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path);
auto *detector = new lite::cv::detection::YOLOP(onnx_path);
auto *detector = new lite::cv::detection::NanoDet(onnx_path); // Super fast and tiny!
auto *detector = new lite::cv::detection::NanoDetEfficientNetLite(onnx_path); // Super fast and tiny!
3.2 Expand Examples for Face Recognition.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";
auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);
lite::types::FaceContent face_content0, face_content1, face_content2;
cv::Mat img_bgr0 = cv::imread(test_img_path0);
cv::Mat img_bgr1 = cv::imread(test_img_path1);
cv::Mat img_bgr2 = cv::imread(test_img_path2);
glint_arcface->detect(img_bgr0, face_content0);
glint_arcface->detect(img_bgr1, face_content1);
glint_arcface->detect(img_bgr2, face_content2);
if (face_content0.flag && face_content1.flag && face_content2.flag)
{
float sim01 = lite::utils::math::cosine_similarity<float>(
face_content0.embedding, face_content1.embedding);
float sim02 = lite::utils::math::cosine_similarity<float>(
face_content0.embedding, face_content2.embedding);
std::cout << "Detected Sim01: " << sim << " Sim02: " << sim02 << std::endl;
}
delete glint_arcface;
}
The output is:
Detected Sim01: 0.721159 Sim02: -0.0626267
More classes for face recognition.
auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path); // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path); // 4.5Mb only !
3.3 Expand Examples for Segmentation.
3.3 Segmentation using DeepLabV3ResNet101. Download model from Model-Zoo2.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
std::string save_img_path = "../../../logs/test_lite_deeplabv3_resnet101.jpg";
auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads
lite::types::SegmentContent content;
cv::Mat img_bgr = cv::imread(test_img_path);
deeplabv3_resnet101->detect(img_bgr, content);
if (content.flag)
{
cv::Mat out_img;
cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
cv::imwrite(save_img_path, out_img);
if (!content.names_map.empty())
{
for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
{
std::cout << it->first << " Name: " << it->second << std::endl;
}
}
}
delete deeplabv3_resnet101;
}
The output is:
More classes for segmentation.
auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
3.4 Expand Examples for Face Attributes Analysis.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/ssrnet.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
std::string save_img_path = "../../../logs/test_lite_ssrnet.jpg";
lite::cv::face::attr::SSRNet *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);
lite::types::Age age;
cv::Mat img_bgr = cv::imread(test_img_path);
ssrnet->detect(img_bgr, age);
lite::utils::draw_age_inplace(img_bgr, age);
cv::imwrite(save_img_path, img_bgr);
std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;
delete ssrnet;
}
The output is:
More classes for face attributes analysis.
auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path);
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path); // 7 emotions
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path); // 7 emotions
3.5 Expand Examples for Image Classification.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/densenet121.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";
auto *densenet = new lite::cv::classification::DenseNet(onnx_path);
lite::types::ImageNetContent content;
cv::Mat img_bgr = cv::imread(test_img_path);
densenet->detect(img_bgr, content);
if (content.flag)
{
const unsigned int top_k = content.scores.size();
if (top_k > 0)
{
for (unsigned int i = 0; i < top_k; ++i)
std::cout << i + 1
<< ": " << content.labels.at(i)
<< ": " << content.texts.at(i)
<< ": " << content.scores.at(i)
<< std::endl;
}
}
delete densenet;
}
The output is:
More classes for image classification.
auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path);
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path);
auto *classifier = new lite::cv::classification::ResNet(onnx_path);
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);
3.6 Expand Examples for Face Detection.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";
auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);
std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread(test_img_path);
ultraface->detect(img_bgr, detected_boxes);
lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
cv::imwrite(save_img_path, img_bgr);
delete ultraface;
}
The output is:
More classes for face detection.
auto *detector = new lite::cv::face::detect::UltraFace(onnx_path); // 1.1Mb only !
auto *detector = new lite::cv::face::detect::FaceBoxes(onnx_path); // 3.8Mb only !
auto *detector = new lite::cv::face::detect::RetinaFace(onnx_path); // 1.6Mb only ! CVPR2020
3.7 Expand Examples for Colorization.
3.7 Colorization using colorization. Download model from Model-Zoo2.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
cv::Mat img_bgr = cv::imread(test_img_path);
lite::types::ColorizeContent colorize_content;
colorizer->detect(img_bgr, colorize_content);
if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
delete colorizer;
}
The output is:
3.8 Expand Examples for Head Pose Estimation.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/fsanet-var.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
std::string save_img_path = "../../../logs/test_lite_fsanet.jpg";
auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
cv::Mat img_bgr = cv::imread(test_img_path);
lite::types::EulerAngles euler_angles;
fsanet->detect(img_bgr, euler_angles);
if (euler_angles.flag)
{
lite::utils::draw_axis_inplace(img_bgr, euler_angles);
cv::imwrite(save_img_path, img_bgr);
std::cout << "yaw:" << euler_angles.yaw << " pitch:" << euler_angles.pitch << " row:" << euler_angles.roll << std::endl;
}
delete fsanet;
}
The output is:
3.9 Expand Examples for Face Alignment.
3.9 1000 Facial Landmarks Detection using FaceLandmarks1000. Download model from Model-Zoo2.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);
lite::types::Landmarks landmarks;
cv::Mat img_bgr = cv::imread(test_img_path);
face_landmarks_1000->detect(img_bgr, landmarks);
lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
cv::imwrite(save_img_path, img_bgr);
delete face_landmarks_1000;
}
The output is:
More classes for face alignment.
auto *align = new lite::cv::face::align::PFLD(onnx_path); // 106 landmarks
auto *align = new lite::cv::face::align::PFLD98(onnx_path); // 98 landmarks
auto *align = new lite::cv::face::align::PFLD68(onnx_path); // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path); // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path); // 68 landmarks
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path); // 1000 landmarks !
3.10 Expand Examples for Style Transfer.
3.10 Style Transfer using FastStyleTransfer. Download model from Model-Zoo2.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/style-candy-8.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
std::string save_img_path = "../../../logs/test_lite_fast_style_transfer_candy.jpg";
auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);
lite::types::StyleContent style_content;
cv::Mat img_bgr = cv::imread(test_img_path);
fast_style_transfer->detect(img_bgr, style_content);
if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
delete fast_style_transfer;
}
The output is:
3.11 Expand Examples for Image Matting.
3.11 Video Matting using RobustVideoMatting. Download model from Model-Zoo2.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
std::vector<lite::types::MattingContent> contents;
// 1. video matting.
rvm->detect_video(video_path, output_path, contents);
delete rvm;
}
The output is:
More details of the Default Version APIs can be found at api.default.md. For example, the interface for YoloV5 is:
lite::cv::detection::YoloV5
void detect(const cv::Mat &mat, std::vector<types::Boxf> &detected_boxes,
float score_threshold = 0.25f, float iou_threshold = 0.45f,
unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);
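All parameters after the input Mat and the output vector are optional; a hedged usage sketch overriding the defaults (the thresholds here are arbitrary illustrations, not recommended values):
std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread("test.jpg"); // placeholder image path
// keep boxes with score >= 0.5, suppress overlaps with IoU > 0.5, return at most 50 boxes.
yolov5->detect(img_bgr, detected_boxes, 0.5f, 0.5f, 50);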
Expand for ONNXRuntime, MNN, NCNN and TNN version APIs.
More details of the ONNXRuntime Version APIs can be found at api.onnxruntime.md. For example, the interface for YoloV5 is:
lite::onnxruntime::cv::detection::YoloV5
void detect(const cv::Mat &mat, std::vector<types::Boxf> &detected_boxes,
float score_threshold = 0.25f, float iou_threshold = 0.45f,
unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);
MNN version APIs (todo⚠️):
lite::mnn::cv::detection::YoloV5
lite::mnn::cv::detection::YoloV4
lite::mnn::cv::detection::YoloV3
lite::mnn::cv::detection::SSD
...
NCNN version APIs (todo⚠️):
lite::ncnn::cv::detection::YoloV5
lite::ncnn::cv::detection::YoloV4
lite::ncnn::cv::detection::YoloV3
lite::ncnn::cv::detection::SSD
...
TNN version APIs (todo⚠️):
lite::tnn::cv::detection::YoloV5
lite::tnn::cv::detection::YoloV4
lite::tnn::cv::detection::YoloV3
lite::tnn::cv::detection::SSD
...
Expand More Details for Other Docs.
- Rapid implementation of your inference using BasicOrtHandler
- Some very useful onnxruntime c++ interfaces
- How to compile a single model in this library you needed
- How to convert SubPixelCNN to ONNX and implement it with onnxruntime c++
- How to convert Colorizer to ONNX and implement it with onnxruntime c++
- How to convert SSRNet to ONNX and implement it with onnxruntime c++
- How to convert YoloV3 to ONNX and implement it with onnxruntime c++
- How to convert YoloV5 to ONNX and implement it with onnxruntime c++
5.2 Docs for third_party.
Other build documents for different engines and different targets will be added later.
Library | Target | Docs |
---|---|---|
OpenCV | mac-x86_64 | opencv-mac-x86_64-build.zh.md |
OpenCV | android-arm | opencv-static-android-arm-build.zh.md |
onnxruntime | mac-x86_64 | onnxruntime-mac-x86_64-build.zh.md |
onnxruntime | android-arm | onnxruntime-android-arm-build.zh.md |
NCNN | mac-x86_64 | todo |
MNN | mac-x86_64 | todo |
TNN | mac-x86_64 | todo |
The code of Lite.AI.ToolKit is released under the GPL-3.0 License.
Many thanks to the following projects. All of Lite.AI.ToolKit's models are sourced from these repos.
- RobustVideoMatting (🔥🔥🔥new!!↑)
- nanodet (🔥🔥🔥↑)
- YOLOX (🔥🔥🔥new!!↑)
- YOLOP (🔥🔥new!!↑)
- YOLOR (🔥🔥new!!↑)
- ScaledYOLOv4 (🔥🔥🔥↑)
- insightface (🔥🔥🔥↑)
- yolov5 (🔥🔥💥↑)
- TFace (🔥🔥↑)
- YOLOv4-pytorch (🔥🔥🔥↑)
- Ultra-Light-Fast-Generic-Face-Detector-1MB (🔥🔥🔥↑)
Expand for More References.
- headpose-fsanet-pytorch (🔥↑)
- pfld_106_face_landmarks (🔥🔥↑)
- onnx-models (🔥🔥🔥↑)
- SSR_Net_Pytorch (🔥↑)
- colorization (🔥🔥🔥↑)
- SUB_PIXEL_CNN (🔥↑)
- torchvision (🔥🔥🔥↑)
- facenet-pytorch (🔥↑)
- face.evoLVe.PyTorch (🔥🔥🔥↑)
- center-loss.pytorch (🔥🔥↑)
- sphereface_pytorch (🔥🔥↑)
- DREAM (🔥🔥↑)
- MobileFaceNet_Pytorch (🔥🔥↑)
- cavaface.pytorch (🔥🔥↑)
- CurricularFace (🔥🔥↑)
- face-emotion-recognition (🔥↑)
- face_recognition.pytorch (🔥🔥↑)
- PFLD-pytorch (🔥🔥↑)
- pytorch_face_landmark (🔥🔥↑)
- FaceLandmark1000 (🔥🔥↑)
- Pytorch_Retinaface (🔥🔥🔥↑)
- FaceBoxes (🔥🔥↑)
Cite it as follows if you use Lite.AI.ToolKit.
@misc{lite.ai.toolkit2021,
title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
url={https://github.com/DefTruth/lite.ai.toolkit},
note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
author={Yan Jun},
year={2021}
}
If there is a model you are interested in and want it to be supported by Lite.AI.ToolKit🚀🚀🌟, you can fork this repo, modify TODOLIST.md, and then submit a PR~ I will review the PR and try to support the model in the future, but I can't guarantee it. In addition, MNN, NCNN and TNN support for some models will be added in the future, but due to operator compatibility and some other reasons, it is impossible to ensure that all models supported by ONNXRuntime C++ can also run through MNN, NCNN and TNN. So, if you want to use all the models supported by this repo and don't care about a performance gap of 1~2ms, please use the ONNXRuntime version. ONNXRuntime is the default inference engine for this repo. However, if you want to build Lite.AI.ToolKit🚀🚀🌟 with MNN, NCNN or TNN support, you can follow the steps below.
- change the build.sh with -DENABLE_MNN=ON, -DENABLE_NCNN=ON or -DENABLE_TNN=ON, such as
cd build && cmake \
-DCMAKE_BUILD_TYPE=MinSizeRel \
-DINCLUDE_OPENCV=ON \ # Whether to package OpenCV into lite.ai.toolkit, default ON; otherwise, you need to setup OpenCV yourself.
-DENABLE_MNN=ON \ # Whether to build with MNN, default OFF, only some models are supported now.
-DENABLE_NCNN=OFF \ # Whether to build with NCNN, default OFF, only some models are supported now.
-DENABLE_TNN=OFF \ # Whether to build with TNN, default OFF, only some models are supported now.
.. && make -j8
- use the MNN, NCNN or TNN version interface, see demo, such as
auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
auto *nanodet = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
auto *nanodet = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);
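Putting it together, a minimal end-to-end sketch with the MNN backend, assuming lite.ai.toolkit was built with ENABLE_MNN=ON and that the MNN class mirrors the default detect interface shown above (the .mnn model path is a placeholder):
#include "lite/lite.h"
static void test_mnn_nanodet()
{
  std::string mnn_path = "../../../hub/mnn/cv/nanodet_m.mnn"; // placeholder model path
  std::string test_img_path = "../../../examples/lite/resources/test_lite_nanodet.jpg"; // placeholder
  std::string save_img_path = "../../../logs/test_lite_nanodet_mnn.jpg";
  auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  nanodet->detect(img_bgr, detected_boxes); // assumed to mirror the default-version API
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  delete nanodet;
}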