AI Deployment
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
The Tensor Algebra SuperOptimizer for Deep Learning
ncnn is a high-performance neural network inference framework optimized for the mobile platform.
Caffe: a fast open framework for deep learning.
⚡️ An easy-to-use and fast deep learning model deployment toolkit for ☁️ cloud, 📱 mobile, and 📹 edge, covering 20+ mainstream scenarios across image, video, text, and audio, with 150+ SOTA models and end-to-end…
Pretrain and fine-tune ANY AI model of ANY size on multiple GPUs and TPUs with zero code changes.