Stars
You Only Look Once for Panoptic Driving Perception (MIR 2022)
Visualize point clouds (with bounding boxes) using Plotly
Traffic light detection using deep learning with the YOLOv3 framework. PyTorch => YOLOv3
Real-time detection and classification of traffic signs using the YOLOv5s object detection algorithm. [AI PROJECT]
Using the Python library YOLOv8, this model can detect traffic lights and their state.
Use YOLOv5 for traffic sign detection
Repository for submodules containing code for MMAR 2023 "Detection-segmentation convolutional neural network for autonomous vehicle perception" paper
FlagPerf is an open-source software platform for benchmarking AI chips.
jsnoc / FlagPerf
Forked from FlagOpen/FlagPerf. FlagPerf is an open-source software platform for benchmarking AI chips.
MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge Distillation (TCSVT 2022)
Off-the-shelf PWC-Net module in PyTorch-1.0+
A Keypoint-based Global Association Network for Lane Detection. Accepted by CVPR 2022
PyTorch Lightning Optical Flow models, scripts, and pretrained weights.
ONNX and TensorRT inference demo for Unimatch
Ultra Fast Deep Lane Detection With Hybrid Anchor Driven Ordinal Classification (TPAMI 2022)
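The "hybrid anchor driven ordinal classification" idea above treats lane detection as per-row classification over horizontal grid cells rather than pixel-wise segmentation. A minimal sketch of the decoding step, assuming a single lane's logits of shape (rows, cols) — function names and shapes here are illustrative, not the paper's actual code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def expected_lane_positions(logits):
    """Turn per-row classification logits into continuous lane x-positions.

    logits: (num_rows, num_cols) scores over horizontal grid cells for one lane.
    Taking the softmax expectation over cell indices yields sub-cell
    localization from a purely classification-style head.
    """
    probs = softmax(logits, axis=1)           # (rows, cols)
    cols = np.arange(logits.shape[1])         # grid-cell indices
    return (probs * cols).sum(axis=1)         # expected x per row anchor

# Toy example: a sharp peak at column 5 on every row anchor.
logits = np.full((4, 10), -5.0)
logits[:, 5] = 5.0
pos = expected_lane_positions(logits)         # each entry is close to 5.0
```

Classifying over a coarse row/column grid is what makes this family of detectors fast: the head predicts a few hundred logits instead of a full-resolution mask.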
Official implementation for paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance", ECCV 2022
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Quantization of Convolutional Neural Networks.
Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming
TheBloke / AutoGPTQ
Forked from AutoGPTQ/AutoGPTQ. An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
4 bits quantization of LLaMA using GPTQ
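For context on what 4-bit weight quantization means: below is a sketch of the naive round-to-nearest baseline with a per-row scale and zero point. GPTQ itself goes further, using second-order (Hessian) information to compensate rounding error layer by layer; none of that is shown here, and the function names are illustrative:

```python
import numpy as np

def quantize_rtn_4bit(W):
    """Round-to-nearest 4-bit quantization, one scale/zero-point per row.

    This is the baseline that GPTQ improves on; GPTQ additionally
    compensates rounding error using second-order statistics of the
    layer inputs, which is omitted here for brevity.
    """
    qmax = 2**4 - 1                                   # 4 bits -> codes 0..15
    w_min = W.min(axis=1, keepdims=True)
    w_max = W.max(axis=1, keepdims=True)
    scale = np.maximum(w_max - w_min, 1e-8) / qmax    # per-row step size
    zero = np.round(-w_min / scale)                   # per-row zero point
    q = np.clip(np.round(W / scale + zero), 0, qmax)  # integer codes
    return q.astype(np.uint8), scale, zero

def dequantize(q, scale, zero):
    return (q.astype(np.float32) - zero) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64)).astype(np.float32)
q, s, z = quantize_rtn_4bit(W)
W_hat = dequantize(q, s, z)
err = np.abs(W - W_hat).max()   # bounded by half a quantization step per row
```

Storing `q` (4 bits) plus one `scale`/`zero` pair per row is what yields the roughly 4x memory reduction over fp16 that these repos target.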
A high-throughput and memory-efficient inference and serving engine for LLMs
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees"
[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
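AWQ's core observation is that a small fraction of weight channels, identified by activation magnitudes, matter disproportionately; scaling them up before quantization (and folding the inverse scale into the activations) protects them without keeping any weights in full precision. A hedged sketch of that scaling step — `awq_style_scale` and the fixed `alpha` are assumptions for illustration (the paper searches the scaling exponent per layer):

```python
import numpy as np

def awq_style_scale(W, act_mag, alpha=0.5):
    """Sketch of AWQ-style per-input-channel scaling before quantization.

    W:       (out, in) weight matrix
    act_mag: (in,) mean absolute activation magnitude per input channel
    Channels with larger activations get scaled up so their relative
    quantization error shrinks; the inverse scale is folded into the
    activations, keeping W @ x mathematically unchanged before rounding.
    (Real AWQ searches alpha per layer; it is fixed here for brevity.)
    """
    s = np.maximum(act_mag, 1e-8) ** alpha
    s = s / s.mean()                  # normalize scales around 1
    return W * s[None, :], s          # scaled weights, per-channel scales

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 16))
act_mag = np.abs(rng.normal(size=16)) + 0.1
W_scaled, s = awq_style_scale(W, act_mag)

x = rng.normal(size=16)
y_ref = W @ x
y_eq = W_scaled @ (x / s)             # identical up to float rounding
```

After this reparameterization, `W_scaled` is what gets quantized (e.g. with the 4-bit schemes above), while the runtime divides activations by `s` per channel.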