AS-One v2: A Modular Library for YOLO Object Detection, Segmentation, Tracking & Pose

Easy & modular computer vision detectors and trackers: run YOLOv8, v7, v6, v5, R, and X in under 20 lines of code.

👋 Hello

UPDATE: AS-One v2 is now out! We've added YOLOv9 and SAM.

AS-One is a Python wrapper that brings multiple detection and tracking algorithms together in one place. Trackers such as ByteTrack, DeepSORT, and NorFair can be combined with different versions of YOLO in a minimum of code. The wrapper provides YOLO models in ONNX, PyTorch, and CoreML flavors, and we plan to support future versions of YOLO as they are released.

This is One Library for most of your computer vision needs.

If you would like to dive deeper into YOLO object detection and tracking, check out our courses and projects.

Watch the step-by-step tutorial 🤝

💻 Install

🔥 Prerequisites
pip install asone

On a Windows machine, you will need to install from source to run the asone library. See the instructions in the 👉 Install from Source section below.

👉 Install from Source

💾 Clone the Repository

Navigate to an empty folder of your choice.

git clone https://github.com/augmentedstartups/AS-One.git

Change directory to AS-One.

cd AS-One

👉 For Linux
python3 -m venv .env
source .env/bin/activate

pip install -r requirements.txt

# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
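
After the GPU install, a quick sanity check confirms that PyTorch can see your CUDA device (a minimal sketch; the same check applies to the Windows GPU install below):

import torch

# Should print True for the CUDA build with a visible GPU,
# and False for the CPU-only install.
print(torch.cuda.is_available())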
👉 For Windows 10/11
python -m venv .env
.env\Scripts\activate
pip install numpy Cython
pip install lap
pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox

pip install asone onnxruntime-gpu==1.12.1
pip install typing_extensions==4.7.1
pip install super-gradients==3.1.3
# for CPU
pip install torch torchvision

# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
# or, pinning specific versions:
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio===0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
👉 For macOS
python3 -m venv .env
source .env/bin/activate


pip install -r requirements.txt

# for CPU
pip install torch torchvision

Quick Start 🏃‍♂️

Use the tracker on a sample video.

import asone
from asone import ASOne

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)  # set use_cuda=False for CPU
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])

for model_output in tracks:
    annotations = ASOne.draw(model_output, display=False)
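
To keep the result, the annotated frames can be written to a video file with OpenCV. This is a minimal sketch, assuming ASOne.draw returns the annotated frame as a BGR numpy array when display=False:

import cv2
import asone
from asone import ASOne

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])

writer = None
for model_output in tracks:
    frame = ASOne.draw(model_output, display=False)  # assumed to return the annotated BGR frame
    if writer is None:  # lazily create the writer once the frame size is known
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter('tracked.avi', cv2.VideoWriter_fourcc(*'XVID'), 30.0, (w, h))
    writer.write(frame)
if writer is not None:
    writer.release()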

Run in Google Colab 💻

Open In Colab

Sample Code Snippets 📃

6.1 👉 Object Detection
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, use_cuda=True) # Set use_cuda to False for cpu
vid = model.read_video('data/sample_videos/test.mp4')

for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True)
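
The same detector also works on a single image read with OpenCV. A minimal sketch, assuming ASOne.draw returns the annotated image when display=False (the sample image path is borrowed from the pose section below):

import cv2
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, use_cuda=False)  # CPU for this sketch
img = cv2.imread('data/sample_imgs/test2.jpg')
detection = model.detecter(img)
annotated = ASOne.draw(detection, img=img, display=False)  # assumed to return the annotated image
cv2.imwrite('detection_result.jpg', annotated)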

Run asone/demo_detector.py to test the detector.

# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
6.1.1 👉 Use Custom Trained Weights for Detector

Use custom weights for a detector model trained on your own data by simply providing the path to the weights file.

import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, weights='data/custom_weights/yolov7_custom.pt', use_cuda=True) # Set use_cuda to False for cpu
vid = model.read_video('data/sample_videos/license_video.mp4')

for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True, class_names=['license_plate'])
6.1.2 👉 Changing Detector Models

Change the detector by simply changing the detector flag. The flags are provided in the benchmark tables.

  • Our library now supports YOLOv5, YOLOv7, and YOLOv8 on macOS.
# Change detector
model = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)

# For macOS
# YOLOv5
model = ASOne(detector=asone.YOLOV5X_MLMODEL)
# YOLOv7
model = ASOne(detector=asone.YOLOV7_MLMODEL)
# YOLOv8
model = ASOne(detector=asone.YOLOV8L_MLMODEL)
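
If the same script has to run on both macOS and other platforms, the detector flag can be picked at runtime. A minimal sketch using the flags shown above (the CoreML/PyTorch split here is an illustrative choice, not a library requirement):

import platform
import asone
from asone import ASOne

# Use a CoreML model on macOS and a PyTorch model elsewhere
if platform.system() == 'Darwin':
    model = ASOne(detector=asone.YOLOV8L_MLMODEL)
else:
    model = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)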
6.2 👉 Object Tracking

Use the tracker on a sample video.

import asone
from asone import ASOne

# Instantiate ASOne object
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])

# Loop over tracks to retrieve the outputs of each frame
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)
    # Do anything with bboxes here

[Note] You can use custom weights for a detector model by simply providing the path to the weights file to the ASOne class, as in the sketch below.
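
A minimal sketch combining the tracker above with the weights parameter from section 6.1.1 (the weights path and video are the illustrative ones used earlier):

import asone
from asone import ASOne

model = ASOne(
    tracker=asone.BYTETRACK,
    detector=asone.YOLOV9_C,
    weights='data/custom_weights/yolov7_custom.pt',  # illustrative custom-weights path
    use_cuda=True,
)
tracks = model.video_tracker('data/sample_videos/license_video.mp4')
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)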

6.2.1 👉 Changing Detector and Tracking Models

Change the tracker by simply changing the tracker flag. The flags are provided in the benchmark tables.

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
# Change tracker
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV9_C, use_cuda=True)
# Change Detector
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOX_S_PYTORCH, use_cuda=True)

Run asone/demo_tracker.py to test the tracker.

# run on gpu
python -m asone.demo_tracker data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_tracker data/sample_videos/test.mp4 --cpu
6.3 👉 Segmentation
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, segmentor=asone.SAM, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_detecter('data/sample_videos/test.mp4', filter_classes=['car'])

for model_output in tracks:
    annotations = ASOne.draw_masks(model_output, display=True) # Draw masks
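
To keep the masked frames, they can be written to disk with OpenCV. A minimal sketch, assuming ASOne.draw_masks returns the annotated frame as a numpy array when display=False:

import cv2
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, segmentor=asone.SAM, use_cuda=True)
tracks = model.video_detecter('data/sample_videos/test.mp4', filter_classes=['car'])

for i, model_output in enumerate(tracks):
    frame = ASOne.draw_masks(model_output, display=False)  # assumed to return the annotated frame
    cv2.imwrite(f'masked_{i:04d}.jpg', frame)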
6.4 👉 Text Detection

Sample code to detect text in an image.

# Detect and recognize text
import asone
from asone import ASOne, utils
import cv2

model = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) # Set use_cuda to False for cpu
img = cv2.imread('data/sample_imgs/sample_text.jpeg')
results = model.detect_text(img)
annotations = utils.draw_text(img, results, display=True)

Use Tracker on Text

import asone
from asone import ASOne

# Instantiate Asone object
model = ASOne(tracker=asone.DEEPSORT, detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_tracker('data/sample_videos/GTA_5-Unique_License_Plate.mp4')

# Loop over track to retrieve outputs of each frame
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)

    # Do anything with bboxes here

Run asone/demo_ocr.py to test OCR.

# run on gpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4

# run on cpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4 --cpu
6.5 👉 Pose Estimation

Sample code to estimate pose in an image.

# Pose Estimation
import asone
from asone import PoseEstimator, utils
import cv2

model = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True) #set use_cuda=False to use cpu
img = cv2.imread('data/sample_imgs/test2.jpg')
kpts = model.estimate_image(img)
annotations = utils.draw_kpts(kpts, image=img, display=True)
  • Now you can use YOLOv8 and YOLOv7-w6 for pose estimation. The flags are provided in the benchmark tables.
# Pose Estimation on video
import asone
from asone import PoseEstimator, utils

model = PoseEstimator(estimator_flag=asone.YOLOV7_W6_POSE, use_cuda=True) #set use_cuda=False to use cpu
estimator = model.video_estimator('data/sample_videos/football1.mp4')
for model_output in estimator:
    annotations = utils.draw_kpts(model_output)
    # Do anything with kpts here

Run asone/demo_pose_estimator.py to test pose estimation.

# run on gpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4

# run on cpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4 --cpu

To set up ASOne using Docker, follow the instructions given in the docker setup 🐳.

ToDo 📝

  • First Release
  • Import trained models
  • Simplify code even further
  • Updated for YOLOv8
  • OCR and Counting
  • OCSORT, StrongSORT, MoTPy
  • M1/2 Apple Silicon Compatibility
  • Pose Estimation YOLOv7/v8
  • YOLO-NAS
  • Updated for YOLOv8.1
  • YOLOV9
  • SAM Integration
Offered By 💼: AugmentedStartups
Maintained By 👨‍💻: AxcelerateAI
