Stars
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
This is the official code for the MobileSAM project, which makes SAM lightweight for mobile applications and beyond!
Segment Anything in High Quality [NeurIPS 2023]
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2
Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients"
PyTorch implementation of diffusion models on Lie groups for 6D grasp pose generation. https://sites.google.com/view/se3dif/home
sagieppel/fine-tune-train_segment_anything_2_in_60_lines_of_code
Forked from facebookresearch/sam2. The repository provides code for training/fine-tuning the Meta Segment Anything Model 2 (SAM 2).
Distributed Robot Interaction Dataset.
Implementation of the paper SA_Net for point cloud completion.
Sim-Grasp offers a simulation framework to generate synthetic data and train models for robotic two-finger grasping in cluttered environments.
M.Sc. Thesis - 3D_DEN: Open-ended 3D Object Recognition using Dynamically Expandable Networks
An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms utilized include the Segment Anything Model (SAM) for automatic/interactive key-frame segmentation and DeAOT for efficient multi-object tracking and propagation.