Stars
[CoRL 2024] DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-scale Synthetic Cluttered Scenes
A summary of key papers and blog posts for learning about diffusion models, plus a detailed list of published diffusion-based robotics papers.
A generative world for general-purpose robotics & embodied AI learning.
Pytorch implementation of diffusion models on Lie Groups for 6D grasp pose generation https://sites.google.com/view/se3dif/home
PyTorch implementation of Pointnet2/Pointnet++
WiLoR: End-to-end 3D hand localization and reconstruction in-the-wild
The official implementation of "Segment Anything with Multiple Modalities".
VisionOS App + Python Library to stream head / wrist / finger tracking data from Vision Pro to any robots.
Official code for "UniDexGrasp: Universal Robotic Dexterous Grasping via Learning Diverse Proposal Generation and Goal-Conditioned Policy" (CVPR 2023)
Official Code for Dexterous Grasp Transformer (CVPR 2024)
ROS package to control the SimLab Allegro Hand
Official repository of "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory"
Code & Models for 3DETR - an End-to-end transformer model for 3D object detection
Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation
Pytorch implementation of Contact-GraspNet
SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation
Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts
Sim-Grasp offers a simulation framework to generate synthetic data and train models for two-finger robotic grasping in cluttered environments.
RPEFlow: Multimodal Fusion of RGB-PointCloud-Event for Joint Optical Flow and Scene Flow Estimation (ICCV 2023)
M2T2: Multi-Task Masked Transformer for Object-centric Pick and Place
A list of projects for 3D reconstruction
From One Hand to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation, RA-L IROS 2022
A collection of ECCV 2024 papers and open-source projects; everyone is welcome to submit issues sharing ECCV 2024 papers and open-source projects.
[CVPR 2024] Official implement of <Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation>
sagieppel / fine-tune-train_segment_anything_2_in_60_lines_of_code
Forked from facebookresearch/sam2. The repository provides code for training/fine-tuning the Meta Segment Anything Model 2 (SAM 2).
shreyashampali / ho3d
Forked from lmb-freiburg/freihand. A dataset for hand pose estimation during hand-object interaction with severe occlusions.
Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2