The University of Hong Kong
Hong Kong (UTC +08:00)
tianxingchen.github.io
https://orcid.org/0009-0000-7672-6797
@MarioChan2002
https://scholar.google.com/citations?user=pvS8MH8AAAAJ&hl=en
Stars
[CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation
Official repo for "Text2World: Benchmarking Large Language Models for Symbolic World Model Generation"
DeepTimber Community Embodied AI Job Board | A list of Embodied AI / Robotics jobs (PhD, RA, intern, full-time, etc.)
Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Google Robot, WidowX+Bridge) (CoRL 2024)
A collection of high-quality models for the MuJoCo physics engine, curated by Google DeepMind.
Re-implementation of pi0 vision-language-action (VLA) model from Physical Intelligence
It's not a list of papers, but a list of paper reading lists...
A generative world for general-purpose robotics & embodied AI learning.
Where2Act: From Pixels to Actions for Articulated 3D Objects
[CVPR 25] G3Flow: Generative 3D Semantic Flow for Pose-aware and Generalizable Object Manipulation
The reinforcement learning training code for AgiBot X1.
A curated list of awesome open-source grasping libraries and resources
Articulated Object Manipulation using Online Axis Estimation with SAM2-Based Tracking
[CVPR 25] RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins
Official Repo for ManiSkill-ViTac Challenge 2024
Demo-Driven Mobile Bi-Manual Manipulation Benchmark.
[ICLR 2024] DiffTactile: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation
[ICRA 2023] NIFT: Neural Interaction Field and Template for Object Manipulation
Repository providing code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
RialTo Policy Learning Pipeline
Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories.
Theia: Distilling Diverse Vision Foundation Models for Robot Learning