Stars
2024 recommendations for VPN and circumvention software in China, with pitfalls to avoid; stable and easy to use. Compares SSR airports, Lantern, V2ray, LaoWang VPN, self-built VPS proxies, and other circumvention tools, with the latest VPN download recommendations for accessing ChatGPT from China.
2D graphics library optimized for Cortex-M processors
Materials for the Learn PyTorch for Deep Learning: Zero to Mastery course.
An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN).
Linear algebra foundation for the Rust programming language
[SIGGRAPH Asia'24 & TOG] Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes
[CVPR 2024] Official PyTorch implementation of SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering
OKVIS: Open Keyframe-based Visual-Inertial SLAM.
BlendedMVS: A Large-scale Dataset for Generalized Multi-view Stereo Networks
[CVPR 2024 Highlight] FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects
✔ (Completed) Comprehensive deep learning notes covering Tudui's PyTorch tutorials, Mu Li's Dive into Deep Learning, and Andrew Ng's Deep Learning course.
This project ports the original MXNet implementations in Dive into Deep Learning to TensorFlow 2.0; it has been endorsed by Mu Li.
[3DV 2024 Oral] DeDoDe 🎶 Detect, Don't Describe --- Describe, Don't Detect, for Local Feature Matching
An introductory PyTorch tutorial; read online at https://datawhalechina.github.io/thorough-pytorch/
A general Simultaneous Localization and Mapping framework which supports feature-based or direct methods and different sensors, including monocular cameras, RGB-D sensors or any other input types can …
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
The English version of 14 lectures on visual SLAM.
Official repository for CVPR 2021 paper "Differentiable Diffusion for Dense Depth Estimation from Multi-view Images"
This code contains an algorithm to compute monocular visual odometry by using both point and line segment features, based on the open source version of SVO.
Source code for the ECCV 2022 paper "Benchmarking Localization and Mapping for Augmented Reality".
PyTorch code for ICRA'22 paper: "Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation"
Related papers and code for vision-based robotic grasping