- The University of Melbourne
- https://scholar.google.com.au/citations?user=19yx_o4AAAAJ&hl=en
Stars
Super-Efficient RLHF Training of LLMs with Parameter Reallocation
[NeurIPS 2024 Best Paper] [GPT beats diffusion 🔥] [scaling laws in visual generation 📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ult…
Large Language Models for Supply Chain Optimization
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
Shortcuts Everywhere and Nowhere: Exploring Multi-Trigger Backdoor Attacks
[SIGGRAPH'24] 2D Gaussian Splatting for Geometrically Accurate Radiance Fields
Official implementation of "Neuralangelo: High-Fidelity Neural Surface Reconstruction" (CVPR 2023)
A Unified Framework for Surface Reconstruction
This is the official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams"
A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 and reasoning techniques.
A 3DGS framework for omni urban scene reconstruction and simulation.
[ECCV 2022] Tensorial Radiance Fields, a novel approach to model and reconstruct radiance fields
Instant neural graphics primitives: lightning fast NeRF and more
A Code Release for Mip-NeRF 360, Ref-NeRF, and RawNeRF
[ECCV'24 Oral] MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images
Official code for "EagerMOT: 3D Multi-Object Tracking via Sensor Fusion" [ICRA 2021]
A collaboration friendly studio for NeRFs
A unified framework for 3D content generation.
This repository is used to collect NeRF papers on autonomous driving
[ICLR 2023] S-NeRF: Neural Radiance Fields for Street Views
[ECCV'2024] Gaussian Grouping for open-world Anything reconstruction, segmentation and editing.
Periodic Vibration Gaussian: Dynamic Urban Scene Reconstruction and Real-time Rendering
[EMNLP 2024 Demo] TinyAgent: Function Calling at the Edge!
[NeurIPS 2024] Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation
[ICLR 2024] Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting