Chinese University of Hong Kong, Hong Kong
Stars
SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model @ ICCV 2023 **AND** SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-tr…
[ECCV2022] Factorizing Knowledge in Neural Networks
Mutual Information Neural Estimation in Pytorch
[NeurIPS 2024] WATT: Weight Average Test-Time Adaptation of CLIP
Demonstrations of Loss of Plasticity and Implementation of Continual Backpropagation
BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks
Official code release for Denoising Vision Transformers.
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models
Surgical Visual Question Answering: a transformer-based surgical VQA model. Official implementation of "Surgical-VQA: Visual Question Answering in Surgical Scenes using Transformers", MICCAI 2022.
CVPR 2024 Paper: Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer
[ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models
[ICCV 2023] CLIP-Driven Universal Model; Rank first in MSD Competition.
[MICCAI 2024] Embracing Massive Medical Data
Toward Universal Medical Image Registration via Sharpness-Aware Meta-Continual Learning (MICCAI 2024)
[CVPR 2024] Interactive continual learning: Fast and slow thinking
The official implementation of the CVPR'2024 work Interference-Free Low-Rank Adaptation for Continual Learning
Code for paper "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters" CVPR2024
Accepted in CVPR 2023
[CVPR2024 Oral && Best Paper Candidate] CorrMLP: Correlation-aware MLP-based networks for deformable medical image registration
[CVPR 2024 Oral] MemSAM: Taming Segment Anything Model for Echocardiography Video Segmentation.
The code repository for "Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning" (CVPR 2024) in PyTorch.
Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality (NeurIPS 2023, Spotlight)
PyTorch implementation of our CVPR2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Continual Forgetting for Pre-trained Vision Models (CVPR 2024)
Class-Incremental Learning: A Survey (TPAMI 2024)