Stars
PyTorch reimplementation of "Gradient Surgery for Multi-Task Learning"
Existing literature on machine unlearning
LLM-Merging: Building LLMs Efficiently through Merging
[NeurIPS2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging
WMDP is an LLM proxy benchmark for hazardous knowledge in biosecurity, cybersecurity, and chemical security. We also release code for RMU, an unlearning method which reduces LLM performance on WMDP while retaining …
[ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization
Code for NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs"
"Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" by Chongyu Fan*, Jiancheng Liu*, Licong Lin*, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu
RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024
[ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models"
Function Vectors in Large Language Models (ICLR 2024)
A resource repository for machine unlearning in large language models
Code for paper: DivideMix: Learning with Noisy Labels as Semi-supervised Learning
[NeurIPS 2019] Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
Fully annotated in Chinese. (The RetinaNet loss function implemented in PyTorch; you can use it for one-stage detection or classification tasks to counter data imbalance.) For one-stage object detection algorithms, to improve detection performance; you can also use this loss fun…
[ICLR 2023] The official code for our ICLR 2023 (top25%) paper: "Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning"
[ICML 2024] SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning
Official PyTorch implementation of ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models
AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights (ICLR 2021)
[NeurIPS 2022] The official code for our NeurIPS 2022 paper "Inducing Neural Collapse in Imbalanced Learning: Do We Really Need a Learnable Classifier at the End of Deep Neural Network?".
Implementation of Batch Normalization in Numpy and Reproduction of results on MNIST
This repository is the official PyTorch implementation of "Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition" (NeurIPS 2022).
Official code for "Complementary Experts for Long-tailed Semi-Supervised Learning" (AAAI'2024)
Code for "CDMAD: Class-Distribution-Mismatch-Aware Debiasing for Class-Imbalanced Semi-Supervised Learning" (CVPR 2024)
😎 An up-to-date & curated list of awesome semi-supervised learning papers, methods & resources.