Chung-Ang University @ IIPL
Seoul, South Korea (UTC+09:00)
https://herbwood.tistory.com/
Stars
[ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"
Code for the paper "Data Attribution for Text-to-Image Models by Unlearning Synthesized Images."
LangChain, LangGraph Open Tutorial for everyone!
State-of-the-art Parameter-Efficient MoE Fine-tuning Method
Open Source Deep Research Alternative to Reason and Search on Private Data. Written in Python.
Low-rank adaptation (LoRA) for the Segment Anything Model (SAM)
Learning without Forgetting for Vision-Language Models (TPAMI 2025)
This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024)
[NAACL 2025 Main] Official Implementation of MLLMU-Bench
A resource repository for machine unlearning in large language models
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
📖 A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs).
The official implementation of "Segment Anything with Multiple Modalities".
[ECCV 2024 Oral] The official implementation of "CAT-SAM: Conditional Tuning for Few-Shot Adaptation of Segment Anything Model".
The official code of the paper "A Closer Look at Machine Unlearning for Large Language Models".
Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
Segment Your Ring (SYR) - Segment Anything model adapted with LoRA to segment rings.
[CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning"
Papers about training data quality management for ML models.
A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.).
[EMNLP 2024] "Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective"
[NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models
PyTorch implementation of "Test-time Adaption against Multi-modal Reliability Bias".
[CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding
[CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"