![awesome logo](https://raw.githubusercontent.com/github/explore/80688e429a7d4ef2fca1e82350fe8e3517d3494d/topics/awesome/awesome.png)
# Starred repositories
'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024)
A generative speech model for daily dialogue.
LLM algorithm-role interview questions (with answers): common questions and concept explanations. Keywords: "LLM interview questions", "algorithm-role interviews", "common interview questions", "LLM algorithm interviews", "LLM application fundamentals"
[CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models"
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & V…
[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
This repository contains the source code for the paper First Order Motion Model for Image Animation
Lifespan Age Transformation Synthesis code
《开源大模型食用指南》("A Beginner's Guide to Open-Source LLMs"): a tutorial tailored for Chinese users on quickly fine-tuning (full-parameter / LoRA) and deploying open-source large language models (LLMs) and multimodal large models (MLLMs), domestic and international, in a Linux environment
Llama3 / Llama3.1 Chinese-language repository (companion book in progress...): interesting fine-tuned and modified weights from the community and vendors, plus tutorial videos & docs on training, inference, evaluation, and deployment
Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models.
[ECCV 2022] Implementation of the paper "Locality Guidance for Improving Vision Transformers on Tiny Datasets"
[CVPR 2024 Highlight] Logit Standardization in Knowledge Distillation
Overview and tutorial of the LangChain Library
😎 Awesome list of tools and projects with the awesome LangChain framework
PyTorch code and checkpoints release for OFA-KD: https://arxiv.org/abs/2310.19444
PyTorch reimplementation of the Vision Transformer ("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale")
The official implementation of [CVPR2022] Decoupled Knowledge Distillation https://arxiv.org/abs/2203.08679 and [ICCV2023] DOT: A Distillation-Oriented Trainer https://openaccess.thecvf.com/content…
CVPR 2023, Class Attention Transfer Based Knowledge Distillation
This is a curated list of "Embodied AI or robot with Large Language Models" research. Watch this repository for the latest updates! 🔥
Official code for "Semi-supervised Pathological Image Segmentation via Cross Distillation of Multiple Attentions" (MICCAI 2023)
Semi-supervised learning for medical image segmentation: a collection of literature reviews and code implementations.
Official implementation of "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI 2021)
Improving Convolutional Networks via Attention Transfer (ICLR 2017)