Starred repositories
⚡ Flash Diffusion ⚡: Accelerating Any Conditional Diffusion Model for Few Steps Image Generation (AAAI 2025)
Official code for the paper: InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews (previously: Do Role-Playing Chatbots Capture the Character Persona…
Awesome-llm-role-playing-with-persona: a curated list of resources for large language models for role-playing with assigned personas
Chat凉宫春日 (Chat Haruhi Suzumiya): an open-source role-playing chatbot, by Cheng Li, Ziang Leng, and others.
[NAACL Findings 2024] PersonaLLM: Investigating the Ability of Large Language Models to Express Personality Traits
[EMNLP-2023] Official Codes for “Can ChatGPT Assess Human Personalities? A General Evaluation Framework”
Code and Data for the paper "Evaluating Character Understanding of Large Language Models via Character Profiling from Fictional Works".
An open source implementation of Mamba 2 in one file of pytorch
LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment
PyTorch implementation of multi-task learning architectures, incl. MTI-Net (ECCV2020).
The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning
A PyTorch Library for Multi-Task Learning
A list of papers, codes and applications on multi-task learning.
[CVPR 2024] Official Repository for "Efficient Test-Time Adaptation of Vision-Language Models"
The open source implementation of Gemini, the model that will "eclipse ChatGPT" by Google
USTCKAY / pytorch_scatter — forked from rusty1s/pytorch_scatter: PyTorch Extension Library of Optimized Scatter Operations
OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models
Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This repository contains the code for the experiments in the paper.
MU-LLaMA: Music Understanding Large Language Model
Implementation of losses for class-imbalanced data, including focal_loss, dice_loss, DSC loss, GHM loss, etc.
Implementation of Google's USM speech model in Pytorch
Quantized Attention that achieves speedups of 2.1-3.1x and 2.7-5.1x compared to FlashAttention2 and xformers, respectively, without losing end-to-end metrics across various models.
Implementation of the convolutional module from the Conformer paper, for use in Transformers
[Unofficial] PyTorch implementation of "Conformer: Convolution-augmented Transformer for Speech Recognition" (INTERSPEECH 2020)
DIAMOND (DIffusion As a Model Of eNvironment Dreams) is a reinforcement learning agent trained in a diffusion world model. NeurIPS 2024 Spotlight.
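One of the entries above collects losses for class-imbalanced data; as a minimal illustration (a sketch of the standard formulation, not that repository's actual code), binary focal loss down-weights easy examples via a `(1 - p_t)^gamma` modulating factor:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one predicted probability p and label y in {0, 1}.

    The (1 - p_t)^gamma factor shrinks the loss for examples the model
    already classifies confidently, focusing training on hard examples.
    alpha is the usual class-balancing weight for the positive class.
    """
    p_t = p if y == 1 else 1.0 - p          # probability assigned to the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight for this label
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes far less loss than an uncertain one.
easy = focal_loss(0.95, 1)
hard = focal_loss(0.55, 1)
```

With `gamma=0` and `alpha=0.5` this reduces to (half of) ordinary binary cross-entropy; increasing `gamma` sharpens the focus on misclassified examples.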