Stars
Multilingual large-scale voice generation model, providing full-stack capabilities for inference, training, and deployment.
SGLang is a fast serving framework for large language models and vision language models.
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT)
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
An Open-source Framework for Data-centric, Self-evolving Autonomous Language Agents
A high-throughput and memory-efficient inference and serving engine for LLMs
Agent framework and applications built upon Qwen>=2.0, featuring Function Calling, Code Interpreter, RAG, and Chrome extension.
Qwen2.5 is the large language model series developed by Qwen team, Alibaba Cloud.
Chinese LLM capability evaluation leaderboard: currently covers 164 large models, including commercial models such as ChatGPT, GPT-4o, Google Gemini, Claude 3.5, Baidu ERNIE Bot, Qwen, Baichuan, iFLYTEK Spark, SenseTime SenseChat, and MiniMax, as well as open-source models such as DeepSeek-V3, Qwen2.5, Llama 3.3, Phi-4, GLM-4, and InternLM2.5. Provides not only capability score rankings but also the raw outputs of every model.
A curated collection of open-source Chinese large language models, focusing on smaller-scale models that can be privately deployed and trained at relatively low cost, covering base models, vertical-domain fine-tuning and applications, datasets, and tutorials.
Retrieval and Retrieval-augmented LLMs
Source code of "Reasons to Reject? Aligning Language Models with Judgments"
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.
Secrets of RLHF in Large Language Models Part I: PPO
Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation.
[EMNLP'24] CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models
Large Language Model Text Generation Inference
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
Luotuo (骆驼): Open-sourced Chinese Language Models. Developed by 陈启源 @ Central China Normal University & 李鲁鲁 @ SenseTime & 冷子昂 @ SenseTime
DSAC-v2; DSAC-T; DSAC; Distributional Soft Actor-Critic
A large-scale, fine-grained, diverse preference dataset (and models).
MedicalGPT: Training Your Own Medical GPT Model with the ChatGPT Training Pipeline. Trains medical LLMs, implementing continued pre-training (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)