Stars
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Langflow is a low-code app builder for RAG and multi-agent AI applications. It’s Python-based and agnostic to any model, API, or database.
🌟 The multi-agent framework: the first AI software company, working towards natural-language programming.
One minute of voice data is enough to train a good TTS model (few-shot voice cloning)!
Making large AI models cheaper, faster and more accessible
A high-throughput and memory-efficient inference and serving engine for LLMs
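Much of vLLM's throughput comes from PagedAttention, which stores each sequence's KV cache in fixed-size blocks drawn from a shared pool rather than one contiguous, max-length buffer. A minimal sketch of that allocation idea (class names and block size are illustrative, not vLLM's actual API):

```python
class BlockAllocator:
    """Toy paged KV-cache allocator: sequences grow block by block,
    so no memory is reserved up front for a sequence's maximum length."""

    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free = list(range(num_blocks))   # pool of free physical blocks
        self.tables = {}                      # seq_id -> list of block ids

    def append_token(self, seq_id, pos):
        """Map logical token position `pos` to a physical block,
        grabbing a fresh block when a sequence crosses a block boundary."""
        table = self.tables.setdefault(seq_id, [])
        if pos % self.block_size == 0:        # first slot of a new block
            table.append(self.free.pop())
        return table[pos // self.block_size]

alloc = BlockAllocator(num_blocks=8, block_size=4)
for pos in range(6):                          # a sequence of 6 tokens
    alloc.append_token("seq0", pos)
# 6 tokens occupy ceil(6/4) = 2 blocks; the other 6 blocks stay free
```

Because blocks are only claimed as sequences grow, many requests can share the pool, which is where the memory efficiency (and thus batch size and throughput) comes from.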
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
A generative speech model for daily dialogue.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Finetune Llama 3.3, DeepSeek-R1 & Reasoning LLMs 2x faster with 70% less memory
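Savings like "70% less memory" mostly come from quantizing the frozen base weights to 4 bits while training only small adapters. A back-of-the-envelope estimate of the weight-memory part (the formula and numbers are an illustration, not Unsloth's internals):

```python
def weight_memory_gb(params_billions, bits_per_weight):
    """Rough memory for storing the model weights alone
    (ignores activations, optimizer state, and small LoRA adapters)."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

fp16 = weight_memory_gb(8, 16)   # an 8B model in 16-bit: 16 GB
int4 = weight_memory_gb(8, 4)    # the same model in 4-bit: 4 GB
saving = 1 - int4 / fp16         # 75% less weight memory
```

Real end-to-end savings differ because activations and optimizer state are not quantized the same way, but the weight term dominates for large models.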
Create customized software from a natural-language idea (through LLM-powered multi-agent collaboration).
A high-quality, one-stop open-source data-extraction tool that converts PDFs into Markdown and JSON.
The official Python library for the OpenAI API
A modular graph-based Retrieval-Augmented Generation (RAG) system
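Graph-based RAG retrieves not only the chunks that match a query but also their neighbours in a knowledge graph, so related entities reach the prompt even when they don't match the query text. A toy sketch of that expansion step (the graph, entity names, and matching are illustrative, not GraphRAG's actual pipeline):

```python
def graph_expand(seeds, edges, hops=1):
    """Expand a set of matched entities along undirected
    knowledge-graph edges, up to `hops` steps away."""
    found = set(seeds)
    frontier = set(seeds)
    for _ in range(hops):
        nxt = {b for a, b in edges if a in frontier} | \
              {a for a, b in edges if b in frontier}
        frontier = nxt - found
        found |= nxt
    return found

# Hypothetical graph: the query matched "query_entity" directly.
edges = [("query_entity", "paper"), ("paper", "author")]
ctx = graph_expand({"query_entity"}, edges, hops=2)
# two hops also pull in "paper" and "author" as retrieval context
```

The real system additionally builds community summaries over the graph; this sketch shows only the neighbourhood-expansion idea.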
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
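Under the hood, such retrieval plugins embed documents and queries as vectors and rank documents by similarity. A minimal cosine-similarity sketch with hand-made toy vectors (a real deployment would call an embedding model instead of using 3-d vectors):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-d "embeddings" standing in for real embedding-model output.
docs = {
    "vacation policy": [0.9, 0.1, 0.0],
    "quarterly report": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]   # e.g. "how many days off do I get?"
best = max(docs, key=lambda name: cosine(query, docs[name]))
```

Ranking every document by `cosine` and returning the top hits is the core of semantic search; vector databases just make that ranking fast at scale.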
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Use ChatGPT to summarize arXiv papers. Accelerates the full research workflow with ChatGPT: full-text paper summarization, professional translation, polishing, reviewing, and drafting review responses.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
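Parameter-efficient methods such as LoRA freeze the base weights and train only small low-rank factors. The parameter arithmetic behind the savings (a sketch of the idea, not PEFT's API):

```python
def lora_trainable_params(d, k, r):
    """A frozen d-by-k weight gets a trainable low-rank update B @ A,
    with A of shape (r, k) and B of shape (d, r): (d + k) * r params."""
    return (d + k) * r

full = 4096 * 4096                          # full fine-tuning of one layer
lora = lora_trainable_params(4096, 4096, r=8)
frac = lora / full                          # well under 1% of the layer
```

With rank `r = 8` on a 4096-by-4096 layer, only about 0.4% of the layer's parameters are trainable, which is why optimizer state and gradients shrink so dramatically.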
ChatGLM3 series: open-source bilingual chat LLMs.
RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). The current generation is RWKV-7 "Goose". It combines the best of RNNs and transformers.
Train transformer language models with reinforcement learning.
The official GitHub page for the survey paper "A Survey of Large Language Models".
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.