AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Making large AI models cheaper, faster and more accessible
The simplest, fastest repository for training/finetuning medium-sized GPTs.
LlamaIndex is the leading framework for building LLM-powered agents over your data.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
A high-throughput and memory-efficient inference and serving engine for LLMs
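A minimal offline-generation sketch in the spirit of vLLM's quickstart; the model id and sampling values below are illustrative assumptions, not recommendations.

    from vllm import LLM, SamplingParams

    # Load any Hugging Face causal LM; the id here is just an example.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=64)
    outputs = llm.generate(["Summarize paged attention in one sentence."], params)
    print(outputs[0].outputs[0].text)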
A toolkit for developing and comparing reinforcement learning algorithms.
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
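As a rough illustration of the core distributed runtime, a minimal Ray sketch that fans out remote tasks and gathers the results:

    import ray

    ray.init()  # starts a local Ray runtime

    @ray.remote
    def square(x):
        return x * x

    # Schedule four tasks in parallel and collect their results.
    futures = [square.remote(i) for i in range(4)]
    print(ray.get(futures))  # [0, 1, 4, 9]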
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
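A small sketch of the usual timm workflow, assuming timm and torch are installed: instantiate a pretrained backbone by name and run a forward pass (the architecture name is just one of many available).

    import timm
    import torch

    model = timm.create_model("resnet50", pretrained=True)
    model.eval()

    x = torch.randn(1, 3, 224, 224)      # dummy image batch
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)                  # torch.Size([1, 1000])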
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
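A minimal sketch of those composable transformations, combining grad, vmap, and jit on a toy loss (the loss function itself is made up for illustration):

    import jax
    import jax.numpy as jnp

    def loss(w, x):
        return jnp.sum((w * x - 1.0) ** 2)

    grad_loss = jax.grad(loss)                  # differentiate w.r.t. w
    batched = jax.vmap(grad_loss, (None, 0))    # vectorize over a batch of x
    fast = jax.jit(batched)                     # JIT-compile for CPU/GPU/TPU

    w = jnp.ones(3)
    xs = jnp.arange(12.0).reshape(4, 3)
    print(fast(w, xs).shape)                    # (4, 3)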
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Pretrain and finetune ANY AI model of ANY size on multiple GPUs and TPUs with zero code changes.
DSPy: The framework for programming—not prompting—language models
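A hedged sketch of the programming-rather-than-prompting idea: declare a signature and let DSPy handle the prompting. The model id is an assumption and needs the corresponding API credentials.

    import dspy

    # Configure a backing LM (illustrative model id; requires credentials).
    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

    # Declare what the module should do, not how to prompt for it.
    qa = dspy.Predict("question -> answer")
    print(qa(question="What does DSPy optimize?").answer)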
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
Universal LLM Deployment Engine with ML Compilation
Use ChatGPT to summarize arXiv papers and accelerate the whole research workflow: full-paper summarization, professional translation, polishing, peer review, and review responses.
Fast and memory-efficient exact attention
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
Python package built to ease deep learning on graphs, on top of existing DL frameworks.
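A tiny sketch of building a graph with DGL on top of PyTorch; the edge list and feature sizes are arbitrary examples.

    import dgl
    import torch

    # Build a small directed graph from source/destination node id tensors.
    g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])))
    g.ndata["feat"] = torch.randn(4, 8)    # attach node features
    print(g.num_nodes(), g.num_edges())    # 4 3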
A computer algebra system written in pure Python
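For flavor, a tiny SymPy session doing symbolic differentiation and equation solving:

    import sympy as sp

    x = sp.symbols("x")
    expr = sp.sin(x) * sp.exp(x)

    print(sp.diff(expr, x))        # exp(x)*sin(x) + exp(x)*cos(x)
    print(sp.solve(x**2 - 2, x))   # [-sqrt(2), sqrt(2)]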
RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RN…
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal models, and Speech AI (Automatic Speech Recognition and Text-to-Speech)