University of Oxford
Stars
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
The official gpt4free repository | a collection of powerful language models, including o3, DeepSeek R1, and GPT-4.5
Platform to experiment with the AI Software Engineer. Terminal based. NOTE: Very different from https://gptengineer.app
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
A Gradio web UI for Large Language Models with support for multiple inference backends.
ChatGLM-6B: An Open Bilingual Dialogue Language Model
Making large AI models cheaper, faster and more accessible
A high-throughput and memory-efficient inference and serving engine for LLMs
The simplest, fastest repository for training/finetuning medium-sized GPTs.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
OpenAssistant is a chat-based assistant that understands tasks, interacts with third-party systems, and retrieves information dynamically.
Finetune Llama 3.3, DeepSeek-R1 & Reasoning LLMs 2x faster with 70% less memory! 🦥
Instant voice cloning by MIT and MyShell; an audio foundation model.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
DSPy: The framework for programming—not prompting—language models
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Chat with your documents on your local device using GPT models. No data leaves your device; it is 100% private.
Universal LLM Deployment Engine with ML Compilation