University of Oxford
Stars
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
The official gpt4free repository | a collection of various powerful language models
Platform to experiment with the AI Software Engineer. Terminal based. NOTE: Very different from https://gptengineer.app
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
A Gradio web UI for Large Language Models with support for multiple inference backends.
ChatGLM-6B: An Open Bilingual Dialogue Language Model
Making large AI models cheaper, faster and more accessible
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
OpenAssistant is a chat-based assistant that understands tasks, interacts with third-party systems, and dynamically retrieves information to do so.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
A high-throughput and memory-efficient inference and serving engine for LLMs
Instant voice cloning by MIT and MyShell. Audio foundation model.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Create customized software from a natural-language idea (through LLM-powered multi-agent collaboration)
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
DSPy: The framework for programming—not prompting—language models
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Finetune Llama 3.3, Mistral, Phi-4, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Chat with your documents on your local device using GPT models. No data leaves your device; 100% private.
Universal LLM Deployment Engine with ML Compilation