Stars
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
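A minimal sketch of what "tensors with strong GPU acceleration" and dynamic autograd look like in practice (standard PyTorch; falls back to CPU when no GPU is present):

```python
import torch

# Pick a device: the same tensor ops run on GPU if available, otherwise CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(1024, 1024, device=device)
w = torch.randn(1024, 1024, device=device, requires_grad=True)

# The computation graph is built dynamically as the ops execute.
y = (x @ w).relu().sum()
y.backward()  # gradients flow back through the graph just constructed

print(device, w.grad.shape)
```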
One minute of voice data can also be used to train a good TTS model! (few-shot voice cloning)
A generative speech model for daily dialogue.
Instant voice cloning by MIT and MyShell. Audio foundation model.
Pretrain and finetune ANY AI model of ANY size on multiple GPUs or TPUs with zero code changes.
Universal LLM Deployment Engine with ML Compilation
Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Qlib is an AI-oriented quantitative investment platform that aims to realize the potential, empower research, and create value using AI technologies in quantitative investment, from exploring ideas…
Chat with your database or your datalake (SQL, CSV, parquet). PandasAI makes data analysis conversational using LLMs and RAG.
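A minimal sketch of the conversational workflow, assuming the SmartDataframe interface from PandasAI's 2.x releases (newer versions may expose a different entry point, and the API key below is a placeholder):

```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI  # assumption: 2.x-era LLM wrapper

# Placeholder credentials; any supported LLM backend can be plugged in.
llm = OpenAI(api_token="YOUR_API_KEY")

df = pd.DataFrame({"country": ["US", "UK", "JP"], "revenue": [5000, 3200, 4100]})
sdf = SmartDataframe(df, config={"llm": llm})

# Natural-language questions are translated into pandas operations by the LLM.
print(sdf.chat("Which country has the highest revenue?"))
```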
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine
[EMNLP'23, ACL'24] To speed up LLM inference and enhance the model's perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
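A minimal sketch of how such prompt compression is typically invoked, assuming LLMLingua's PromptCompressor interface (default model choice and return fields may vary by version):

```python
from llmlingua import PromptCompressor

# Loads the default compression model on first use.
compressor = PromptCompressor()

long_context = "..."  # a long retrieved document or chat history

result = compressor.compress_prompt(
    long_context,
    instruction="Answer the question based on the context.",
    question="What is the main finding?",
    target_token=300,  # budget for the compressed prompt
)

# The compressed prompt is then sent to the downstream LLM in place of the original.
print(result["compressed_prompt"])
```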
This repo provides a VITS finetuning pipeline for fast speaker-adaptation TTS and many-to-many voice conversion.
Find your trading edge, using the fastest engine for backtesting, algorithmic trading, and research.
Chinese-LLaMA 1 & 2 and Chinese-Falcon base models; ChatFlow Chinese dialogue model; Chinese OpenLLaMA model; NLP pretraining/instruction-finetuning datasets
VideoSys: An easy and efficient system for video generation
PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation
xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism
RAG for local LLMs: chat with PDF/doc/txt files (ChatPDF). A purely native RAG implementation built on a local LLM, embedding model, and reranker model, with no third-party agent libraries required.
Principle walkthroughs with hands-on code: recommendation algorithms can be simple 🔥 If you want to learn recommendation algorithms systematically, feel free to Star or Fork this repo for your own study 🚀 Questions are welcome via Issues, or reach out using the contact info at the end!
A pipeline-parallel training script for diffusion models.
Code for Fast Training of Diffusion Models with Masked Transformers