Stars
MiniCPM3-4B: An edge-side LLM that surpasses GPT-3.5-Turbo.
A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs! 🍎 🍋 🌽 ➡️ ➡️🍸 🍹 🍷
Financial cross-check validation for prospectuses, covering cross-table, intra-table, and text-to-table reconciliation; validated items are automatically highlighted in the PDF document.
TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients.
Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream-tuning.
Loki: An open-source solution designed to automate the process of factuality verification
Building a quick conversation-based search demo with Lepton AI.
Modeling, training, eval, and inference code for OLMo
HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance
FinGLM: Dedicated to building an open, public-interest, and long-lasting financial large language model project, using open source to advance "AI + Finance".
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
BISHENG is an open LLM devops platform for next generation Enterprise AI applications. Powerful and comprehensive features include: GenAI workflow, RAG, Agent, Unified model management, Evaluation,…
Robust recipes to align language models with human and AI preferences
Connect and chat with your multiple documents (PDF and TXT) through GPT-3.5, GPT-4 Turbo, Claude, and local open-source LLMs
DISC-FinLLM: A Chinese financial large language model (LLM) designed to provide users with professional, intelligent, and comprehensive financial consulting services in financial scenarios.
A comprehensive guide to building RAG-based LLM applications for production.
ChatLaw: A powerful LLM tailored for the Chinese legal domain.
DSPy: The framework for programming—not prompting—foundation models
Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
A series of large language models developed by Baichuan Intelligent Technology
A real world full-stack application using LlamaIndex
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)