Stars
Evaluating tool-augmented LLMs in conversation settings
Qwen2.5 is the large language model series developed by the Qwen team at Alibaba Cloud.
CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models
Summarizes existing representative LLM text datasets.
MNBVC (Massive Never-ending BT Vast Chinese corpus): an ultra-large-scale Chinese corpus aiming to match the 40T of data used to train ChatGPT. MNBVC covers not only mainstream culture but also niche subcultures and even "Martian script", including plain-text Chinese data of every kind: news, essays, novels, books, magazines, papers, scripts, forum posts, wikis, classical poetry, lyrics, product descriptions, jokes, embarrassing stories, chat logs, and more.
Codes for our paper "RQ-RAG: Learning to Refine Queries for Retrieval Augmented Generation"
A modular graph-based Retrieval-Augmented Generation (RAG) system
A large language model for mental health (The Big Model of Mental Health), with fine-tuning support for InternLM2, InternLM2.5, Qwen, ChatGLM, Baichuan, DeepSeek, Mixtral, LLama3, GLM4, Qwen2, and LLama3.1.
A task-oriented dialogue system, especially for LLMs, covering the subtasks: (1) intent detection, (2) slot filling, (3) dialogue state tracking.
[IJCAI 2024] Generate different roles for GPTs to form a collaborative entity for complex tasks.
We present a comprehensive, in-depth review of HFM, covering challenges, opportunities, and future directions. Paper: https://arxiv.org/abs/2404.03264
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
An Open-source Framework for Data-centric, Self-evolving Autonomous Language Agents
Large Language Model based Multi-Agents: A Survey of Progress and Challenges
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
Code examples and resources for DBRX, a large language model developed by Databricks
A simple tool to update bib entries with their official information (e.g., DBLP or the ACL anthology).
Stable Diffusion web UI
Awesome papers about generative Information Extraction (IE) using Large Language Models (LLMs)