Southeast University
Nanjing, Jiangsu, CN
Stars
High-performance retrieval engine for unstructured data
[ICLR 2025] Simple is Effective: The Roles of Graphs and Large Language Models in Knowledge-Graph-Based Retrieval-Augmented Generation
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
Awesome-RAG: Collect typical RAG papers and systems.
GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning
HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance
HuixiangDou2: A Robustly Optimized GraphRAG Approach
🦉 OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation
SimpleQuestions code corresponding to publication at https://arxiv.org/abs/1604.00727
Awesome-GraphRAG: A curated list of resources (surveys, papers, benchmarks, and open-source projects) on graph-based retrieval-augmented generation.
A Toolbox for MultiModal Recommendation. Integrating 10+ Models...
PyTorch implementation of "Bootstrap Latent Representations for Multi-modal Recommendation" (WWW'23)
Convert PDF to markdown + JSON quickly with high accuracy
A high-quality tool for converting PDF to Markdown and JSON. An all-in-one, open-source, high-quality data extraction tool that converts PDFs into Markdown and JSON.
Python tool for converting files and office documents to Markdown.
The official implementation of Self-Play Fine-Tuning (SPIN)
PyTorch implementation for the paper "Can LLMs be Good Graph Judger for Knowledge Graph Construction?"
🦛 CHONK your texts with Chonkie ✨ - The no-nonsense RAG chunking library
Empowering RAG with a memory-based data interface for all-purpose applications!
Docs2KG: A Human-LLM Collaborative Approach to Unified Knowledge Graph Construction from Heterogeneous Documents
The docker-compose files for setting up a SearXNG instance with docker.
LLM API management & distribution system supporting mainstream models such as OpenAI, Azure, Anthropic Claude, Google Gemini, DeepSeek, ByteDance Doubao, ChatGLM, ERNIE Bot, iFlytek Spark, Tongyi Qianwen, 360 Zhinao, and Tencent Hunyuan, with unified API adaptation; usable for key management and redistribution. Ships as a single executable with a Docker image for one-click deployment, ready to use out of the box.
A modular graph-based Retrieval-Augmented Generation (RAG) system
Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5).
FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens.