Stars
Official Implementation of "GRIFFIN: Effective Token Alignment for Faster Speculative Decoding"
HArmonizedSS / HASS
Forked from SafeAILab/EAGLE
Official Implementation of "Learning Harmonized Representations for Speculative Sampling" (HASS)
Bayesian optimisation & Reinforcement Learning library developed by Huawei Noah's Ark Lab
Interactive roadmaps, guides and other educational content to help developers grow in their careers.
Official Implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton
Lightweight framework for building Agents with memory, knowledge, tools and reasoning.
Efficient LLM Inference Acceleration using Prompting
High-accuracy RAG for answering questions from scientific documents with citations
🔍 An LLM-based Multi-agent Framework of Web Search Engine (like Perplexity.ai Pro and SearchGPT)
⏰ Collaboratively track deadlines of conferences recommended by CCF (website, Python CLI, WeChat applet) / If you find it useful, please star this project, thanks~
✨✨Latest Advances on Multimodal Large Language Models
Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines
Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3.
Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings)
📰 Must-read papers and blogs on Speculative Decoding ⚡️
A curated list for Efficient Large Language Models
AI Native Data App Development framework with AWEL (Agentic Workflow Expression Language) and Agents
Implementation for the paper: Representation Learning on Knowledge Graphs for Node Importance Estimation
Paper-to-reviewer assignment is a tedious but crucial job for conference organizers. To date, the Toronto Paper Matching System (TPMS) is a widely used tool to solve this problem, but the re…
A list of awesome papers and resources on recommender systems based on large language models (LLMs).
The hub for EleutherAI's work on interpretability and learning dynamics
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training