
Stars
Context7 MCP Server -- Up-to-date code documentation for LLMs and AI code editors
AIInfra (AI Infrastructure) refers to the full AI system stack, from low-level hardware such as chips up to the software layers that support training and inference of large AI models.
Initialize any web chat with your code
A list of awesome research on log analysis, anomaly detection, fault localization, and AIOps
Gurubase lets you add an "Ask AI" button to your technical docs, turning your content into an AI assistant. It uses web pages, PDFs, YouTube videos, and GitHub repos as sources to generate instant,…
gitpod-io / openvscode-server
Forked from microsoft/vscode. Run upstream VS Code on a remote machine with access through a modern web browser from any device, anywhere.
🤗 smolagents: a barebones library for agents that think in code.
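For context, a minimal sketch of the smolagents usage pattern: a CodeAgent wired to a web-search tool, where the agent writes and runs Python to act. The model class name has shifted between releases (InferenceClientModel vs. the older HfApiModel), so treat it as an assumption.

```python
# Minimal smolagents-style sketch; class names may differ across versions.
from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel

model = InferenceClientModel()                     # remote inference model (older releases: HfApiModel)
agent = CodeAgent(tools=[DuckDuckGoSearchTool()],  # the agent plans by writing and executing Python code
                  model=model)
agent.run("How many seconds are there in a leap year?")
```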
SWE-agent takes a GitHub issue and tries to automatically fix it, using your LM of choice. It can also be employed for offensive cybersecurity or competitive coding challenges. [NeurIPS 2024]
Build effective agents using Model Context Protocol and simple workflow patterns
Build resilient language agents as graphs.
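As a rough illustration of the graph-of-nodes idea behind LangGraph, here is a minimal sketch (the echo node stands in for a real LLM or tool call):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    answer: str

def answer_node(state: State) -> dict:
    # Placeholder node; a real agent would call an LLM or a tool here.
    return {"answer": f"You asked: {state['question']}"}

builder = StateGraph(State)
builder.add_node("answer", answer_node)
builder.add_edge(START, "answer")   # entry point of the graph
builder.add_edge("answer", END)     # terminate after one node
graph = builder.compile()

print(graph.invoke({"question": "What does LangGraph do?"}))
```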
🔥 Turn entire websites into LLM-ready markdown or structured data. Scrape, crawl and extract with a single API.
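A sketch of the single-page scrape call as typically shown for the Firecrawl Python SDK; the method and parameter names (scrape_url, formats) have changed between SDK versions, so treat them as assumptions rather than the current API.

```python
# Sketch of the Firecrawl Python SDK usage; method/parameter names vary by SDK version.
from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key="fc-...")               # your Firecrawl API key
result = app.scrape_url("https://example.com",     # fetch and convert a single page
                        formats=["markdown"])      # request LLM-ready markdown output
print(result)
```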
Effortlessly run LLM backends, APIs, frontends, and services with one command.
stackblitz-labs / bolt.diy
Forked from stackblitz/bolt.new. Prompt, run, edit, and deploy full-stack web applications using any LLM you want!
trypear / pearai-app
Forked from microsoft/vscode. PearAI: Open Source AI Code Editor (Fork of VSCode). The PearAI Submodule (https://github.com/trypear/pearai-submodule) is a fork of Continue.
Implementation of the Transformer from the paper "Attention Is All You Need"
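The core operation in that paper is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A self-contained PyTorch sketch of it (illustrative, not taken from the repo):

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: (batch, heads, seq_len, d_k). Returns attended values and attention weights."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)          # query-key similarity, scaled
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))  # block disallowed positions
    weights = F.softmax(scores, dim=-1)                        # attention distribution per query
    return weights @ v, weights
```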
A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels
GPU programming related news and material links
A high-performance LLM inference API and Chat UI that integrates DeepSeek R1's CoT reasoning traces with Anthropic Claude models.
The TypeScript AI agent framework. ⚡ Assistants, RAG, observability. Supports any LLM: GPT-4, Claude, Gemini, Llama.
A simple tech-explainer tutorial project focused on explaining interesting, cutting-edge technical concepts and principles; each article is meant to be readable within 5 minutes.
FlashInfer: Kernel Library for LLM Serving
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
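In the same spirit, a sketch of greedy autoregressive decoding (not the book's actual code); `model` here is assumed to be any module that maps token ids to next-token logits:

```python
import torch

@torch.no_grad()
def generate(model, idx, max_new_tokens, context_len):
    """Greedy autoregressive decoding. idx: (batch, seq) tensor of token ids."""
    for _ in range(max_new_tokens):
        idx_cond = idx[:, -context_len:]                          # crop to the context window
        logits = model(idx_cond)                                  # (batch, seq, vocab_size)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # pick the most likely next token
        idx = torch.cat([idx, next_id], dim=1)                    # append and feed back in
    return idx
```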
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations
This repository contains the Hugging Face Agents Course.
FastGPT is a knowledge-based platform built on the LLMs, offers a comprehensive suite of out-of-the-box capabilities such as data processing, RAG retrieval, and visual AI workflow orchestration, le…