- Bytedance Inc
- liankeqin.com
- in/lianke-qin-663a36105
Starred repositories
VideoSys: An easy and efficient system for video generation
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"
ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE (CLIP Data Experts via Clustering)
Development repository for the Triton language and compiler (a minimal kernel sketch appears after this list)
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently.
Fast and memory-efficient exact attention (usage sketch after this list)
DeepSeek LLM: Let there be answers
Transformer related optimization, including BERT, GPT
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Running large language models on a single GPU for throughput-oriented scenarios.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Sparsity-aware deep learning inference runtime for CPUs
Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use, and extensible toolkit for large-scale models.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (training-step sketch after this list).
Large Language Model Text Generation Inference
A high-throughput and memory-efficient inference and serving engine for LLMs (offline-inference sketch after this list)
Beringei is a high performance, in-memory storage engine for time series data.
User-friendly Desktop Client App for AI Models/LLMs (GPT, Claude, Gemini, Ollama...)
RISC Zero is a zero-knowledge verifiable general computing platform based on zk-STARKs and the RISC-V microarchitecture.
A curated list of insanely awesome libraries, packages and resources for Quants (Quantitative Finance)
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models (conceptual sketch after this list).
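
For the Triton entry above, here is a minimal kernel sketch in the style of the official tutorials: element-wise vector addition. It assumes the `triton` package and a CUDA-capable GPU; the block size and tensor sizes are illustrative only.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements              # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)           # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
assert torch.allclose(add(x, y), x + y)
```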
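For the FlashAttention entry, a minimal usage sketch, assuming the `flash-attn` package is installed and a CUDA GPU with fp16 support is available; the tensor shapes are arbitrary examples.

```python
import torch
from flash_attn import flash_attn_func

# flash_attn_func expects (batch, seqlen, nheads, headdim) tensors
# in fp16/bf16 on a CUDA device; the shapes below are examples.
batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Exact attention, computed without materializing the seqlen x seqlen score matrix.
out = flash_attn_func(q, k, v, causal=True)   # -> (batch, seqlen, nheads, headdim)
```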
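For the DeepSpeed entry, a minimal training-step sketch, assuming the `deepspeed` package and a GPU. The toy model and config values (ZeRO stage, batch size, learning rate) are illustrative, and a script like this would normally be started with the `deepspeed` launcher.

```python
import torch
import deepspeed

# Toy model and config; values are illustrative, not recommendations.
model = torch.nn.Linear(1024, 1024)
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},   # shard optimizer states and gradients
}

# deepspeed.initialize wraps the model in an engine that owns the optimizer
# step, loss scaling, gradient accumulation, and ZeRO partitioning.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 1024, device=engine.device, dtype=torch.half)
loss = engine(x).float().mean()
engine.backward(loss)   # scaled backward pass
engine.step()           # optimizer step plus ZeRO bookkeeping
```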
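For the vLLM entry, a minimal offline-inference sketch, assuming the `vllm` package; the model name, prompts, and sampling parameters are examples only.

```python
from vllm import LLM, SamplingParams

# Loads the weights and builds the paged KV-cache pool.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The capital of France is", "Continuous batching means"]
for output in llm.generate(prompts, params):
    print(output.prompt, "->", output.outputs[0].text)
```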
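For the low-rank adaptation entry, a conceptual sketch of the LoRA idea in plain PyTorch: freeze the pretrained weight and learn a low-rank update so the effective weight becomes W + (alpha/r) * B A. The `LoRALinear` class here is hypothetical and is not the starred repository's API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear and add a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)           # pretrained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + (alpha/r) * B (A x); only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")   # only the two small LoRA matrices
```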