Stars
Finetune Llama 3.3, Mistral, Phi-4, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory
Use PEFT or full-parameter training to finetune 400+ LLMs (Qwen2.5, InternLM3, GLM4, Llama3.3, Mistral, Yi1.5, Baichuan2, DeepSeek3, ...) and 150+ MLLMs (Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, Inter…
DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including CUDA, x86 and ARMv9.
The pytest framework makes it easy to write small tests, yet scales to support complex functional testing (a small example follows this list)
PyMuPDF is a high-performance Python library for data extraction, analysis, conversion & manipulation of PDF (and other) documents.
High-performance Transformer implementation in C++.
Bringing BERT into modernity via both architecture changes and scaling
Minimalistic 4D-parallelism distributed training framework for educational purposes
🍎 Transform an SVG icon into multiple themes, and generate React, Vue, and SVG icons
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU su…
SGLang is a fast serving framework for large language models and vision language models.
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficie…
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
Minimalistic large language model 3D-parallelism training
Transformer-related optimization, including BERT, GPT
FastAPI framework, high performance, easy to learn, fast to code, ready for production
A high-throughput and memory-efficient inference and serving engine for LLMs
Text-to-image generation: CogView3-Plus and CogView3 (ECCV 2024)
Data validation using Python type hints (a small example follows this list)
High-resolution models for human tasks.
🎇 A compilation of tools and resources for information security red teams 🎇
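For the pytest entry above, a minimal sketch of what a "small test" looks like in practice. This is not code from the pytest repository; the add() function and the parametrized cases are hypothetical illustrations.

```python
# Minimal pytest sketch: a plain function and a parametrized test for it.
# add() and the test cases are hypothetical, not taken from any repository above.
import pytest

def add(a: int, b: int) -> int:
    return a + b

@pytest.mark.parametrize("a,b,expected", [(1, 2, 3), (-1, 1, 0), (0, 0, 0)])
def test_add(a, b, expected):
    assert add(a, b) == expected
```

Saved as test_add.py, running `pytest` in the same directory discovers and runs the test automatically, which is the "small tests, yet scales" workflow the description refers to.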
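Likewise, for the "data validation using Python type hints" entry (pydantic), a minimal sketch of type-hint-driven validation. The User model and its fields are hypothetical examples, not taken from that repository.

```python
# Minimal pydantic sketch: fields are declared as type hints and are validated
# (and coerced where possible) when the model is instantiated.
# The User model here is a hypothetical example.
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    id: int
    name: str
    tags: list[str] = []

user = User(id="123", name="Ada")      # the string "123" is coerced to the int 123
print(user.id, user.name, user.tags)   # 123 Ada []

try:
    User(id="not-a-number", name="Bob")
except ValidationError as exc:
    print(exc)                          # reports that id is not a valid integer
```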