PAIC
Shanghai, China
Stars
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
A lightweight framework for building LLM-based agents
🦜🔗 Build context-aware reasoning applications
✨✨ Latest Advances on Multimodal Large Language Models
LlamaIndex is the leading framework for building LLM-powered agents over your data.
Example models using DeepSpeed
Retrieval and Retrieval-augmented LLMs
Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
microsoft / Megatron-DeepSpeed
Forked from NVIDIA/Megatron-LM. Ongoing research training transformer language models at scale, including BERT & GPT-2
Ongoing research training transformer models at scale
Llama Chinese community: the Llama3 online demo and fine-tuned models are now available, with the latest Llama3 learning resources collected in real time; all code has been updated for Llama3, aiming to build the best Chinese Llama large model, fully open source and commercially usable
Fast and memory-efficient exact attention
Making large AI models cheaper, faster and more accessible
ChatGLM2-6B: An Open Bilingual Chat LLM (open-source bilingual dialogue language model)
A 13B large language model developed by Baichuan Intelligent Technology
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
Official release of InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3).
A high-throughput and memory-efficient inference and serving engine for LLMs
⚡ Dynamically generated stats for your GitHub READMEs
A curated list of practical guide resources for LLMs (LLMs Tree, Examples, Papers)
A large-scale 7B pretrained language model developed by BaiChuan-Inc.
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
This repository contains datasets and baselines for benchmarking Chinese text recognition.
OpenMMLab Detection Toolbox and Benchmark