Model | Institution | Paper | GitHub | Demo |
---|---|---|---|---|
ChatGPT | OpenAI | | | |
ChatGLM | Tsinghua & Zhipu AI (清华&智谱AI) | | https://github.com/THUDM/ChatGLM-6B | https://chatglm.cn/ |
LLaMA | Meta | LLaMA: Open and Efficient Foundation Language Models | https://github.com/facebookresearch/llama | |
PaLM | Google | PaLM: Scaling Language Modeling with Pathways | https://github.com/lucidrains/PaLM-pytorch (unofficial) | |
Claude | Anthropic | https://www.anthropic.com/index/introducing-claude | | |
iFLYTEK Spark (星火认知大模型) | iFLYTEK (科大讯飞) | | | https://xinghuo.xfyun.cn/ |
BELLE-13B | Lianjia (链家) | | https://github.com/LianjiaTech/BELLE | |
MOSS-16B | Fudan University (复旦) | | https://github.com/OpenLMLab/MOSS | |
Vicuna-13B | LMSYS | | https://github.com/lm-sys/FastChat | https://chat.lmsys.org/ |
Alpaca-13B | Stanford | | https://github.com/tatsu-lab/stanford_alpaca | https://alpaca-ai.ngrok.io/ |
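
Several of the open models above can be run locally through Hugging Face Transformers. Below is a minimal sketch for ChatGLM-6B, following the usage shown in its repo README; note that the `chat` helper is supplied by the model's own remote code rather than core Transformers, and a CUDA GPU is assumed.

```python
# Minimal ChatGLM-6B inference via Hugging Face Transformers,
# per the THUDM/ChatGLM-6B README. Requires trust_remote_code
# because the model ships custom modeling code.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# `chat` is defined by the model's remote code, not by core Transformers.
response, history = model.chat(tokenizer, "Hello", history=[])
print(response)
```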
- SuperCLUE: a comprehensive benchmark for Chinese general-purpose LLMs (中文通用大模型综合性基准)
- Chatbot Arena: Benchmarking LLMs in the Wild with Elo Ratings (see the Elo sketch after this list)
- Multimodality Chatbot Arena
- Open LLM Leaderboard
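
Chatbot Arena ranks models from pairwise human votes using Elo ratings. A minimal sketch of the standard Elo update follows; the K-factor and initial rating here are illustrative assumptions, not the Arena's exact parameters.

```python
def expected_score(r_a: float, r_b: float) -> float:
    # Probability that A beats B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    # score_a: 1.0 if A wins, 0.0 if A loses, 0.5 for a tie.
    e_a = expected_score(r_a, r_b)
    new_r_a = r_a + k * (score_a - e_a)
    new_r_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_r_a, new_r_b

# Example: both models start at 1000; model_a wins one battle.
ratings = {"model_a": 1000.0, "model_b": 1000.0}
ratings["model_a"], ratings["model_b"] = elo_update(
    ratings["model_a"], ratings["model_b"], score_a=1.0
)
print(ratings)  # model_a's rating rises, model_b's falls by the same amount
```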
[1] Flamingo: a Visual Language Model for Few-Shot Learning
[2] PaLM: Scaling Language Modeling with Pathways
[3] PaLI: A Jointly-Scaled Multilingual Language-Image Model
[4] BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
[5] Multimodal Chain-of-Thought Reasoning in Language Models
[6] LLaMA: Open and Efficient Foundation Language Models
[7] Language Is Not All You Need: Aligning Perception with Language Models
[8] Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models
[9] MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action
[10] Visual Instruction Tuning
[11] MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models
[12] InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language
[13] VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks