openLLM

LLM

| Model | Institution | Paper | GitHub | Demo |
| --- | --- | --- | --- | --- |
| ChatGPT | OpenAI | | | |
| ChatGLM (loading sketch below) | Tsinghua & Zhipu AI | | https://github.com/THUDM/ChatGLM-6B | https://chatglm.cn/ |
| LLaMA | Meta | LLaMA: Open and Efficient Foundation Language Models | https://github.com/facebookresearch/llama | |
| PaLM | Google | PaLM: Scaling Language Modeling with Pathways | https://github.com/lucidrains/PaLM-pytorch | |
| Claude | Anthropic | https://www.anthropic.com/index/introducing-claude | | |
| Spark (星火认知大模型) | iFLYTEK | | | https://xinghuo.xfyun.cn/ |
| BELLE-13B | Lianjia | | https://github.com/LianjiaTech/BELLE | |
| MOSS-16B | Fudan University | | https://github.com/OpenLMLab/MOSS | |
| Vicuna-13B | LMSYS | | https://github.com/lm-sys/FastChat | https://chat.lmsys.org/ |
| Alpaca-13B | Stanford | | https://github.com/tatsu-lab/stanford_alpaca | https://alpaca-ai.ngrok.io/ |
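
Several of the models listed above publish open weights that can be loaded straight from the Hugging Face Hub. Below is a minimal sketch for the ChatGLM entry, assuming the `THUDM/chatglm-6b` checkpoint and the `chat()` helper documented in the ChatGLM-6B repository linked in the table; the prompt is illustrative only.

```python
# Minimal sketch (assumption: the THUDM/chatglm-6b checkpoint and its
# remote-code chat() helper, as documented in the ChatGLM-6B repo above).
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# chat() returns the reply plus the running (query, response) history.
response, history = model.chat(tokenizer, "Hello, please introduce yourself.", history=[])
print(response)
```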

Multimodal LLM

| Model | Institution | Paper | GitHub | Demo |
| --- | --- | --- | --- | --- |
| GPT-4 | OpenAI | | | |
| LLaVA | University of Wisconsin–Madison | Visual Instruction Tuning | https://github.com/haotian-liu/LLaVA | https://llava.hliu.cc/ |
| MiniGPT-4 | | MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models | https://github.com/Vision-CAIR/MiniGPT-4 | https://huggingface.co/spaces/Vision-CAIR/minigpt4 |
| KOSMOS-1 | Microsoft | Language Is Not All You Need: Aligning Perception with Language Models | https://github.com/microsoft/unilm | |
| MM-REACT | Microsoft | MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action | https://github.com/microsoft/MM-REACT | https://huggingface.co/spaces/microsoft-cognitive-service/mm-react |
| Visual ChatGPT | Microsoft | Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models | https://github.com/microsoft/visual-chatgpt | |
| VisionLLM | OpenGVLab | VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks | https://github.com/OpenGVLab/VisionLLM | |
| InternGPT | OpenGVLab | InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language | https://github.com/OpenGVLab/InternGPT | https://igpt.opengvlab.com/ |
| VisualGLM-6B | Tsinghua & Zhipu AI | | https://github.com/THUDM/VisualGLM-6B | https://huggingface.co/spaces/lykeven/visualglm-6b |
| BLIP-2 (captioning sketch below) | Salesforce | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | https://github.com/salesforce/LAVIS/tree/main/projects/blip2 | |
| Flamingo | DeepMind | Flamingo: a Visual Language Model for Few-Shot Learning | https://github.com/lucidrains/flamingo-pytorch | |
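
Some multimodal entries are also easy to try locally. The sketch below assumes the `Salesforce/blip2-opt-2.7b` BLIP-2 checkpoint and the `Blip2Processor` / `Blip2ForConditionalGeneration` classes from Hugging Face `transformers`; the image URL and generation length are placeholders.

```python
# Minimal sketch: image captioning with BLIP-2 (assumed checkpoint:
# Salesforce/blip2-opt-2.7b; other BLIP-2 checkpoints follow the same pattern).
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

# Placeholder image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# With no text prompt the model produces a caption; pass a question for VQA-style use.
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```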

Benchmarking LLMs

SuperCLUE: a comprehensive benchmark for Chinese general-purpose large models
Chatbot Arena: Benchmarking LLMs in the Wild with Elo Ratings (a minimal Elo-update sketch follows this list)
Multimodality Chatbot Arena
Open LLM Leaderboard
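
Chatbot Arena ranks models from pairwise human votes using Elo ratings. The sketch below shows the standard Elo update applied to such battles; the K-factor, starting rating, and battle data are illustrative, and the leaderboard's actual methodology may differ in details.

```python
# Standard Elo update over pairwise "battles" (illustrative constants;
# the real Chatbot Arena methodology may differ).
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, a: str, b: str, score_a: float, k: float = 32.0) -> None:
    """score_a is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    e_a = expected_score(ratings[a], ratings[b])
    ratings[a] += k * (score_a - e_a)
    ratings[b] += k * ((1.0 - score_a) - (1.0 - e_a))

ratings = {"model_x": 1000.0, "model_y": 1000.0}  # hypothetical models
for winner, loser in [("model_x", "model_y"), ("model_x", "model_y"), ("model_y", "model_x")]:
    update(ratings, winner, loser, score_a=1.0)
print(ratings)
```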

GitHub

Alpaca-CoT: An Instruction Fine-Tuning Platform with Instruction Data Collection and Unified Large Language Models Interface
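
Platforms like Alpaca-CoT fine-tune on instruction records collected in a common instruction/input/output layout. The sketch below shows that layout as typically stored; field names follow the widespread Alpaca-style convention, and exact schemas vary by dataset.

```python
# A single Alpaca-style instruction record (assumed layout; schemas vary by dataset).
import json

record = {
    "instruction": "Translate the sentence into French.",
    "input": "The weather is nice today.",
    "output": "Il fait beau aujourd'hui.",
}

# Instruction datasets are usually stored as a JSON array of such records.
with open("instruction_data.json", "w", encoding="utf-8") as f:
    json.dump([record], f, ensure_ascii=False, indent=2)
```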

Paper

[1] Flamingo: a Visual Language Model for Few-Shot Learning
[2] PaLM: Scaling Language Modeling with Pathways
[3] PaLI: A Jointly-Scaled Multilingual Language-Image Model
[4] BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
[5] Multimodal Chain-of-Thought Reasoning in Language Models
[6] LLaMA: Open and Efficient Foundation Language Models
[7] Language Is Not All You Need: Aligning Perception with Language Models
[8] Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models
[9] MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action
[10] Visual Instruction Tuning
[11] MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models
[12] InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language
[13] VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks
