Stars
[🔥updating ...] AI-powered automated quantitative trading bot (fully local deployment). AI-powered Quantitative Investment Research Platform. 📃 online docs: https://ufund-me.github.io/Qbot ✨ News: qbot-mini: https://github.com/Charmve/iQuant
Text- and image-to-video generation: CogVideoX (2024) and CogVideo (ICLR 2023)
Official implementation of MotionClone: Training-Free Motion Cloning for Controllable Video Generation
Code of Pyramidal Flow Matching for Efficient Video Generative Modeling
MiniCPM-V 2.6: A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone
【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
List of Dirty, Naughty, Obscene, and Otherwise Bad Words
Repository hosting code for "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations" (https://arxiv.org/abs/2402.17152).
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
Make your Vim more powerful and much easier to use. The most practical Vim configuration 🔥
The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud.
Llama Chinese community. Online Llama3 demo and fine-tuned models are now available, with the latest Llama3 learning resources aggregated in real time; all code has been updated for Llama3. Building the best Chinese Llama LLM, fully open source and commercially usable.
ChatGLM3 series: Open Bilingual Chat LLMs
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
A curated list of open-source Chinese large language models, focusing on smaller models that can be privately deployed and trained at low cost, including base models, domain-specific fine-tunes and applications, datasets, and tutorials.
A state-of-the-art open visual language model | Multimodal pretrained model
A pure C++ cross-platform LLM acceleration library with Python bindings. ChatGLM-6B-class models can reach 10000+ tokens/s on a single GPU; supports GLM, Llama, and MOSS base models, and runs smoothly on mobile devices.
Langchain-Chatchat (formerly Langchain-ChatGLM): RAG and Agent applications built on Langchain with language models such as ChatGLM, Qwen, and Llama | Local knowledge-based LLM applications
Code for ALBEF: a new vision-language pre-training method
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023)
ChatGLM2-6B: An Open Bilingual Chat LLM
Fine-tuning ChatGLM-6B with PEFT | Efficient PEFT-based ChatGLM fine-tuning
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
[CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
ChatGLM-6B: An Open Bilingual Dialogue Language Model
[TPAMI2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
【PyTorch】Easy-to-use, modular, and extensible package of deep-learning-based CTR models.