Starred repositories
This repo is an exploratory experiment to enable frozen pretrained RWKV language models to accept speech modality input. We followed the idea of SLAM_ASR and used the RWKV language model as the LLM…
The libmamba-based solver for conda.
A simple Python Pydantic model for Honkai: Star Rail parsed data from the Mihomo API.
A highly customizable, full-scale web backend for web-rwkv, built on axum with the WebSocket protocol.
Implementation of the RWKV language model in pure WebGPU/Rust.
RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond!
hanlinxuy / RWKV-LM
Forked from BlinkDL/RWKV-LM. RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it combines the best of RNN and transformer: great performance, fast inference, …
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal models, and Speech AI (Automatic Speech Recognition and Text-to-Speech).
ChatGLM2-6B: an open-source bilingual chat LLM.
A multimodal image-text dialogue LLM implementing Blip2RWKV + QFormer. Using a Two-Step Cognitive Psychology Prompt method, a model with only 3B parameters can exhibit human-like causal chains of thought. It benchmarks against image-text dialogue LLMs such as MiniGPT-4 and ImageBind, aiming to achieve better intelligence with less compute and fewer resources.
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
diannaojiang / wenda
Forked from wenda-LLM/wenda. Wenda (闻达): a large language model invocation platform. Currently supports chatGLM-6B, chatRWKV, chatYuan, and chatPDF (self-built knowledge-base search) on top of chatGLM-6B.
Wenda (闻达): an LLM invocation platform aimed at efficient content generation for specific environments, while accounting for the limited compute resources of individuals and small-to-medium enterprises, as well as knowledge security and privacy concerns.
A fine-tuning dataset generation tool designed for ChatGLM; come and build your own catgirl.
A hand-curated Chinese dialogue dataset and fine-tuning code for ChatGLM.
Blealtan / RWKV-LM-LoRA
Forked from BlinkDL/RWKV-LM. RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it combines the best of RNN and transformer: great performance, fast inference, …