Stars
An open-source knowledgeable large language model framework.
A Chinese-language reinforcement learning tutorial (the "Mushroom Book" 🍄); read online at https://datawhalechina.github.io/easy-rl/
Simple reinforcement learning tutorials; Chinese-language AI lessons by 莫烦Python.
Build Your Own Bundle: A Neural Combinatorial Optimization Method (BYOB)
A Distribution-Free Test of Independence Based on Mean Variance Index.
SASRec: Self-Attentive Sequential Recommendation
Code for stock movement prediction from tweets and historical stock prices.
A comprehensive dataset for stock movement prediction from tweets and historical stock prices.
Democratizing Internet-scale financial data.
FinGPT: open-source financial large language models 🔥. The trained models are released on Hugging Face.
The official implementation of the RecSys 2023 paper "Uncovering ChatGPT's Capabilities in Recommender Systems"
[ECIR'24] Implementation of "Large Language Models are Zero-Shot Rankers for Recommender Systems"
Code Repository for The Kaggle Book, Published by Packt Publishing
Curated learning resources for machine learning, NLP, image recognition, deep learning, and other AI fields, plus materials on search, recommendation, and advertising system architecture and algorithms; a compilation of notes from leading algorithm engineers.
Open-source release of the MuCGEC Chinese error-correction dataset and SOTA text-correction models; code & data for our NAACL 2022 paper "MuCGEC: a Multi-Reference Multi-Source Evaluation Dataset for Chinese Grammatical Error Correction"
Large-scale Chinese corpus for NLP.
ChatGLM-6B: an open bilingual dialogue language model.
BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese dialogue LLM).
Luotuo (骆驼): open-source Chinese language models, developed by 陈启源 (Central China Normal University), 李鲁鲁 (SenseTime), and 冷子昂 (SenseTime).
6th-place solution for the 2020 KDD Cup: multi-channel retrieval and ranking for a debiased recommender system.
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment.
A manually curated Chinese dialogue dataset and fine-tuning code for ChatGLM.
ChatGLM-6B fine-tuning and Alpaca fine-tuning.
Instruct-tune LLaMA on consumer hardware