Stars
A machine learning compiler for GPUs, CPUs, and ML accelerators
A configurable, tunable, and reproducible library for CTR prediction https://fuxictr.github.io
A collection of industry-classic and cutting-edge papers in the field of recommendation, advertising, and search.
Code for ACM RecSys 2023 paper "Turning Dross Into Gold Loss: Is BERT4Rec really better than SASRec?"
Best Practices on Recommendation Systems
《Machine Learning Systems: Design and Implementation》- Chinese Version
《开源大模型食用指南》("A Beginner's Guide to Open-Source LLMs"): a tutorial, tailored for Chinese beginners, on quickly fine-tuning (full-parameter/LoRA) and deploying open-source large language models (LLMs) and multimodal large models (MLLMs) in a Linux environment.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
A generative speech model for daily dialogue.
End-to-end Training for Multimodal Recommendation Systems
Repository hosting code for "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations" (https://arxiv.org/abs/2402.17152).
The official GitHub page for the review paper "Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models".
Evidently is an open-source ML and LLM observability framework. Evaluate, test, and monitor any AI-powered system or data pipeline. From tabular data to Gen AI. 100+ metrics.
1 minute of voice data is enough to train a good TTS model! (few-shot voice cloning)
Collaborative Training of Large Language Models in an Efficient Way
Secrets of RLHF in Large Language Models Part I: PPO
This project shares the technical principles behind large language models along with hands-on experience (LLM engineering and deploying LLM applications in production).
Awesome Pretrained Chinese NLP Models: a collection of high-quality Chinese pretrained models, large models, multimodal models, and large language models.
DeepRec is a high-performance recommendation deep learning framework based on TensorFlow. It is hosted in incubation in LF AI & Data Foundation.
🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
alibaba / Megatron-LLaMA
Forked from NVIDIA/Megatron-LM. Best practices for training LLaMA models in Megatron-LM.
Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of …
A framework for large scale recommendation algorithms.
Retrieval and Retrieval-augmented LLMs
Paper List of Pre-trained Foundation Recommender Models
Large Language Model-enhanced Recommender System Papers
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
Code for fine-tuning ChatGLM-6B using low-rank adaptation (LoRA)