South China Normal University
Guangzhou, China
Stars
The official GitHub repository for the paper "MileCut: A Multi-view Truncation Framework for Legal Case Retrieval" (WWW'24).
The official GitHub page for the survey paper "Towards Next-Generation LLM-based Recommender Systems: A Survey and Beyond". The paper is currently under review.
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality synthetic data generation pipeline!
[ICML 2024] Official implementation of: "Revitalizing Multivariate Time Series Forecasting: Learnable Decomposition with Inter-Series Dependencies and Intra-Series Variations Modeling".
High-quality datasets, tools, and concepts for LLM fine-tuning.
This repository contains a reading list of papers on Time Series Forecasting/Prediction (TSF) and Spatio-Temporal Forecasting/Prediction (STF). These papers are mainly categorized according to the …
Official implementation of SAMformer, a transformer leveraging Sharpness-Aware Minimization and Channel-Wise Attention for Time Series Forecasting.
GAIIC Track 1 (Medical Imaging NLP): medical imaging diagnostic report generation [Rank-12 solution, team "A100换你大棚甜瓜"]
[SIGIR 2024] The official repo for paper "Planning Ahead in Generative Retrieval: Guiding Autoregressive Generation through Simultaneous Decoding"
[WWW 2024] The official repo for paper "Scalable and Effective Generative Information Retrieval".
Implementation of the Google paper "Transformer Memory as a Differentiable Search Index".
PyTorch module for computing Quadratic Weighted Kappa Loss.
Implementation of EMNLP 2023 Findings: Improving Question Generation with Multi-level Content Planning
Code for the ACL 2022 paper "CQG: A Simple and Effective Controlled Generation Framework for Multi-hop Question Generation".
Multi-hop Question Generation with Graph Convolutional Network
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024]
A lightweight library for generating synthetic instruction-tuning datasets from your data without GPT.
Generate question/answer training pairs out of raw text.
Tools for merging pretrained large language models.
A repository for pretraining from scratch plus SFT of a small-parameter Chinese LLaMA2; a single 24 GB GPU is enough to obtain a chat-llama2 with basic Chinese Q&A ability.
LAVIS - A One-stop Library for Language-Vision Intelligence
Chinese NLP solutions (large models, data, models, training, inference).
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
A Chinese NLP preprocessing and parsing package: accurate, efficient, and easy to use. www.jionlp.com