Tencent WeChat AI
Beijing
Making large AI models cheaper, faster and more accessible
PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone.
Code for paper "Patch-Level Training for Large Language Models"
Code and Data for the ACL22 main conference paper "MSCTD: A Multimodal Sentiment Chat Translation Dataset"
EMNLP 2022: ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization
[NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou
fandongmeng / RSI-NAT
Forked from ictnlp/RSI-NAT. Source code for "Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation"