prompt
Code for the ACL 2021 paper WARP 🌀 Word-level Adversarial ReProgramming, which outperforms `GPT-3` on SuperGLUE few-shot text classification. https://aclanthology.org/2021.acl-long.381/
Code and datasets for the paper "GPT Understands, Too" (P-tuning), a novel method to tune language models.
The code for our paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task —— Next Sentence Prediction"
[NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240
Must-read papers on prompt-based tuning for pre-trained language models.
This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference" (PET; see the cloze-style prompting sketch at the end of this section).
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
Toolkit for creating, sharing and using natural language prompts.
An Open-Source Framework for Prompt-Learning.
Original implementation of Prompt Tuning from Lester et al., 2021 (a soft-prompt sketch appears at the end of this section).
On Transferability of Prompt Tuning for Natural Language Processing
[ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723
[ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
[EMNLP 2021, main conference long paper] TransPrompt: Towards an Automatic Transferable Prompting Framework for Few-shot Text Classification
ICML'2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP'2022: BBTv2: Towards a Gradient-Free Future with Large Language Models
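
Several of the repositories above (PET, LM-BFF, NSP-BERT, OpenPrompt) share the same core idea: rewrite a classification example as a cloze statement and let a masked language model fill in a label word. The sketch below is not code from any of these repositories; the backbone, template, and verbalizer are illustrative assumptions, and it only shows the zero-shot scoring step.

```python
# Minimal cloze-style prompting sketch (illustrative, not taken from the repos above).
# The template "<text> It was [MASK]." and the verbalizer {positive: great, negative: terrible}
# are assumptions chosen for the example; any Hugging Face masked LM can stand in here.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

verbalizer = {"positive": "great", "negative": "terrible"}  # label -> label word

def classify(text: str) -> str:
    # Wrap the input in a cloze template containing the mask token.
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # (1, seq_len, vocab_size)
    # Locate the mask position and compare the logits of the label words there.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    scores = {
        label: logits[0, mask_pos, tokenizer.convert_tokens_to_ids(word)].item()
        for label, word in verbalizer.items()
    }
    return max(scores, key=scores.get)

print(classify("The plot was gripping and the acting was superb."))  # typically "positive"
```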
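
The continuous-prompt line of work (Prompt Tuning from Lester et al., P-tuning, WARP) instead keeps the model frozen and learns a small number of prompt embeddings. Below is a rough sketch of that idea under assumed model and hyper-parameter choices; the original Lester et al. setup tunes only the prompt with a frozen T5 in text-to-text form, whereas this sketch attaches a small classification head to a frozen encoder purely to keep the example short.

```python
# Minimal soft prompt tuning sketch in the spirit of Lester et al. (2021): freeze the
# backbone and train only a matrix of "virtual token" embeddings prepended to the input.
# Backbone, prompt length, and the trainable classification head are assumptions made
# for brevity, not the original paper's configuration.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"   # assumed backbone
num_virtual_tokens = 20            # assumed soft prompt length
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

for p in model.parameters():              # freeze the pre-trained backbone
    p.requires_grad = False
for p in model.classifier.parameters():   # keep the tiny classification head trainable
    p.requires_grad = True

embed = model.get_input_embeddings()
# Initialize the soft prompt from real token embeddings (a common heuristic).
soft_prompt = nn.Parameter(embed.weight[:num_virtual_tokens].detach().clone())

def forward_with_prompt(texts, labels=None):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    tok_embeds = embed(batch["input_ids"])                                 # (B, L, H)
    prompt = soft_prompt.unsqueeze(0).expand(tok_embeds.size(0), -1, -1)   # (B, P, H)
    inputs_embeds = torch.cat([prompt, tok_embeds], dim=1)                 # prepend the prompt
    prompt_mask = torch.ones(tok_embeds.size(0), num_virtual_tokens,
                             dtype=batch["attention_mask"].dtype)
    attention_mask = torch.cat([prompt_mask, batch["attention_mask"]], dim=1)
    return model(inputs_embeds=inputs_embeds, attention_mask=attention_mask, labels=labels)

optimizer = torch.optim.AdamW([soft_prompt] + list(model.classifier.parameters()), lr=1e-3)
out = forward_with_prompt(["great movie", "awful film"], labels=torch.tensor([1, 0]))
out.loss.backward()   # gradients reach only the soft prompt and the head
optimizer.step()
```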