- Attention Is All You Need
- GPT: Improving Language Understanding by Generative Pre-Training
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- GPT-2: Language Models are Unsupervised Multitask Learners
- Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
- XLNet: Generalized Autoregressive Pretraining for Language Understanding
- XLM: Cross-lingual Language Model Pretraining
- RoBERTa: Robustly Optimized BERT Pretraining Approach
- DistilBERT: a distilled version of BERT: smaller, faster, cheaper and lighter
- CTRL: A Conditional Transformer Language Model for Controllable Generation
- CamemBERT: a Tasty French Language Model
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
- T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
- XLM-RoBERTa: Unsupervised Cross-lingual Representation Learning at Scale
- MMBT: Supervised Multimodal Bitransformers for Classifying Images and Text
- FlauBERT: Unsupervised Language Model Pre-training for French
- BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
- DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation
- Reformer: The Efficient Transformer
- Longformer: The Long-Document Transformer
- GPT-3: Language Models are Few-Shot Learners
- Big Bird: Transformers for Longer Sequences
- SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
- Generating Sentences from a Continuous Space
- Toward Controlled Generation of Text
- Adversarially Regularized Autoencoders
- A Neural Conversational Model
- A Knowledge-Grounded Neural Conversation Model
- Neural Approaches to Conversational AI
- Wizard of Wikipedia: Knowledge-Powered Conversational Agents
- The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents
- SQuAD: 100,000+ Questions for Machine Comprehension of Text
- Reading Wikipedia to Answer Open-Domain Questions
- A Survey on Recent Advances in Named Entity Recognition from Deep Learning models
- A Survey on Deep Learning for Named Entity Recognition
- Named Entity Recognition With Parallel Recurrent Neural Networks
- Evaluating the Utility of Hand-crafted Features in Sequence Labelling
- Fast and Accurate Entity Recognition with Iterated Dilated Convolutions
- Neural Adaptation Layers for Cross-domain Named Entity Recognition
- Neural Architectures for Named Entity Recognition
- Sentiment analysis using deep learning approaches: an overview
- Sentiment analysis using deep learning architectures: a review
- TODO: add further paper names and links