
Graph Convolutional Neural Networks for Few-shot Learning Text Classification

Graph Neural Networks (GNNs) have become increasingly popular in recent years and achieve strong performance in medical and chemistry domains, where the data have rich graph structure. In this work, we propose a graph-based method for text classification and few-shot learning. Traditionally, text is modeled as a sequence of words, so correlations are captured sequentially. Although architectures such as LSTMs and Transformers help a model summarize long-range information, it remains hard for them to learn the correlation between arbitrary pairs of components within a document, which limits performance on document understanding. GNNs handle this naturally, so we adopt the Graph Convolutional Network (GCN), one of the most widely used types of GNN, for document classification. Building on GraphSAGE, a variant of GCN, we go further and learn document representations, then perform few-shot learning by classifying samples according to representation similarity. Our experiments show that these graph-based models perform well and have great potential on text classification and few-shot learning tasks.
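To make the two ideas above concrete, here is a minimal PyTorch-style sketch, not this repository's implementation: a GraphSAGE-style mean-aggregation layer, and few-shot classification by comparing a query document's embedding to class prototypes. Names such as `SAGELayer` and `classify_by_similarity` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SAGELayer(nn.Module):
    """GraphSAGE-style layer: concatenate a node's features with the mean of
    its neighbours' features, then apply a linear projection."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg                      # mean over neighbours
        return F.relu(self.lin(torch.cat([x, neigh], dim=1)))


def classify_by_similarity(query_emb, support_embs, support_labels, n_classes):
    """Assign the query to the class whose prototype (mean of its support
    embeddings) is most cosine-similar, prototypical-network style."""
    protos = torch.stack([
        support_embs[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])
    sims = F.cosine_similarity(query_emb.unsqueeze(0), protos, dim=1)
    return int(sims.argmax())
```

In this sketch, document embeddings produced by stacked `SAGELayer`s would serve as `query_emb` and `support_embs`; the similarity step uses no additional trained parameters, which is what lets the model be reused on unseen classes.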

An example of a graph built for the convolutional network. Solid edges are document-word edges weighted by TF-IDF, and dashed edges are word-word edges weighted by pointwise mutual information (PMI).
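A minimal sketch of that graph construction, assuming a plain list of documents as input. The function and parameter names (`build_text_graph`, `window_size`) are illustrative, not taken from this repository's code.

```python
import math
from collections import Counter
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer


def build_text_graph(docs, window_size=10):
    """Return (nodes, edges): document and word nodes, with document-word
    edges weighted by TF-IDF and word-word edges weighted by positive PMI."""
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(docs)        # shape: (n_docs, n_words)
    vocab = vectorizer.get_feature_names_out()
    analyzer = vectorizer.build_analyzer()

    edges = {}  # (node_u, node_v) -> weight

    # Document-word edges weighted by TF-IDF.
    rows, cols = tfidf.nonzero()
    for d, w in zip(rows, cols):
        edges[(f"doc_{d}", vocab[w])] = tfidf[d, w]

    # Count word and word-pair occurrences over sliding windows for PMI.
    word_count, pair_count, n_windows = Counter(), Counter(), 0
    for doc in docs:
        tokens = analyzer(doc)
        for i in range(max(1, len(tokens) - window_size + 1)):
            window = set(tokens[i:i + window_size])
            n_windows += 1
            word_count.update(window)
            pair_count.update(combinations(sorted(window), 2))

    # Word-word edges weighted by positive PMI.
    for (u, v), c in pair_count.items():
        pmi = math.log((c * n_windows) / (word_count[u] * word_count[v]))
        if pmi > 0:
            edges[(u, v)] = pmi

    nodes = [f"doc_{d}" for d in range(len(docs))] + list(vocab)
    return nodes, edges


nodes, edges = build_text_graph(
    ["graph networks for text", "text classification with graphs"]
)
```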
