BertForDeprel

Framework for training dependency parsing models.

Tutorial End-to-End

Google Colab notebooks showing how to use this parser are available here:

  • Naija (spoken) training from a pre-trained English model: link
  • training from scratch on spoken Naija: link
  • training from scratch on written English: link
  • mock Colab for testing that everything works: link

Prepare Dataset

Create a folder with the following structure:

|- [NAME_FOLDER]/
|   |- conllus/
|       | - <train.conllu>
|       | - <test.conllu>

where <train.conllu> and <test.conllu> are the train and test datasets, respectively. They can have any names you want, as you will provide the paths to these files in the training script.
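For instance, assuming a project folder named my_project (an illustrative name) and existing CoNLL-U files, the setup could look like:

mkdir -p my_project/conllus
cp path/to/my_train.conllu my_project/conllus/train.conllu
cp path/to/my_test.conllu my_project/conllus/test.conllu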

Compute the annotation schema

The annotation schema is the set of dependency relations (deprel) and part-of-speech tags (upos/pos) that the model needs in order to size its classifier layers. Once a model is trained on a given annotation schema, the same annotation schema must be used for inference and fine-tuning.

To compute this annotation schema, run the script <root_repo>/BertForDeprel/preprocessing/1_compute_annotation_schema.py with the -i / --input_folder parameter pointing to the train folder and the -o / --output_path parameter pointing to the location where the annotation schema will be written.
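A minimal sketch, assuming the repository root is the current directory and the project folder from the previous section (paths are illustrative):

python BertForDeprel/preprocessing/1_compute_annotation_schema.py -i my_project/conllus/ -o my_project/annotation_schema.json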

After computing the annotation schema, the structure of the project folder should be:

|- [NAME_FOLDER]/
|   |- conllus/
|       | - <train.conllu>
|       | - <test.conllu>
|   |- <annotation_schema.json>

Training models

From scratch

To train a model from scratch, run the following command from the BertForDeprel/ folder:

python run.py train --folder ../test/test_folder/ --model model_name.pt --bert_type bert-base-multilingual-cased --ftrain ../test/test_folder/conllus/train.conllu

where --folder indicates the path to the project folder, --model the name of the model to be trained, and --ftrain the path to the train conllu file. If the optional parameter --ftest is passed, the corresponding file will be used for testing. Otherwise, the train dataset will be split automatically according to --split_ratio, with a random seed of --random_seed.
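For example, to train with a held-out test file instead of relying on the automatic split (paths and the model name are illustrative):

python run.py train --folder ../test/test_folder/ --model model_name.pt --bert_type bert-base-multilingual-cased --ftrain ../test/test_folder/conllus/train.conllu --ftest ../test/test_folder/conllus/test.conllu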

From a pretrained model

WARNING: when training from a pretrained model, be sure to use the same annotation_schema.json for fine-tuning as the one that was used for pretraining. Otherwise, the training would break.

To fine-tune a pre-trained model, run the following command from the BertForDeprel/ folder:

python run.py train --folder ../test/test_folder/ --model model_name.pt --fpretrain <path/to/pretrain/model.pt> --ftrain ../test/test_folder/conllus/train.conllu
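Concretely, assuming a pretrained English model stored at ../models/english_pretrained.pt and a Naija project folder at ../naija_project/ (both paths are hypothetical), a fine-tuning run could look like:

python run.py train --folder ../naija_project/ --model naija_finetuned.pt --fpretrain ../models/english_pretrained.pt --ftrain ../naija_project/conllus/train.conllu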

GPU/CPU training

Run on a single GPU

To run the training on a single GPU with id 0, add the parameter --gpu_ids 0. Likewise, to run on the single GPU with id 3, add the parameter --gpu_ids 3.

Run on multiples GPUs

To run the training on multiple GPUs with ids 0 and 1, add the parameter --gpu_ids 0,1.

Run on all available GPUs

To run the training on all available GPUs, add the parameter --gpu_ids "-2".

Run on CPU

To train on CPU only, add the parameter --gpu_ids "-1".
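Putting it together, a training run restricted to GPUs 0 and 1 might look like this (paths and the model name are illustrative):

python run.py train --folder ../test/test_folder/ --model model_name.pt --bert_type bert-base-multilingual-cased --ftrain ../test/test_folder/conllus/train.conllu --gpu_ids 0,1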

Pretrained Models

On this Gdrive repo you can find all the pretrained models, the Google Colab scripts for training, and the publicly available treebanks (.conllu files).

Among others, here are the most important pretrained models:

Major TODOs

  • Implement the model.
  • Train a model from scratch on Naija
  • Fine-tune a model on Naija, pretrained from scratch on English
  • Enable process-based distributed training, similar to https://github.com/fastai/imagenet-fast/
  • Implement mixed precision (fp16) for faster training (see this link from the PyTorch docs)
  • Model optimization (model export, model pruning, etc.)
  • Add feats and gloss prediction
