ParClassifier

Simple implementation of experiments for evaluating paraphrase detectors/semantic similarity estimators based on linear classifiers applied to vector sentence representations, as published in Souza and Sanches, 2018 (an earlier version in English can be obtained at: http://www.inf.pucrs.br/linatural/wordpress/wp-content/uploads/2018/09/123450040.pdf).

Usage

This code is written for Python 3.7. To download it, clone this repository:

git clone https://github.com/marlovss/ParClassifier.git

The experiment is divided into two main parts: computing sentence representations (computeRep.py) and evaluating these representations for paraphrase classification and semantic similarity prediction (evaluate.py).

Computing Sentence Representations

The computation of sentence representations is performed by the script computeRep.py. This script optionally takes as input a configuration file (example: rep.cfg) which defines the experiment parameters.

Usage:

The configuration file defines the paths to the input data, to the pre-trained models for the different word representations (word embeddings, skip-thought model, ELMo model, etc.) and to the outputs, as well as which representations should be computed in the experiment.
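A minimal configuration might look like the following. The section and option names here are illustrative only, not the actual keys expected by computeRep.py; consult the bundled rep.cfg for the real ones:

```ini
; Illustrative sketch of a rep.cfg-style configuration file.
; All paths and key names below are hypothetical examples.
[input]
corpus = data/assin/train.xml

[models]
word_embeddings = models/embeddings.vec
skip_thought = models/skip_thought/
elmo = models/elmo/

[output]
vectors = data/vectors

[representations]
embeddings = true
skip_thought = true
elmo = false
```

The script would then presumably be invoked with the configuration file as its argument, e.g. `python3 computeRep.py rep.cfg`.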

Obtaining the input data

The input data should be compatible with the ASSIN corpora, e.g. http://nilc.icmc.usp.br/assin/ and https://sites.google.com/view/assin2/.
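The ASSIN corpora are distributed as XML files of sentence pairs, each pair carrying a similarity score and an entailment/paraphrase label. A pair can be read with the standard library, roughly as follows (the element and attribute names below match the public ASSIN releases, but verify against your copy of the data):

```python
import xml.etree.ElementTree as ET

# A tiny inline sample in the ASSIN pair format (illustrative content).
SAMPLE = """<corpus>
  <pair id="1" entailment="Paraphrase" similarity="4.5">
    <t>O menino comprou um carro novo.</t>
    <h>O garoto adquiriu um carro novo.</h>
  </pair>
</corpus>"""

def read_pairs(xml_text):
    """Yield (first sentence, second sentence, similarity, label) tuples."""
    root = ET.fromstring(xml_text)
    for pair in root.iter("pair"):
        yield (pair.findtext("t"),
               pair.findtext("h"),
               float(pair.get("similarity")),
               pair.get("entailment"))

pairs = list(read_pairs(SAMPLE))
```

The same function works on a file by first reading it into a string, or by switching to `ET.parse(path).getroot()`.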

Word Representations and language models

To compute the representations, the script uses the pre-trained models specified in the configuration file (word embeddings, skip-thought, ELMo, etc.).
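A common baseline among such representations is to average the pre-trained word embeddings of a sentence's tokens. The sketch below uses a toy embedding table; it is not the code in computeRep.py, which would load the configured pre-trained models instead:

```python
def sentence_vector(sentence, embeddings, dim):
    """Average the word vectors of known tokens; zero vector if none are known."""
    vectors = [embeddings[w] for w in sentence.lower().split() if w in embeddings]
    if not vectors:
        return [0.0] * dim
    # Component-wise mean over all token vectors.
    return [sum(vals) / len(vectors) for vals in zip(*vectors)]

# Toy two-dimensional embedding table (illustrative only).
toy = {"bom": [1.0, 0.0], "dia": [0.0, 1.0]}
vec = sentence_vector("bom dia", toy, dim=2)
```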

Evaluating Sentence Representations

The evaluation of sentence representations for paraphrase classification and semantic similarity estimation is performed by the script evaluate.py; the evaluation metrics are printed to the terminal.
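For the similarity task, the standard ASSIN metrics are the Pearson correlation and the mean squared error between predicted and gold scores. A plain-Python version of both, for illustration (evaluate.py may compute them differently):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mse(xs, ys):
    """Mean squared error between predictions and gold scores."""
    return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

r = pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
err = mse([1.0, 2.0], [1.0, 3.0])
```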

Usage:

The parameters define:

  • input (default: "data/vectors"): path to the directory containing the text files
  • train (default: "train"): path to the directory containing the training data files
  • test (default: "test"): path to the directory containing the test data files
  • oversample (default: "none"): oversampling method to be used (one of "none", "random", "smote" and "adasyn")
  • total (default: False): whether the classifier should also test a dataset with combined representations (memory-expensive and low performance)
  • sim (default: False): if True, evaluates sentence similarity estimation; otherwise evaluates paraphrase classification

Dependencies

All the dependencies are listed in the requirements.txt file. They can be installed with pip as follows:

pip3 install -r requirements.txt
