Tags: IBM/pytorch-seq2seq

0.1.6
Release 0.1.6 (#137)

* Modified parameter order of DecoderRNN.forward (#85)

* Updated TopKDecoder (#86)

* Fixed topk decoder.

* Use torchtext from PyPI (#87)

* Use torchtext from PyPI.

* Fixed torchtext sorting order.

* Attention is no longer required when only using teacher forcing in the decoder (#90); see the sketch below.

* Updated docs and version.

* Fixed code style.
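A minimal sketch of what this change permits, assuming the `DecoderRNN` constructor takes `use_attention` and `forward` takes `teacher_forcing_ratio` (parameter names inferred from the library; check the docs for the exact order after #85):

```python
import torch
from seq2seq.models import DecoderRNN  # import path assumed from the library layout

vocab_size, max_len, hidden_size, batch = 1000, 20, 256, 4  # illustrative sizes

# With use_attention=False there are no encoder outputs to attend over, so
# a decoder driven purely by teacher forcing never needs them.
decoder = DecoderRNN(vocab_size, max_len, hidden_size, use_attention=False)

targets = torch.randint(0, vocab_size, (batch, max_len), dtype=torch.long)
encoder_hidden = torch.zeros(1, batch, hidden_size)

outputs, hidden, meta = decoder(inputs=targets,
                                encoder_hidden=encoder_hidden,
                                encoder_outputs=None,       # attention disabled
                                teacher_forcing_ratio=1.0)  # always feed gold tokens
```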

* bugfix (#92)

Fixed field arguments validation.

* Removed `initial_lr` when resuming optimizer with scheduler. (#95)
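For context, a minimal resume sketch, assuming a checkpoint dict with hypothetical `'optimizer'` and `'epoch'` keys: the optimizer state saved mid-training already carries the `initial_lr` that the scheduler wrote into each param group, so there is no need to inject it again by hand.

```python
import torch
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(10, 10)  # stands in for the seq2seq model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

checkpoint = torch.load('checkpoint.pt')  # hypothetical checkpoint file
optimizer.load_state_dict(checkpoint['optimizer'])

# Restoring the optimizer state also restores 'initial_lr' in its param
# groups, so the scheduler can be rebuilt at the saved epoch directly.
scheduler = StepLR(optimizer, step_size=10, last_epoch=checkpoint['epoch'])
```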

* shuffle the training data (#97)

* 0.1.5 (#91)

* Fix example of inflate function in TopKDecoder.py (#98)
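For reference, the documented helper behaves roughly like this (a sketch, not necessarily the library's exact code): it repeats a tensor along one dimension, which beam search uses to expand a batch of size B into B * k hypotheses.

```python
import torch

def inflate(tensor, times, dim):
    """Repeat `tensor` `times` times along dimension `dim`."""
    repeat_dims = [1] * tensor.dim()
    repeat_dims[dim] = times
    return tensor.repeat(*repeat_dims)

h = torch.zeros(2, 4, 8)        # (layers, batch, hidden)
print(inflate(h, 3, 1).size())  # torch.Size([2, 12, 8]): batch inflated 4 -> 12
```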

* Fix hidden_layer size for one-directional decoder (#99)

The decoder's hidden layer size was computed as `hidden_size * 2 if bidirectional else 1`, which raised a dimensionality error for non-bidirectional decoders. Changed `1` to `hidden_size`.
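The shape arithmetic behind the fix, as an illustrative snippet (not the library's code): a bidirectional encoder concatenates the forward and backward states, so the decoder must expect `hidden_size * 2`; a one-directional decoder needs `hidden_size`, and the old fallback of `1` could not match any RNN state.

```python
hidden_size = 256

def decoder_hidden_size(bidirectional):
    # Before the fix: `hidden_size * 2 if bidirectional else 1`
    return hidden_size * 2 if bidirectional else hidden_size

print(decoder_hidden_size(True))   # 512: forward + backward states concatenated
print(decoder_hidden_size(False))  # 256 (was 1 before the fix)
```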

* Adapt load to allow CPU loading of GPU models (#100)

Add a storage parameter to torch.load to allow loading models on a CPU that were trained on the GPU, depending on the availability of CUDA.
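A sketch of the pattern, using `torch.load`'s `map_location` callable (it receives a storage and its original location and returns the storage to use; returning it unchanged keeps tensors on the CPU):

```python
import torch

if torch.cuda.is_available():
    checkpoint = torch.load('model.pt')  # hypothetical path
else:
    # Remap every GPU storage to CPU when no CUDA device is present.
    checkpoint = torch.load('model.pt', map_location=lambda storage, loc: storage)
```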

* Fix wrong parameter use in DecoderRNN (#103)

* Upgrade to pytorch-0.3.0 (#111)

* Use pytorch-0.3.0 in the Travis environment.

* Make sure the tensor is contiguous when attention is not used. (#112)

* Implement the predict_n method: using the beam search outputs, it returns several sequences for a given input sequence (#116)

* Added a predictor method that returns n predicted sequences for a src_seq input (intended to be used with beam search via TopKDecoder).
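A hypothetical usage sketch; the `Predictor` import path and the `predict_n` arguments are inferred from the PR description, so check the class for the exact signature:

```python
from seq2seq.evaluator import Predictor  # import path assumed

# model, input_vocab, output_vocab come from a trained checkpoint (not shown);
# the model is expected to wrap a TopKDecoder so beam search yields k candidates.
predictor = Predictor(model, input_vocab, output_vocab)

src_seq = ['how', 'are', 'you']
for seq in predictor.predict_n(src_seq, n=3):  # n best target sequences
    print(seq)
```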

* Checkpoint after batches, not epochs (#119)

* Pytorch 0.4 (#134)

* Add contiguous call to tensor (#127)

When attention is turned off, PyTorch (0.4, at least) raises an error when `view` is called on a non-contiguous tensor; a minimal reproduction follows at the end of this entry.

* Fixed shape documentation (#131)

* Update to pytorch-0.4

* Remove the manual pytorch install in Travis.
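A minimal reproduction of the failure mode the two contiguous fixes (#112, #127) address:

```python
import torch

x = torch.zeros(2, 3, 4).transpose(0, 1)  # transpose yields a non-contiguous view
# x.view(3, 8)                            # RuntimeError on a non-contiguous tensor
y = x.contiguous().view(3, 8)             # copy into contiguous memory first
print(y.size())                           # torch.Size([3, 8])
```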

* Allow using a pre-trained embedding (#135); see the sketch below.

* Updated docs.
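A sketch of wiring in pre-trained vectors with stock PyTorch 0.4 (`nn.Embedding.from_pretrained`); how the weights are handed to the encoder/decoder here is an assumption, so see the updated docs for the actual parameter added in #135:

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 300
pretrained = torch.randn(vocab_size, embed_dim)  # stands in for GloVe/word2vec weights

embedding = nn.Embedding.from_pretrained(pretrained, freeze=True)  # no fine-tuning
ids = torch.tensor([[1, 2, 3]])
print(embedding(ids).size())  # torch.Size([1, 3, 300])
```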