%load_ext d2lbook.tab
tab.interact_select('mxnet', 'pytorch', 'tensorflow')
🏷️sec_seq2seq
As we have seen in :numref:sec_machine_translation, in machine translation both the input and the output are variable-length sequences. To address this type of problem, we designed a general encoder-decoder architecture in :numref:sec_encoder-decoder.
In this section, we will use two RNNs to design the encoder and the decoder of this architecture and apply it to sequence to sequence learning for machine translation :cite:Sutskever.Vinyals.Le.2014,Cho.Van-Merrienboer.Gulcehre.ea.2014.
Following the design principle of the encoder-decoder architecture, the RNN encoder can take a variable-length sequence as input and transform it into a fixed-shape hidden state. In other words, the information of the input (source) sequence is encoded in the hidden state of the RNN encoder. To generate the output sequence token by token, a separate RNN decoder can predict the next token based on the tokens that have been seen (as in language modeling) or generated, together with the encoded information of the input sequence.
:numref:fig_seq2seq illustrates how to use two RNNs for sequence to sequence learning in machine translation. In :numref:fig_seq2seq, the special "<eos>" token marks the end of the sequence. The model can stop making predictions once this token is generated.
At the initial time step of the RNN decoder, there are two special design decisions. First, the special beginning-of-sequence "<bos>" token is an input. Second, the final hidden state of the RNN encoder is used to initialize the hidden state of the decoder. In designs such as :cite:Sutskever.Vinyals.Le.2014, this is exactly how the encoded input sequence information is fed into the decoder for generating the output (target) sequence. In some other designs such as :cite:Cho.Van-Merrienboer.Gulcehre.ea.2014, the final hidden state of the encoder is also fed into the decoder as part of its input at every time step, as shown in :numref:fig_seq2seq.
While the encoder input is just tokens from the source sequence, the decoder input and output are not so straightforward in encoder-decoder training. A common approach is teacher forcing, where the original target sequence (token labels) is fed into the decoder as input. More concretely, the special beginning-of-sequence token and the original target sequence excluding the final token are concatenated as input to the decoder, while the decoder output (labels for training) is the original target sequence shifted by one token: the decoder input is "<bos>", "Ils", "regardent", ".", and the label is "Ils", "regardent", ".", "<eos>" (:numref:fig_seq2seq).
Our implementation in :numref:subsec_loading-seq-fixed-len prepared training data for teacher forcing, where shifting tokens for self-supervised learning is similar to the training of language models in :numref:sec_language-model.
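To make the shift concrete, here is a minimal sketch of how teacher-forcing inputs and labels line up (plain PyTorch with made-up token indices; the "<bos>" index is assumed to be 0, which need not match the actual vocabulary used later):

```python
import torch

# Hypothetical target sequence "Ils regardent . <eos>" as token indices.
tgt = torch.tensor([[5, 7, 9, 2]])                     # (batch_size=1, num_steps=4)
bos = torch.zeros((tgt.shape[0], 1), dtype=tgt.dtype)  # assumed "<bos>" index 0

# Decoder input: "<bos>" followed by the target without its final token.
dec_input = torch.cat([bos, tgt[:, :-1]], dim=1)       # [[0, 5, 7, 9]]
# Labels: the original target sequence, i.e. the decoder input shifted by one token.
labels = tgt                                           # [[5, 7, 9, 2]]
```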
An alternative approach is to feed the predicted token from the previous time step as the current input to the decoder. In the following, we will explain the design of :numref:fig_seq2seq in greater detail and train this model for machine translation on the English-French dataset introduced in :numref:sec_machine_translation.
%%tab mxnet
import collections
from d2l import mxnet as d2l
import math
from mxnet import np, npx, init, gluon, autograd
from mxnet.gluon import nn, rnn
npx.set_np()
%%tab pytorch
import collections
from d2l import torch as d2l
import math
import torch
from torch import nn
from torch.nn import functional as F
%%tab tensorflow
import collections
from d2l import tensorflow as d2l
import math
import tensorflow as tf
Technically speaking, the encoder transforms an input sequence of variable length into a fixed-shape context variable $\mathbf{c}$, encoding the input sequence information in this context variable. As depicted in :numref:fig_seq2seq, we can use an RNN to design the encoder.

Let's consider a sequence example (batch size: 1). Suppose that the input sequence is $x_1, \ldots, x_T$, where $x_t$ is the $t^{\textrm{th}}$ token. At time step $t$, the RNN transforms the input feature vector $\mathbf{x}_t$ for $x_t$ and the hidden state $\mathbf{h}_{t-1}$ from the previous time step into the current hidden state $\mathbf{h}_t$. We can use a function $f$ to express the transformation of the RNN's recurrent layer:

$$\mathbf{h}_t = f(\mathbf{x}_t, \mathbf{h}_{t-1}).$$

In general, the encoder transforms the hidden states at all the time steps into the context variable through a customized function $q$:

$$\mathbf{c} = q(\mathbf{h}_1, \ldots, \mathbf{h}_T).$$

For example, when choosing $q(\mathbf{h}_1, \ldots, \mathbf{h}_T) = \mathbf{h}_T$ as in :numref:fig_seq2seq, the context variable is just the hidden state $\mathbf{h}_T$ of the input sequence at the final time step.
So far we have used a unidirectional RNN to design the encoder, where a hidden state only depends on the input subsequence at and before the time step of the hidden state. We can also construct encoders using bidirectional RNNs. In this case, a hidden state depends on the subsequence before and after the time step (including the input at the current time step), which encodes the information of the entire sequence.
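As a quick illustration of the choice $q(\mathbf{h}_1, \ldots, \mathbf{h}_T) = \mathbf{h}_T$, the following sketch (plain PyTorch with a single unidirectional GRU layer and made-up sizes, not the encoder class defined below) takes the top-layer hidden state at the final time step as the context variable:

```python
import torch
from torch import nn

T, batch_size, embed_size, num_hiddens = 9, 4, 8, 16
gru = nn.GRU(embed_size, num_hiddens)              # one unidirectional layer
embs = torch.randn(T, batch_size, embed_size)      # (num_steps, batch, embed)
outputs, state = gru(embs)                         # outputs: (T, batch, hiddens)
c = outputs[-1]                                    # context variable h_T: (batch, hiddens)
assert torch.allclose(c, state[-1])                # equals the final-layer state
```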
Now let's [implement the RNN encoder]. Note that we use an embedding layer to obtain the feature vector for each token in the input sequence. The weight of an embedding layer is a matrix whose number of rows equals the size of the input vocabulary (vocab_size) and whose number of columns equals the feature vector's dimension (embed_size). For any input token index $i$, the embedding layer fetches the $i^{\textrm{th}}$ row (starting from 0) of the weight matrix to return its feature vector. Here we use a multilayer GRU to implement the encoder.
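This lookup behavior can be sketched in a couple of lines (plain PyTorch, toy sizes): indexing the embedding layer with token $i$ simply returns row $i$ of its weight matrix.

```python
import torch
from torch import nn

vocab_size, embed_size = 10, 4                       # toy sizes
embedding = nn.Embedding(vocab_size, embed_size)     # weight: (vocab_size, embed_size)
tokens = torch.tensor([3, 3, 7])                     # a few token indices
vectors = embedding(tokens)                          # shape: (3, embed_size)
assert torch.equal(vectors[0], embedding.weight[3])  # a row lookup, nothing more
```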
%%tab mxnet
class Seq2SeqEncoder(d2l.Encoder): #@save
"""The RNN encoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = d2l.GRU(num_hiddens, num_layers, dropout)
self.initialize(init.Xavier())
def forward(self, X, *args):
# X shape: (batch_size, num_steps)
embs = self.embedding(d2l.transpose(X))
# embs shape: (num_steps, batch_size, embed_size)
output, state = self.rnn(embs)
# output shape: (num_steps, batch_size, num_hiddens)
# state shape: (num_layers, batch_size, num_hiddens)
return output, state
%%tab pytorch
def init_seq2seq(module): #@save
"""Initialize weights for Seq2Seq."""
if type(module) == nn.Linear:
nn.init.xavier_uniform_(module.weight)
if type(module) == nn.GRU:
for param in module._flat_weights_names:
if "weight" in param:
nn.init.xavier_uniform_(module._parameters[param])
class Seq2SeqEncoder(d2l.Encoder): #@save
"""The RNN encoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = d2l.GRU(embed_size, num_hiddens, num_layers, dropout)
self.apply(init_seq2seq)
def forward(self, X, *args):
# X shape: (batch_size, num_steps)
embs = self.embedding(d2l.astype(d2l.transpose(X), d2l.int64))
# embs shape: (num_steps, batch_size, embed_size)
output, state = self.rnn(embs)
# output shape: (num_steps, batch_size, num_hiddens)
# state shape: (num_layers, batch_size, num_hiddens)
return output, state
%%tab tensorflow
class Seq2SeqEncoder(d2l.Encoder): #@save
"""The RNN encoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0):
super().__init__()
self.embedding = tf.keras.layers.Embedding(vocab_size, embed_size)
self.rnn = d2l.GRU(num_hiddens, num_layers, dropout)
def call(self, X, *args):
# X shape: (batch_size, num_steps)
embs = self.embedding(d2l.transpose(X))
# embs shape: (num_steps, batch_size, embed_size)
output, state = self.rnn(embs)
# output shape: (num_steps, batch_size, num_hiddens)
# state shape: (num_layers, batch_size, num_hiddens)
return output, state
The returned variables of recurrent layers have been explained in :numref:sec_rnn-concise. Let's still use a concrete example to [illustrate the above encoder implementation.] Below we instantiate a two-layer GRU encoder whose number of hidden units is 16. Given a minibatch of sequence inputs X (batch size: 4, number of time steps: 9), the hidden states of the last layer at all the time steps (outputs returned by the encoder's recurrent layers) are a tensor of shape (number of time steps, batch size, number of hidden units).
%%tab all
vocab_size, embed_size, num_hiddens, num_layers = 10, 8, 16, 2
batch_size, num_steps = 4, 9
encoder = Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers)
X = d2l.zeros((batch_size, num_steps))
outputs, state = encoder(X)
d2l.check_shape(outputs, (num_steps, batch_size, num_hiddens))
Since a GRU is employed here, the shape of the multilayer hidden states at the final time step is (number of hidden layers, batch size, number of hidden units).
%%tab all
if tab.selected('mxnet', 'pytorch'):
d2l.check_shape(state, (num_layers, batch_size, num_hiddens))
if tab.selected('tensorflow'):
d2l.check_len(state, num_layers)
d2l.check_shape(state[0], (batch_size, num_hiddens))
🏷️sec_seq2seq_decoder
As we just mentioned, the context variable $\mathbf{c}$ produced by the encoder encodes the entire input sequence $x_1, \ldots, x_T$. Given the output sequence $y_1, y_2, \ldots, y_{T'}$ from the training dataset, for each time step $t'$ (we use $t'$ to distinguish it from the time step $t$ of the input sequence), the probability of the decoder output $y_{t'}$ is conditional on the previous output subsequence $y_1, \ldots, y_{t'-1}$ and the context variable $\mathbf{c}$, i.e., $P(y_{t'} \mid y_1, \ldots, y_{t'-1}, \mathbf{c})$.

To model this conditional probability on sequences, we can use another RNN as the decoder. At any time step $t'$ of the output sequence, the RNN takes the output $y_{t'-1}$ from the previous time step and the context variable $\mathbf{c}$ as its input, then transforms them and the previous hidden state $\mathbf{s}_{t'-1}$ into the current hidden state $\mathbf{s}_{t'}$. We can use a function $g$ to express the transformation of the decoder's hidden layer:

$$\mathbf{s}_{t'} = g(y_{t'-1}, \mathbf{c}, \mathbf{s}_{t'-1}).$$
:eqlabel:eq_seq2seq_s_t

After obtaining the hidden state of the decoder, we can use an output layer and the softmax operation to compute the conditional probability distribution $P(y_{t'} \mid y_1, \ldots, y_{t'-1}, \mathbf{c})$ of the output at time step $t'$.
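The last step can be sketched in isolation (plain PyTorch, toy sizes, independent of the decoder class below): a fully connected layer maps the decoder hidden state $\mathbf{s}_{t'}$ to vocabulary logits, and softmax turns them into a probability distribution over output tokens.

```python
import torch
from torch import nn
from torch.nn import functional as F

batch_size, num_hiddens, vocab_size = 4, 16, 10
s_t = torch.randn(batch_size, num_hiddens)           # decoder hidden state at t'
dense = nn.Linear(num_hiddens, vocab_size)           # output layer
probs = F.softmax(dense(s_t), dim=-1)                # (batch_size, vocab_size)
assert torch.allclose(probs.sum(dim=-1), torch.ones(batch_size))
```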
Following :numref:fig_seq2seq, when implementing the decoder as follows, we directly use the hidden state at the final time step of the encoder to initialize the hidden state of the decoder. This requires that the RNN encoder and the RNN decoder have the same number of layers and hidden units. To further incorporate the encoded input sequence information, the context variable is concatenated with the decoder input at all the time steps. To predict the probability distribution of the output token, a fully connected layer is used to transform the hidden state at the final layer of the RNN decoder.
%%tab mxnet
class Seq2SeqDecoder(d2l.Decoder):
"""The RNN decoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = d2l.GRU(num_hiddens, num_layers, dropout)
self.dense = nn.Dense(vocab_size, flatten=False)
self.initialize(init.Xavier())
def init_state(self, enc_outputs, *args):
return enc_outputs[1]
def forward(self, X, enc_state):
# X shape: (batch_size, num_steps)
# embs shape: (num_steps, batch_size, embed_size)
embs = self.embedding(d2l.transpose(X))
# context shape: (batch_size, num_hiddens)
context = enc_state[-1]
# Broadcast context to (num_steps, batch_size, num_hiddens)
context = np.tile(context, (embs.shape[0], 1, 1))
# Concat at the feature dimension
embs_and_context = d2l.concat((embs, context), -1)
outputs, state = self.rnn(embs_and_context, enc_state)
outputs = d2l.swapaxes(self.dense(outputs), 0, 1)
# outputs shape: (batch_size, num_steps, vocab_size)
# state shape: (num_layers, batch_size, num_hiddens)
return outputs, state
%%tab pytorch
class Seq2SeqDecoder(d2l.Decoder):
"""The RNN decoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = d2l.GRU(embed_size+num_hiddens, num_hiddens,
num_layers, dropout)
self.dense = nn.LazyLinear(vocab_size)
self.apply(init_seq2seq)
def init_state(self, enc_outputs, *args):
return enc_outputs[1]
def forward(self, X, enc_state):
# X shape: (batch_size, num_steps)
# embs shape: (num_steps, batch_size, embed_size)
embs = self.embedding(d2l.astype(d2l.transpose(X), d2l.int32))
# context shape: (batch_size, num_hiddens)
context = enc_state[-1]
# Broadcast context to (num_steps, batch_size, num_hiddens)
context = context.repeat(embs.shape[0], 1, 1)
# Concat at the feature dimension
embs_and_context = d2l.concat((embs, context), -1)
outputs, state = self.rnn(embs_and_context, enc_state)
outputs = d2l.swapaxes(self.dense(outputs), 0, 1)
# outputs shape: (batch_size, num_steps, vocab_size)
# state shape: (num_layers, batch_size, num_hiddens)
return outputs, state
%%tab tensorflow
class Seq2SeqDecoder(d2l.Decoder):
"""The RNN decoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0):
super().__init__()
self.embedding = tf.keras.layers.Embedding(vocab_size, embed_size)
self.rnn = d2l.GRU(num_hiddens, num_layers, dropout)
self.dense = tf.keras.layers.Dense(vocab_size)
def init_state(self, enc_outputs, *args):
return enc_outputs[1]
def call(self, X, enc_state):
# X shape: (batch_size, num_steps)
# embs shape: (num_steps, batch_size, embed_size)
embs = self.embedding(d2l.transpose(X))
# context shape: (batch_size, num_hiddens)
context = enc_state[-1]
# Broadcast context to (num_steps, batch_size, num_hiddens)
context = tf.tile(tf.expand_dims(context, 0), (embs.shape[0], 1, 1))
# Concat at the feature dimension
embs_and_context = d2l.concat((embs, context), -1)
outputs, state = self.rnn(embs_and_context, enc_state)
outputs = d2l.transpose(self.dense(outputs), (1, 0, 2))
# outputs shape: (batch_size, num_steps, vocab_size)
# state shape: (num_layers, batch_size, num_hiddens)
return outputs, state
To [illustrate the implemented decoder], below we instantiate it with the same hyperparameters as the aforementioned encoder. As we can see, the output shape of the decoder becomes (batch size, number of time steps, vocabulary size), where the last dimension of the tensor stores the predicted token distribution.
%%tab all
decoder = Seq2SeqDecoder(vocab_size, embed_size, num_hiddens, num_layers)
state = decoder.init_state(encoder(X))
outputs, state = decoder(X, state)
d2l.check_shape(outputs, (batch_size, num_steps, vocab_size))
if tab.selected('mxnet', 'pytorch'):
d2l.check_shape(state, (num_layers, batch_size, num_hiddens))
if tab.selected('tensorflow'):
d2l.check_len(state, num_layers)
d2l.check_shape(state[0], (batch_size, num_hiddens))
To summarize, the layers in the above RNN encoder-decoder model are illustrated in :numref:fig_seq2seq_details. Based on the architecture described in :numref:sec_encoder-decoder, the RNN encoder-decoder model for sequence to sequence learning just puts the RNN encoder and the RNN decoder together.
%%tab all
class Seq2Seq(d2l.EncoderDecoder): #@save
def __init__(self, encoder, decoder, tgt_pad, lr):
super().__init__(encoder, decoder)
self.save_hyperparameters()
def validation_step(self, batch):
Y_hat = self(*batch[:-1])
self.plot('loss', self.loss(Y_hat, batch[-1]), train=False)
def configure_optimizers(self):
# Adam optimizer is used here
if tab.selected('mxnet'):
return gluon.Trainer(self.parameters(), 'adam',
{'learning_rate': self.lr})
if tab.selected('pytorch'):
return torch.optim.Adam(self.parameters(), lr=self.lr)
if tab.selected('tensorflow'):
return tf.keras.optimizers.Adam(learning_rate=self.lr)
At each time step, the decoder predicts a probability distribution for the output tokens. Similar to language modeling, we can apply softmax to obtain the distribution and calculate the cross-entropy loss for optimization. Recall from :numref:sec_machine_translation that special padding tokens are appended to the end of sequences so that sequences of varying lengths can be efficiently loaded in minibatches of the same shape. However, the prediction of padding tokens should be excluded from loss calculations. To this end, we can [mask irrelevant entries with zero values] so that multiplying any irrelevant prediction by zero yields zero.
%%tab all
@d2l.add_to_class(Seq2Seq)
def loss(self, Y_hat, Y):
l = super(Seq2Seq, self).loss(Y_hat, Y, averaged=False)
mask = d2l.astype(d2l.reshape(Y, -1) != self.tgt_pad, d2l.float32)
return d2l.reduce_sum(l * mask) / d2l.reduce_sum(mask)
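To see what the mask does, here is a toy walk-through (plain PyTorch with made-up per-token losses; the padding index is assumed to be 0, unlike the actual vocabulary used later): padded positions contribute nothing, and the average is taken over real tokens only.

```python
import torch

tgt_pad = 0                                            # assumed padding index
Y = torch.tensor([[3, 5, 0, 0]])                       # two real tokens, two pads
per_token_loss = torch.tensor([[1.2, 0.8, 0.5, 0.9]])  # made-up values
mask = (Y.reshape(-1) != tgt_pad).float()              # [1., 1., 0., 0.]
loss = (per_token_loss.reshape(-1) * mask).sum() / mask.sum()
print(loss)                                            # (1.2 + 0.8) / 2 = 1.0
```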
🏷️sec_seq2seq_training
Now we can [create and train an RNN encoder-decoder model] for sequence to sequence learning on the machine translation dataset.
%%tab all
data = d2l.MTFraEng(batch_size=128)
embed_size, num_hiddens, num_layers, dropout = 256, 256, 2, 0.2
if tab.selected('mxnet', 'pytorch'):
encoder = Seq2SeqEncoder(
len(data.src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqDecoder(
len(data.tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
model = Seq2Seq(encoder, decoder, tgt_pad=data.tgt_vocab['<pad>'],
lr=0.001)
trainer = d2l.Trainer(max_epochs=50, gradient_clip_val=1, num_gpus=1)
if tab.selected('tensorflow'):
with d2l.try_gpu():
encoder = Seq2SeqEncoder(
len(data.src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqDecoder(
len(data.tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
model = Seq2Seq(encoder, decoder, tgt_pad=data.tgt_vocab['<pad>'],
lr=0.001)
trainer = d2l.Trainer(max_epochs=50, gradient_clip_val=1)
trainer.fit(model, data)
To predict the output sequence token by token, at each decoder time step the predicted token from the previous time step is fed into the decoder as an input. Similar to training, at the initial time step the beginning-of-sequence ("<bos>") token is fed into the decoder. This prediction process is illustrated in :numref:fig_seq2seq_predict. When the end-of-sequence ("<eos>") token is predicted, the prediction of the output sequence is complete. We will introduce different strategies for sequence generation in :numref:sec_beam-search.
%%tab all
@d2l.add_to_class(d2l.EncoderDecoder) #@save
def predict_step(self, batch, device, num_steps,
save_attention_weights=False):
if tab.selected('mxnet', 'pytorch'):
batch = [d2l.to(a, device) for a in batch]
src, tgt, src_valid_len, _ = batch
if tab.selected('mxnet', 'pytorch'):
enc_outputs = self.encoder(src, src_valid_len)
if tab.selected('tensorflow'):
enc_outputs = self.encoder(src, src_valid_len, training=False)
dec_state = self.decoder.init_state(enc_outputs, src_valid_len)
outputs, attention_weights = [d2l.expand_dims(tgt[:,0], 1), ], []
for _ in range(num_steps):
if tab.selected('mxnet', 'pytorch'):
Y, dec_state = self.decoder(outputs[-1], dec_state)
if tab.selected('tensorflow'):
Y, dec_state = self.decoder(outputs[-1], dec_state, training=False)
outputs.append(d2l.argmax(Y, 2))
# Save attention weights (to be covered later)
if save_attention_weights:
attention_weights.append(self.decoder.attention_weights)
return d2l.concat(outputs[1:], 1), attention_weights
We can evaluate a predicted sequence by comparing it with the label sequence (the ground truth). BLEU (Bilingual Evaluation Understudy), though originally proposed for evaluating machine translation results :cite:Papineni.Roukos.Ward.ea.2002, has been extensively used in measuring the quality of output sequences for different applications. In principle, for any $n$-gram in the predicted sequence, BLEU evaluates whether this $n$-gram appears in the label sequence.

Denote by $p_n$ the precision of $n$-grams, which is the ratio of the number of matched $n$-grams in the predicted and label sequences to the number of $n$-grams in the predicted sequence. Also, let $\mathrm{len}_{\text{label}}$ and $\mathrm{len}_{\text{pred}}$ be the numbers of tokens in the label sequence and the predicted sequence, respectively. Then BLEU is defined as

$$ \exp\left(\min\left(0, 1 - \frac{\mathrm{len}_{\text{label}}}{\mathrm{len}_{\text{pred}}}\right)\right) \prod_{n=1}^k p_n^{1/2^n},$$
:eqlabel:eq_bleu

where $k$ is the longest $n$-gram used for matching.

Based on the definition of BLEU in :eqref:eq_bleu, whenever the predicted sequence is the same as the label sequence, BLEU is 1. Moreover, since matching longer $n$-grams is more difficult, BLEU assigns a greater weight to the precision of longer $n$-grams: when $p_n$ is fixed, $p_n^{1/2^n}$ increases as $n$ grows. Furthermore, since predicting shorter sequences tends to yield higher $p_n$ values, the exponential factor before the product in :eqref:eq_bleu penalizes shorter predicted sequences. For example, when $k=2$, given the label sequence A, B, C, D, E, F and the predicted sequence A, B, B, C, D, we have $p_1 = 4/5$ and $p_2 = 3/4$.
We [implement the BLEU measure] as follows.
%%tab all
def bleu(pred_seq, label_seq, k): #@save
"""Compute the BLEU."""
pred_tokens, label_tokens = pred_seq.split(' '), label_seq.split(' ')
len_pred, len_label = len(pred_tokens), len(label_tokens)
score = math.exp(min(0, 1 - len_label / len_pred))
for n in range(1, min(k, len_pred) + 1):
num_matches, label_subs = 0, collections.defaultdict(int)
for i in range(len_label - n + 1):
label_subs[' '.join(label_tokens[i: i + n])] += 1
for i in range(len_pred - n + 1):
if label_subs[' '.join(pred_tokens[i: i + n])] > 0:
num_matches += 1
label_subs[' '.join(pred_tokens[i: i + n])] -= 1
score *= math.pow(num_matches / (len_pred - n + 1), math.pow(0.5, n))
return score
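As a quick sanity check on the example discussed above (label A, B, C, D, E, F and prediction A, B, B, C, D, so $p_1 = 4/5$ and $p_2 = 3/4$): the brevity penalty is $\exp(1 - 6/5) \approx 0.82$, and the overall score works out to roughly 0.68.

```python
print(bleu('A B B C D', 'A B C D E F', k=2))  # ~0.68
```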
In the end, we use the trained RNN encoder-decoder to [translate a few English sentences into French] and compute the BLEU of the results.
%%tab all
engs = ['go .', 'i lost .', 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
preds, _ = model.predict_step(
data.build(engs, fras), d2l.try_gpu(), data.num_steps)
for en, fr, p in zip(engs, fras, preds):
translation = []
for token in data.tgt_vocab.to_tokens(p):
if token == '<eos>':
break
translation.append(token)
print(f'{en} => {translation}, bleu,'
f'{bleu(" ".join(translation), fr, k=2):.3f}')
- Following the design of the encoder-decoder architecture, we can use two RNNs to design a model for sequence to sequence learning.
- In encoder-decoder training, the teacher forcing approach feeds original output sequences (in contrast to predictions) into the decoder.
- When implementing the encoder and the decoder, we can use multilayer RNNs.
- We can use masks to filter out irrelevant computations, such as when calculating the loss.
- BLEU is a popular measure for evaluating output sequences by matching $n$-grams between the predicted sequence and the label sequence.
- Can you adjust the hyperparameters to improve the translation results?
- Rerun the experiment without using masks in the loss calculation. What results do you observe? Why?
- If the encoder and the decoder differ in the number of layers or the number of hidden units, how can we initialize the hidden state of the decoder?
- In training, replace teacher forcing with feeding the prediction at the previous time step into the decoder. How does this influence the performance?
- Rerun the experiment by replacing GRU with LSTM.
- Are there any other ways to design the output layer of the decoder?
:begin_tab:mxnet
Discussions
:end_tab:
:begin_tab:pytorch
Discussions
:end_tab:
:begin_tab:tensorflow
Discussions
:end_tab: