padding fix in the adaptor layer (facebookresearch#4613)
uralik authored Jul 29, 2022
1 parent 0c5731f · commit 4fe8583
Showing 1 changed file with 4 additions and 0 deletions.
fairseq/models/speech_to_text/xm_transformer.py (+4, -0)
@@ -86,6 +86,9 @@ def forward(self, x, padding_mask: Optional[torch.Tensor]):
         x = x + 0.5 * self.proj(x)
         x = self.proj_ln(x)
 
+        if padding_mask is not None:
+            x = utils.index_put(x, padding_mask.T, 0)
+
         # T x B x C -> B x C x T
         x = x.transpose(0, 1).transpose(1, 2)
         out_lens = None
@@ -108,6 +111,7 @@ def forward(self, x, padding_mask: Optional[torch.Tensor]):
         out_padding_mask = None
         if padding_mask is not None:
             out_padding_mask = lengths_to_padding_mask(out_lens.long())
+            x = utils.index_put(x, out_padding_mask.T, 0)
         return x, out_padding_mask


