Update 23_what-is-dynamic-padding.srt
nuass authored Feb 20, 2023
1 parent bf69b15 commit ca1ee22
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions subtitles/zh-CN/23_what-is-dynamic-padding.srt
@@ -205,7 +205,7 @@ to make all samples of length 128.

42
00:01:46,530 --> 00:01:48,360
-结果,如果我们传递这个数据集
+最后,如果我们传递这个数据集
As a result, if we pass this dataset

43
@@ -215,7 +215,7 @@ to a PyTorch DataLoader,

44
00:01:50,520 --> 00:01:55,503
-我们得到形状批量大小的批次,这里是 16,乘以 128
+我们得到形状为batch_size乘以16乘以128的批次
we get batches of shape batch size, here 16, by 128.

45
@@ -230,7 +230,7 @@ we must defer the padding to the batch preparation,

47
00:02:01,440 --> 00:02:04,740
-所以我们从标记化函数中删除了那部分
+所以我们从标记函数中删除了那部分
so we remove that part from our tokenize function.

48
@@ -310,12 +310,12 @@ all way below the 128 from before.

63
00:02:42,660 --> 00:02:44,820
-动态批处理几乎总是更快
+动态批处理几乎在CPU和GPU上更快,
Dynamic batching will almost always be faster

64
00:02:44,820 --> 00:02:47,913
-在 CPU 和 GPU 上,所以如果可以的话你应该应用它。
+所以如果可以的话你应该应用它。
on CPUs and GPUs, so you should apply it if you can.

65
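For context, the dynamic padding these subtitles describe (padding each batch only to its own longest sample, instead of padding every sample to a fixed length of 128) can be sketched in plain Python. The `dynamic_pad_collate` function and `PAD_ID` below are illustrative stand-ins, not the Hugging Face `DataCollatorWithPadding` API:

```python
# Minimal sketch of dynamic padding: instead of padding every tokenized
# sample to a fixed length (128 in the video), pad each batch only to the
# length of its longest member. PAD_ID is a hypothetical padding token id.

PAD_ID = 0

def dynamic_pad_collate(batch):
    """Pad every sequence in `batch` to the batch's own max length."""
    max_len = max(len(seq) for seq in batch)
    return [seq + [PAD_ID] * (max_len - len(seq)) for seq in batch]

# Three tokenized samples of different lengths:
batch = [[101, 7592, 102], [101, 2088, 2003, 2307, 102], [101, 102]]
padded = dynamic_pad_collate(batch)
# Every row is now padded to length 5 (the longest sample in this batch),
# well below a fixed length of 128.
```

In practice a function like this is passed as the `collate_fn` of a PyTorch `DataLoader`, which is why the subtitles say the padding is deferred from the tokenize function to batch preparation.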
