Correct typos and grammar for layers documentation (fastai#2409)
enzoampil authored and sgugger committed Nov 14, 2019
1 parent 22b32aa commit 8013797
Showing 2 changed files with 5 additions and 5 deletions.
8 changes: 4 additions & 4 deletions docs_src/layers.ipynb
@@ -912,9 +912,9 @@
"source": [
"Create an instance of `func` with `args` and `kwargs`. When passing an output and target, it\n",
"- puts `axis` first in output and target with a transpose\n",
"- casts the target to `float` is `floatify=True`\n",
"- casts the target to `float` if `floatify=True`\n",
"- squeezes the `output` to two dimensions if `is_2d`, otherwise one dimension, squeezes the target to one dimension\n",
"- applied the instance of `func`."
"- applies the instance of `func`."
]
},
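A minimal sketch of the wrapping behavior that cell describes, under a hypothetical name (`FlatLossSketch`); this is not fastai's actual class, just one way to realize the transpose/cast/flatten/apply steps listed above:

```python
import torch
import torch.nn as nn

# Hedged sketch (hypothetical class, not fastai's implementation):
# instantiate `func` with `args`/`kwargs`, move `axis` to the end with a transpose,
# optionally cast the target to float, flatten both tensors, then apply the loss.
class FlatLossSketch:
    def __init__(self, func, *args, axis=-1, floatify=False, is_2d=True, **kwargs):
        self.func = func(*args, **kwargs)
        self.axis, self.floatify, self.is_2d = axis, floatify, is_2d

    def __call__(self, output, target):
        output = output.transpose(self.axis, -1).contiguous()
        target = target.transpose(self.axis, -1).contiguous()
        if self.floatify: target = target.float()
        output = output.view(-1, output.shape[-1]) if self.is_2d else output.view(-1)
        return self.func(output, target.view(-1))

# e.g. cross-entropy applied over the channel axis of a segmentation output (B, C, H, W)
seg_loss = FlatLossSketch(nn.CrossEntropyLoss, axis=1)
```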
{
@@ -1138,7 +1138,7 @@
"source": [
"The [`bn_drop_lin`](/layers.html#bn_drop_lin) function returns a sequence of [batch normalization](https://arxiv.org/abs/1502.03167), [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) and a linear layer. This custom layer is usually used at the end of a model. \n",
"\n",
"`n_in` represents the number of size of the input `n_out` the size of the output, `bn` whether we want batch norm or not, `p` is how much dropout and `actn` is an optional parameter to add an activation function at the end."
"`n_in` represents the size of the input, `n_out` the size of the output, `bn` whether we want batch norm or not, `p` how much dropout, and `actn` (optional parameter) adds an activation function at the end."
]
},
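As one illustration of that description, a hedged re-implementation sketch (named `bn_drop_lin_sketch` so as not to imply it is fastai's exact code) that builds the batchnorm-dropout-linear sequence with an optional trailing activation:

```python
import torch.nn as nn

# Hedged sketch of a bn_drop_lin-style helper: BatchNorm1d -> Dropout -> Linear,
# with an optional activation appended. Parameter names follow the description above.
def bn_drop_lin_sketch(n_in, n_out, bn=True, p=0., actn=None):
    layers = [nn.BatchNorm1d(n_in)] if bn else []
    if p != 0: layers.append(nn.Dropout(p))
    layers.append(nn.Linear(n_in, n_out))
    if actn is not None: layers.append(actn)
    return layers

# e.g. a simple two-stage classification head built from such blocks
head = nn.Sequential(
    *bn_drop_lin_sketch(1024, 512, bn=True, p=0.25, actn=nn.ReLU(inplace=True)),
    *bn_drop_lin_sketch(512, 10, bn=True, p=0.5),
)
```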
{
@@ -1235,7 +1235,7 @@
"source": [
"The [`conv_layer`](/layers.html#conv_layer) function returns a sequence of [nn.Conv2D](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d), [BatchNorm](https://arxiv.org/abs/1502.03167) and a ReLU or [leaky RELU](https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf) activation function.\n",
"\n",
"`n_in` represents the number of size of the input `n_out` the size of the output, `ks` kernel size, `stride` the stride with which we want to apply the convolutions. `bias` will decide if they have bias or not (if None, defaults to True unless using batchnorm). `norm_type` selects type of normalization (or `None`). If `leaky` is None, the activation is a standard `ReLU`, otherwise it's a `LeakyReLU` of slope `leaky`. Finally if `transpose=True`, the convolution is replaced by a `ConvTranspose2D`."
"`n_in` represents the size of the input, `n_out` the size of the output, `ks` the kernel size, `stride` the stride with which we want to apply the convolutions. `bias` will decide if they have bias or not (if None, defaults to True unless using batchnorm). `norm_type` selects the type of normalization (or `None`). If `leaky` is None, the activation is a standard `ReLU`, otherwise it's a `LeakyReLU` of slope `leaky`. Finally if `transpose=True`, the convolution is replaced by a `ConvTranspose2D`."
]
},
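A hedged sketch of a `conv_layer`-style helper following the parameters described above (simplified: `norm_type` is reduced to a batchnorm on/off flag, and padding is fixed at `ks // 2`); it is not fastai's exact implementation:

```python
import torch.nn as nn

# Sketch: Conv2d (or ConvTranspose2d if transpose=True) -> BatchNorm2d -> ReLU/LeakyReLU.
def conv_layer_sketch(n_in, n_out, ks=3, stride=1, bias=None, use_bn=True,
                      leaky=None, transpose=False):
    if bias is None: bias = not use_bn          # bias only when no batchnorm follows
    conv_cls = nn.ConvTranspose2d if transpose else nn.Conv2d
    conv = conv_cls(n_in, n_out, kernel_size=ks, stride=stride, padding=ks // 2, bias=bias)
    act = nn.ReLU(inplace=True) if leaky is None else nn.LeakyReLU(leaky, inplace=True)
    layers = [conv, nn.BatchNorm2d(n_out), act] if use_bn else [conv, act]
    return nn.Sequential(*layers)

block = conv_layer_sketch(3, 64, ks=3, stride=2, leaky=0.1)  # e.g. a downsampling stem block
```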
{
2 changes: 1 addition & 1 deletion fastai/layers.py
@@ -145,7 +145,7 @@ def extend(self,l): return self.layers.extend(l)
def insert(self,i,l): return self.layers.insert(i,l)

class MergeLayer(Module):
"Merge a shortcut with the result of the module by adding them or concatenating thme if `dense=True`."
"Merge a shortcut with the result of the module by adding them or concatenating them if `dense=True`."
def __init__(self, dense:bool=False): self.dense=dense
def forward(self, x): return torch.cat([x,x.orig], dim=1) if self.dense else (x+x.orig)
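A brief usage illustration of `MergeLayer`'s two modes. In fastai, `x.orig` is attached by `SequentialEx` before the merge runs; here it is set by hand purely to show the shapes (assuming `MergeLayer` is in scope):

```python
import torch

x = torch.randn(2, 8, 4, 4)      # the shortcut input
out = torch.randn(2, 8, 4, 4)    # the block's output
out.orig = x                     # normally done by SequentialEx

add_merge = MergeLayer(dense=False)   # residual-style: out + out.orig
cat_merge = MergeLayer(dense=True)    # DenseNet-style: concat on the channel dim

print(add_merge(out).shape)   # torch.Size([2, 8, 4, 4])
print(cat_merge(out).shape)   # torch.Size([2, 16, 4, 4])
```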

