Merge pull request cs231n#84 from Vincibean/patch-1
Fixed little typos in convolutional-networks.md
karpathy committed Mar 29, 2016
commit a493a8c (2 parents: 96f1d89 + b9cb6da)
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions convolutional-networks.md
@@ -21,9 +21,9 @@ Table of Contents:

## Convolutional Neural Networks (CNNs / ConvNets)

- Convolutional Neural Networks are very similar to ordinary Neural Networks from the previous chapter: They are made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still express a single differentiable score function: From the raw image pixels on one end to class scores at the other. And they still have a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer and all the tips/tricks we developed for learning regular Neural Networks still apply.
+ Convolutional Neural Networks are very similar to ordinary Neural Networks from the previous chapter: they are made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still expresses a single differentiable score function: from the raw image pixels on one end to class scores at the other. And they still have a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer and all the tips/tricks we developed for learning regular Neural Networks still apply.

- So what does change? ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These then make the forward function more efficient to implement and vastly reduces the amount of parameters in the network.
+ So what does change? ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These then make the forward function more efficient to implement and vastly reduce the amount of parameters in the network.

<a name='overview'></a>
### Architecture Overview
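For context on the paragraph touched by this diff, the neuron computation it describes (a dot product of learnable weights with the inputs, a bias, and an optional non-linearity) can be sketched in a few lines of numpy. This is only an illustrative sketch; the function and variable names are made up here and do not come from the notes or the commit.

```python
import numpy as np

# A single neuron as described above: a dot product of learnable weights
# with the inputs, plus a bias, optionally followed by a non-linearity
# (a ReLU is used here purely as an illustration).
def neuron_forward(x, w, b):
    return np.maximum(0.0, np.dot(w, x) + b)

# Toy example: a flattened 32x32x3 image gives 3072 input values,
# so a single fully-connected neuron already carries 3072 weights.
x = np.random.randn(3072)
w = np.random.randn(3072)
b = 0.0
out = neuron_forward(x, w, b)
```

The toy shapes also hint at the second changed sentence: even one fully-connected neuron on a 32x32x3 image needs 3072 weights, which is why encoding the image structure into the architecture, as ConvNets do, reduces the parameter count so sharply.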
