Merge branch 'master' of github.com:cs231n/cs231n.github.io
karpathy committed Feb 7, 2016
commit 7e061e7 (2 parents: 3468db8 + a12d545)
Showing 3 changed files with 7 additions and 7 deletions.
assignments2016/assignment2.md (2 changes: 1 addition & 1 deletion)
@@ -99,7 +99,7 @@ assumes that your virtual environment is named `.env`.
### Submitting your work:
Whether you work on the assignment locally or using Terminal, once you are done
working run the `collectSubmission.sh` script; this will produce a file called
-`assignment2.zip`. Upload this file to your dropbox on
+`assignment2.zip`. Upload this file under the Assignments tab on
[the coursework](https://coursework.stanford.edu/portal/site/W15-CS-231N-01/)
page for the course.

convolutional-networks.md (4 changes: 2 additions & 2 deletions)
@@ -153,7 +153,7 @@ Remember that in numpy, the operation `*` above denotes elementwise multiplication
- `V[0,1,1] = np.sum(X[:5,2:7,:] * W1) + b1` (example of going along y)
- `V[2,3,1] = np.sum(X[4:9,6:11,:] * W1) + b1` (or along both)

-where we see that we are indexing into the second depth dimension in `V` (at index 1) because we are computing the second activation map, and that a different set of parameters (`W1`) is now used. In the example above, we are for brevity leaving out some of the other operatations the Conv Layer would perform to fill the other parts of the output array `V`. Additioanlly, recall that these activation maps are often followed elementwise through an activation function such as ReLU, but this is not shown here.
+where we see that we are indexing into the second depth dimension in `V` (at index 1) because we are computing the second activation map, and that a different set of parameters (`W1`) is now used. In the example above, we are for brevity leaving out some of the other operations the Conv Layer would perform to fill the other parts of the output array `V`. Additionally, recall that these activation maps are often passed elementwise through an activation function such as ReLU, but this is not shown here.
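
For concreteness, here is a minimal numpy sketch of the loop that would fill both activation maps of `V` in this way. The shapes (an 11x11x4 input, two 5x5x4 filters, stride 2, no zero-padding), the variable names, and the bias values are assumptions chosen to match the indexing in the examples above; this is a sketch, not code from the notes.

```python
import numpy as np

# Assumed for illustration: 11x11x4 input, two 5x5x4 filters, stride 2, no padding.
X = np.random.randn(11, 11, 4)    # input volume
W0 = np.random.randn(5, 5, 4)     # filter for the first activation map
W1 = np.random.randn(5, 5, 4)     # filter for the second activation map
b0, b1 = 1.0, 0.0                 # biases (arbitrary values)
F, S = 5, 2                       # filter size and stride
out = (X.shape[0] - F) // S + 1   # spatial size of the output (4 here)
V = np.zeros((out, out, 2))       # output volume with two depth slices

for i in range(out):
    for j in range(out):
        patch = X[i*S:i*S+F, j*S:j*S+F, :]     # local region of the input
        V[i, j, 0] = np.sum(patch * W0) + b0   # first activation map uses (W0, b0)
        V[i, j, 1] = np.sum(patch * W1) + b1   # second activation map uses (W1, b1)
```

Each depth slice of `V` reuses its own single filter at every spatial position (parameter sharing).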

**Summary**. To summarize, the Conv Layer:

@@ -240,7 +240,7 @@ Neurons in a fully connected layer have full connections to all activations in the previous layer
<a name='convert'></a>
#### Converting FC layers to CONV layers

-It is worth noting that the only difference between FC and CONV layers is that the neurons in the CONV layer are connected only to a local region in the input, and that many of the neurons in a CONV volume share neurons. However, the neurons in both layers still compute dot products, so their functional form is identical. Therefore, it turns out that it's possible to convert between FC and CONV layers:
+It is worth noting that the only difference between FC and CONV layers is that the neurons in the CONV layer are connected only to a local region in the input, and that many of the neurons in a CONV volume share parameters. However, the neurons in both layers still compute dot products, so their functional form is identical. Therefore, it turns out that it's possible to convert between FC and CONV layers:

- For any CONV layer there is an FC layer that implements the same forward function. The weight matrix would be a large matrix that is mostly zero except at certain blocks (due to local connectivity), where the weights in many of the blocks are equal (due to parameter sharing).
- Conversely, any FC layer can be converted to a CONV layer. For example, an FC layer with \\(K = 4096\\) that is looking at some input volume of size \\(7 \times 7 \times 512\\) can be equivalently expressed as a CONV layer with \\(F = 7, P = 0, S = 1, K = 4096\\). In other words, we are setting the filter size to be exactly the size of the input volume, and hence the output will simply be \\(1 \times 1 \times 4096\\) since only a single depth column "fits" across the input volume, giving an identical result to the initial FC layer, as the sketch below illustrates.
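
To make the second bullet concrete, below is a rough numpy sketch of converting such an FC layer into the equivalent CONV layer. The names `W_fc`, `b_fc`, `x`, the random data, and the explicit Python loop are illustrative assumptions, not code from the notes or the assignments.

```python
import numpy as np

# Hypothetical FC layer with K = 4096 outputs looking at a 7x7x512 input volume.
W_fc = np.random.randn(4096, 7 * 7 * 512)   # FC weight matrix
b_fc = np.random.randn(4096)                # FC biases
x = np.random.randn(7, 7, 512)              # input volume

# FC forward pass: flatten the input and take one big dot product.
out_fc = W_fc.dot(x.reshape(-1)) + b_fc     # shape (4096,)

# Equivalent CONV layer with F = 7, P = 0, S = 1, K = 4096: each row of W_fc
# is reinterpreted as one 7x7x512 filter, and the output volume is 1x1x4096
# because only a single depth column "fits" across the input.
W_conv = W_fc.reshape(4096, 7, 7, 512)
out_conv = np.zeros((1, 1, 4096))
for k in range(4096):
    out_conv[0, 0, k] = np.sum(x * W_conv[k]) + b_fc[k]

assert np.allclose(out_fc, out_conv.reshape(-1))
```

The reshape is all that is needed because both layers compute the same dot products; only the interpretation of the weights as filters changes.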
python-numpy-tutorial.md (8 changes: 4 additions & 4 deletions)
@@ -183,7 +183,7 @@ print xs[-1] # Negative indices count from the end of the list; prints "2"
xs[2] = 'foo' # Lists can contain elements of different types
print xs # Prints "[3, 1, 'foo']"
xs.append('bar') # Add a new element to the end of the list
-print xs # Prints
+print xs # Prints "[3, 1, 'foo', 'bar']"
x = xs.pop() # Remove and return the last element of the list
print x, xs # Prints "bar [3, 1, 'foo']"
```
@@ -203,7 +203,7 @@ print nums[:2] # Get a slice from the start to index 2 (exclusive); prints "[0, 1]"
print nums[:] # Get a slice of the whole list; prints "[0, 1, 2, 3, 4]"
print nums[:-1] # Slice indices can be negative; prints "[0, 1, 2, 3]"
nums[2:4] = [8, 9] # Assign a new sublist to a slice
-print nums # Prints "[0, 1, 8, 8, 4]"
+print nums # Prints "[0, 1, 8, 9, 4]"
```
We will see slicing again in the context of numpy arrays.

@@ -385,9 +385,9 @@ We will often define functions to take optional keyword arguments, like this:
```python
def hello(name, loud=False):
    if loud:
-        print 'HELLO, %s' % name.upper()
+        print 'HELLO, %s!' % name.upper()
    else:
-        print 'Hello, %s!' % name
+        print 'Hello, %s' % name

hello('Bob') # Prints "Hello, Bob"
hello('Fred', loud=True) # Prints "HELLO, FRED!"
