Update net_layer_blob.md
yosssi committed Jul 12, 2015
1 parent 7e5608f commit 4ccc052
Showing 1 changed file with 1 addition and 1 deletion.
docs/tutorial/net_layer_blob.md

@@ -19,7 +19,7 @@ Blobs conceal the computational and mental overhead of mixed CPU/GPU operation b
 The conventional blob dimensions for batches of image data are number N x channel K x height H x width W. Blob memory is row-major in layout, so the last / rightmost dimension changes fastest. For example, in a 4D blob, the value at index (n, k, h, w) is physically located at index ((n * K + k) * H + h) * W + w.
 
-- Number / N is the batch size of the data. Batch processing achieves better throughput for communication and device processing. For an ImageNet training batch of 256 images B = 256.
+- Number / N is the batch size of the data. Batch processing achieves better throughput for communication and device processing. For an ImageNet training batch of 256 images N = 256.
 - Channel / K is the feature dimension e.g. for RGB images K = 3.
 
 Note that although many blobs in Caffe examples are 4D with axes for image applications, it is totally valid to use blobs for non-image applications. For example, if you simply need fully-connected networks like the conventional multi-layer perceptron, use 2D blobs (shape (N, D)) and call the InnerProductLayer (which we will cover soon).
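The context above restates the row-major indexing rule: the flat offset of (n, k, h, w) in an N x K x H x W blob is ((n * K + k) * H + h) * W + w. As a quick sanity check of that formula, here is a minimal standalone C++ sketch; `blob_offset` is a hypothetical helper written for illustration, not Caffe's actual Blob API, and the 227 x 227 input size is an assumption borrowed from typical ImageNet models:

```cpp
#include <cassert>
#include <cstdio>

// Hypothetical helper (not Caffe's Blob API): flat row-major offset of
// index (n, k, h, w) in an N x K x H x W blob, mirroring the formula
// ((n * K + k) * H + h) * W + w from the tutorial text.
int blob_offset(int n, int k, int h, int w, int K, int H, int W) {
  return ((n * K + k) * H + h) * W + w;
}

int main() {
  // Assumed example: a batch of N = 256 RGB images (K = 3), 227 x 227 pixels.
  const int K = 3, H = 227, W = 227;
  // Offset of image n = 1, green channel k = 1, pixel (h, w) = (0, 0):
  int off = blob_offset(1, 1, 0, 0, K, H, W);
  // One full image (K * H * W values) plus one channel plane (H * W).
  assert(off == K * H * W + H * W);
  printf("offset of (1, 1, 0, 0) = %d\n", off);
  return 0;
}
```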
