clean up docs
the-moliver committed Apr 29, 2015
1 parent 3d028fb commit 7dbe327
Showing 3 changed files with 10 additions and 9 deletions.
2 changes: 1 addition & 1 deletion docs/sources/activations.md
@@ -28,7 +28,7 @@ You can also specify a desired `target` for the average activation of hidden units
This is accomplished by a generalized KL divergence penalty on the activations. The weighting of this penalty is determined by `beta`:

```python
-model.add(Activation('tanh', target=.05, beta=.1))
+model.add(Activation('relu', target=.05, beta=.1))
```
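
For context, here is a minimal sketch of how the penalty above might be used in a small autoencoder-style model. It is an editor's illustration, not part of the commit, and assumes the 2015-era Keras API (`Sequential`, `Dense(input_dim, output_dim)`) together with the `target`/`beta` keywords documented here:

```python
# Sketch only: hidden activations are pushed toward an average of 0.05
# via the generalized KL divergence penalty weighted by beta.
from keras.models import Sequential
from keras.layers.core import Dense, Activation

model = Sequential()
model.add(Dense(784, 256))
model.add(Activation('relu', target=.05, beta=.1))  # sparsity penalty on the hidden layer
model.add(Dense(256, 784))
model.add(Activation('sigmoid'))
model.compile(loss='mean_squared_error', optimizer='rmsprop')
```

Larger `beta` values weight the activation penalty more heavily relative to the main loss, pushing the average hidden activation harder toward `target`.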

## Available activations
9 changes: 5 additions & 4 deletions docs/sources/constraints.md
@@ -4,16 +4,17 @@

Constraints allow setting limits on particular sets of parameters during optimization.

-A constraint is initialized with the value of the constraint: `maxnorm(1)`
+A constraint is initialized with the value of the constraint.
+For example, `maxnorm(3)` will constrain the weight vector of each hidden unit to have a maximum norm of 3.
The keyword arguments used for passing constraints to parameters in a layer will depend on the layer.
-For weights in the `Dense` layer it is simply `W_constraint`
-For biases in the `Dense` layer it is simply `b_constraint`
+For weights in the `Dense` layer it is simply `W_constraint`.
+For biases in the `Dense` layer it is simply `b_constraint`.

```python
model.add(Dense(64, 64, W_constraint = maxnorm(2)))
```
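
Purely as an illustrative sketch (not part of this commit), the same layer could constrain its biases as well, assuming the `maxnorm` and `nonneg` constraints listed below are importable from `keras.constraints`:

```python
# Sketch only: cap each hidden unit's weight norm at 2 and keep biases non-negative.
from keras.models import Sequential
from keras.layers.core import Dense
from keras.constraints import maxnorm, nonneg

model = Sequential()
model.add(Dense(64, 64, W_constraint=maxnorm(2), b_constraint=nonneg()))
```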

-## Available penalties
+## Available constraints

- __maxnorm__: maximum-norm constraint
- __nonneg__: non-negative constraint
8 changes: 4 additions & 4 deletions docs/sources/regularizers.md
@@ -4,10 +4,10 @@

Regularizers allow the use of penalties on particular sets of parameters during optimization.

-A penalty is initialized with its weight during optimization: `l1(.05)`
+A penalty is initialized with its weight during optimization: `l1(.05)`.
The keyword arguments used for passing penalties to parameters in a layer will depend on the layer.
-For weights in the `Dense` layer it is simply `W_regularizer`
-For biases in the `Dense` layer it is simply `b_regularizer`
+For weights in the `Dense` layer it is simply `W_regularizer`.
+For biases in the `Dense` layer it is simply `b_regularizer`.

```python
model.add(Dense(64, 64, W_regularizer = l2(.01)))
```

@@ -16,4 +16,4 @@

## Available penalties

- __l1__: L1 regularization penalty, also known as LASSO
-- __l2__: L2 regularization penalty, also known as weight decay
+- __l2__: L2 regularization penalty, also known as weight decay, or Ridge
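
As a closing illustration (again an editor's sketch, not from the commit), both penalties can be combined on one `Dense` layer, assuming `l1` and `l2` are importable from `keras.regularizers` as the usage above suggests:

```python
# Sketch only: L2 penalty on the weights plus a small L1 penalty on the biases.
from keras.models import Sequential
from keras.layers.core import Dense
from keras.regularizers import l1, l2

model = Sequential()
model.add(Dense(64, 64, W_regularizer=l2(.01), b_regularizer=l1(.001)))
```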
