Merge pull request libfann#36 from afck/master
Fix typos in doc comments.
steffennissen committed Jul 11, 2015
2 parents 0fd8ce6 + 47ff9e1 commit 591d3bc
Showing 7 changed files with 79 additions and 79 deletions.
22 changes: 11 additions & 11 deletions src/include/fann.h
@@ -31,8 +31,8 @@ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
and executed by <fann_run>.
All of this can be done without much knowledge of the internals of ANNs, although the ANNs created will
-still be powerfull and effective. If you have more knowledge about ANNs, and desire more control, almost
-every part of the ANNs can be parametized to create specialized and highly optimal ANNs.
+still be powerful and effective. If you have more knowledge about ANNs, and desire more control, almost
+every part of the ANNs can be parametrized to create specialized and highly optimal ANNs.
*/
/* Group: Creation, Destruction & Execution */

@@ -159,7 +159,7 @@ extern "C"
Example:
> // Creating an ANN with 2 input neurons, 1 output neuron,
-> // and two hidden neurons with 8 and 9 neurons
+> // and two hidden layers with 8 and 9 neurons
> struct fann *ann = fann_create_standard(4, 2, 8, 9, 1);
See also:
@@ -175,7 +175,7 @@ FANN_EXTERNAL struct fann *FANN_API fann_create_standard(unsigned int num_layers
Example:
> // Creating an ANN with 2 input neurons, 1 output neuron,
-> // and two hidden neurons with 8 and 9 neurons
+> // and two hidden layers with 8 and 9 neurons
> unsigned int layers[4] = {2, 8, 9, 1};
> struct fann *ann = fann_create_standard_array(4, layers);
@@ -233,7 +233,7 @@ FANN_EXTERNAL struct fann *FANN_API fann_create_sparse_array(float connection_ra
also has shortcut connections.
Shortcut connections are connections that skip layers. A fully connected network with shortcut
-connections, is a network where all neurons are connected to all neurons in later layers.
+connections is a network where all neurons are connected to all neurons in later layers.
Including direct connections from the input layer to the output layer.
See <fann_create_standard> for a description of the parameters.
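As a sketch of what the doc comment above describes (layer sizes are illustrative; compile and link against libfann), a shortcut-connected network can be created, run, and freed like this:

```c
#include <stdio.h>
#include "fann.h"

int main(void)
{
    /* Fully connected shortcut network: 2 inputs, one hidden layer
     * with 3 neurons, 1 output. Every neuron connects to all neurons
     * in later layers, including direct input-to-output connections. */
    struct fann *ann = fann_create_shortcut(3, 2, 3, 1);

    fann_type input[2] = {0.0f, 1.0f};
    fann_type *output = fann_run(ann, input);
    printf("output: %f\n", (double)output[0]);

    fann_destroy(ann);
    return 0;
}
```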
@@ -259,7 +259,7 @@ FANN_EXTERNAL struct fann *FANN_API fann_create_shortcut(unsigned int num_layers
FANN_EXTERNAL struct fann *FANN_API fann_create_shortcut_array(unsigned int num_layers,
const unsigned int *layers);
/* Function: fann_destroy
-Destroys the entire network and properly freeing all the associated memmory.
+Destroys the entire network, properly freeing all the associated memory.
This function appears in FANN >= 1.0.0.
*/
@@ -331,15 +331,15 @@ FANN_EXTERNAL void FANN_API fann_init_weights(struct fann *ann, struct fann_trai
>L 2 / N 6 ...BBA
>L 2 / N 7 ......
-This network have five real neurons and two bias neurons. This gives a total of seven neurons
+This network has five real neurons and two bias neurons. This gives a total of seven neurons
named from 0 to 6. The connections between these neurons can be seen in the matrix. "." is a
place where there is no connection, while a character tells how strong the connection is on a
-scale from a-z. The two real neurons in the hidden layer (neuron 3 and 4 in layer 1) has
-connection from the three neurons in the previous layer as is visible in the first two lines.
-The output neuron (6) has connections form the three neurons in the hidden layer 3 - 5 as is
+scale from a-z. The two real neurons in the hidden layer (neuron 3 and 4 in layer 1) have
+connections from the three neurons in the previous layer as is visible in the first two lines.
+The output neuron (6) has connections from the three neurons in the hidden layer 3 - 5 as is
visible in the fourth line.
-To simplify the matrix output neurons is not visible as neurons that connections can come from,
+To simplify the matrix output neurons are not visible as neurons that connections can come from,
and input and bias neurons are not visible as neurons that connections can go to.
This function appears in FANN >= 1.2.0.
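To reproduce a matrix like the one shown in the doc comment above, a network of the same shape can be printed like this (a sketch; link against libfann):

```c
#include "fann.h"

int main(void)
{
    /* 2 inputs, 2 hidden neurons, 1 output: five real neurons plus
     * two bias neurons, numbered 0-6 as in the doc comment above. */
    struct fann *ann = fann_create_standard(3, 2, 2, 1);

    /* Prints one line per neuron, with connection strengths shown
     * on the a-z scale and "." where no connection exists. */
    fann_print_connections(ann);

    fann_destroy(ann);
    return 0;
}
```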
16 changes: 8 additions & 8 deletions src/include/fann_cascade.h
@@ -22,15 +22,15 @@ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

/* Section: FANN Cascade Training
Cascade training differs from ordinary training in the sense that it starts with an empty neural network
-and then adds neurons one by one, while it trains the neural network. The main benefit of this approach,
+and then adds neurons one by one, while it trains the neural network. The main benefit of this approach
is that you do not have to guess the number of hidden layers and neurons prior to training, but cascade
-training have also proved better at solving some problems.
+training has also proved better at solving some problems.
The basic idea of cascade training is that a number of candidate neurons are trained separate from the
-real network, then the most promissing of these candidate neurons is inserted into the neural network.
-Then the output connections are trained and new candidate neurons is prepared. The candidate neurons are
-created as shorcut connected neurons in a new hidden layer, which means that the final neural network
-will consist of a number of hidden layers with one shorcut connected neuron in each.
+real network, then the most promising of these candidate neurons is inserted into the neural network.
+Then the output connections are trained and new candidate neurons are prepared. The candidate neurons are
+created as shortcut connected neurons in a new hidden layer, which means that the final neural network
+will consist of a number of hidden layers with one shortcut connected neuron in each.
*/
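A minimal sketch of the workflow this section describes, assuming a training file named train.data (the file name, neuron limit, and error target are illustrative):

```c
#include "fann.h"

int main(void)
{
    /* Cascade training starts from a shortcut network with no
     * hidden neurons; candidate neurons are added one at a time. */
    struct fann *ann = fann_create_shortcut(2, 2, 1);

    /* Grow up to 30 hidden neurons, reporting after each one,
     * stopping early if the MSE drops below 0.001. */
    fann_cascadetrain_on_file(ann, "train.data", 30, 1, 0.001f);

    fann_destroy(ann);
    return 0;
}
```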

/* Group: Cascade Training */
@@ -102,7 +102,7 @@ FANN_EXTERNAL void FANN_API fann_cascadetrain_on_file(struct fann *ann, const ch
If the cascade output change fraction is low, the output connections will be trained more and if the
fraction is high they will be trained less.
-The default cascade output change fraction is 0.01, which is equalent to a 1% change in MSE.
+The default cascade output change fraction is 0.01, which is equivalent to a 1% change in MSE.
See also:
<fann_set_cascade_output_change_fraction>, <fann_get_MSE>, <fann_get_cascade_output_stagnation_epochs>
@@ -169,7 +169,7 @@ FANN_EXTERNAL void FANN_API fann_set_cascade_output_stagnation_epochs(struct fan
If the cascade candidate change fraction is low, the candidate neurons will be trained more and if the
fraction is high they will be trained less.
-The default cascade candidate change fraction is 0.01, which is equalent to a 1% change in MSE.
+The default cascade candidate change fraction is 0.01, which is equivalent to a 1% change in MSE.
See also:
<fann_set_cascade_candidate_change_fraction>, <fann_get_MSE>, <fann_get_cascade_candidate_stagnation_epochs>
20 changes: 10 additions & 10 deletions src/include/fann_cpp.h
@@ -100,8 +100,8 @@ namespace FANN
ERRORFUNC_LINEAR - Standard linear error function.
ERRORFUNC_TANH - Tanh error function, usually better
-but can require a lower learning rate. This error function agressively targets outputs that
-differ much from the desired, while not targetting outputs that only differ a little that much.
+but can require a lower learning rate. This error function aggressively targets outputs that
+differ much from the desired, while not targeting outputs that only differ a little that much.
This activation function is not recommended for cascade training and incremental training.
See also:
@@ -143,7 +143,7 @@ namespace FANN
this algorithm, while other more advanced problems will not train very well.
TRAIN_BATCH - Standard backpropagation algorithm, where the weights are updated after
calculating the mean square error for the whole training set. This means that the weights
-are only updated once during a epoch. For this reason some problems, will train slower with
+are only updated once during an epoch. For this reason some problems, will train slower with
this algorithm. But since the mean square error is calculated more correctly than in
incremental training, some problems will reach a better solutions with this algorithm.
TRAIN_RPROP - A more advanced batch training algorithm which achieves good results
@@ -153,7 +153,7 @@ namespace FANN
training algorithm works. The RPROP training algorithm is described by
[Riedmiller and Braun, 1993], but the actual learning algorithm used here is the
iRPROP- training algorithm which is described by [Igel and Husken, 2000] which
-is an variety of the standard RPROP training algorithm.
+is a variant of the standard RPROP training algorithm.
TRAIN_QUICKPROP - A more advanced batch training algorithm which achieves good results
for many problems. The quickprop training algorithm uses the learning_rate parameter
along with other more advanced parameters, but it is only recommended to change these
@@ -327,7 +327,7 @@ namespace FANN
> unsigned int max_epochs, unsigned int epochs_between_reports,
> float desired_error, unsigned int epochs, void *user_data);
-The callback can be set by using <neural_net::set_callback> and is very usefull for doing custom
+The callback can be set by using <neural_net::set_callback> and is very useful for doing custom
things during training. It is recommended to use this function when implementing custom
training procedures, or when visualizing the training in a GUI etc. The parameters which the
callback function takes is the parameters given to the <neural_net::train_on_data>, plus an epochs
@@ -474,7 +474,7 @@ namespace FANN
Saves the training structure to a fixed point data file.
-This function is very usefull for testing the quality of a fixed point network.
+This function is very useful for testing the quality of a fixed point network.
Return:
The function returns true on success and false on failure.
@@ -1246,7 +1246,7 @@ namespace FANN
But it is saved in fixed point format no matter which
format it is currently in.
-This is usefull for training a network in floating points,
+This is useful for training a network in floating points,
and then later executing it in fixed point.
The function returns the bit position of the fix point, which
@@ -1649,7 +1649,7 @@ namespace FANN
When choosing an activation function it is important to note that the activation
functions have different range. FANN::SIGMOID is e.g. in the 0 - 1 range while
-FANN::SIGMOID_SYMMETRIC is in the -1 - 1 range and FANN::LINEAR is unbound.
+FANN::SIGMOID_SYMMETRIC is in the -1 - 1 range and FANN::LINEAR is unbounded.
Information about the individual activation functions is available at <FANN::activation_function_enum>.
@@ -1743,7 +1743,7 @@ namespace FANN
The steepness of an activation function says something about how fast the activation function
goes from the minimum to the maximum. A high value for the activation function will also
-give a more agressive training.
+give a more aggressive training.
When training neural networks where the output values should be at the extremes (usually 0 and 1,
depending on the activation function), a steep activation function can be used (e.g. 1.0).
@@ -1779,7 +1779,7 @@ namespace FANN
The steepness of an activation function says something about how fast the activation function
goes from the minimum to the maximum. A high value for the activation function will also
-give a more agressive training.
+give a more aggressive training.
When training neural networks where the output values should be at the extremes (usually 0 and 1,
depending on the activation function), a steep activation function can be used (e.g. 1.0).
34 changes: 17 additions & 17 deletions src/include/fann_data.h
@@ -24,9 +24,9 @@ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

/* Section: FANN Datatypes
-The two main datatypes used in the fann library is <struct fann>,
+The two main datatypes used in the fann library are <struct fann>,
which represents an artificial neural network, and <struct fann_train_data>,
-which represent training data.
+which represents training data.
*/


@@ -42,27 +42,27 @@ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

/* Enum: fann_train_enum
The Training algorithms used when training on <struct fann_train_data> with functions like
-<fann_train_on_data> or <fann_train_on_file>. The incremental training looks alters the weights
+<fann_train_on_data> or <fann_train_on_file>. The incremental training alters the weights
after each time it is presented an input pattern, while batch only alters the weights once after
it has been presented to all the patterns.
FANN_TRAIN_INCREMENTAL - Standard backpropagation algorithm, where the weights are
updated after each training pattern. This means that the weights are updated many
-times during a single epoch. For this reason some problems, will train very fast with
+times during a single epoch. For this reason some problems will train very fast with
this algorithm, while other more advanced problems will not train very well.
FANN_TRAIN_BATCH - Standard backpropagation algorithm, where the weights are updated after
calculating the mean square error for the whole training set. This means that the weights
-are only updated once during a epoch. For this reason some problems, will train slower with
+are only updated once during an epoch. For this reason some problems will train slower with
this algorithm. But since the mean square error is calculated more correctly than in
-incremental training, some problems will reach a better solutions with this algorithm.
+incremental training, some problems will reach better solutions with this algorithm.
FANN_TRAIN_RPROP - A more advanced batch training algorithm which achieves good results
for many problems. The RPROP training algorithm is adaptive, and does therefore not
use the learning_rate. Some other parameters can however be set to change the way the
RPROP algorithm works, but it is only recommended for users with insight in how the RPROP
training algorithm works. The RPROP training algorithm is described by
[Riedmiller and Braun, 1993], but the actual learning algorithm used here is the
iRPROP- training algorithm which is described by [Igel and Husken, 2000] which
-is an variety of the standard RPROP training algorithm.
+is a variant of the standard RPROP training algorithm.
FANN_TRAIN_QUICKPROP - A more advanced batch training algorithm which achieves good results
for many problems. The quickprop training algorithm uses the learning_rate parameter
along with other more advanced parameters, but it is only recommended to change these
@@ -264,8 +264,8 @@ static char const *const FANN_ACTIVATIONFUNC_NAMES[] = {
FANN_ERRORFUNC_LINEAR - Standard linear error function.
FANN_ERRORFUNC_TANH - Tanh error function, usually better
-but can require a lower learning rate. This error function agressively targets outputs that
-differ much from the desired, while not targetting outputs that only differ a little that much.
+but can require a lower learning rate. This error function aggressively targets outputs that
+differ much from the desired, while not targeting outputs that only differ a little that much.
This activation function is not recommended for cascade training and incremental training.
See also:
@@ -296,8 +296,8 @@ static char const *const FANN_ERRORFUNC_NAMES[] = {
/* Enum: fann_stopfunc_enum
Stop criteria used during training.
-FANN_STOPFUNC_MSE - Stop criteria is Mean Square Error (MSE) value.
-FANN_STOPFUNC_BIT - Stop criteria is number of bits that fail. The number of bits; means the
+FANN_STOPFUNC_MSE - Stop criterion is Mean Square Error (MSE) value.
+FANN_STOPFUNC_BIT - Stop criterion is number of bits that fail. The number of bits; means the
number of output neurons which differ more than the bit fail limit
(see <fann_get_bit_fail_limit>, <fann_set_bit_fail_limit>).
The bits are counted in all of the training data, so this number can be higher than
@@ -377,11 +377,11 @@ struct fann_train_data;
> unsigned int epochs_between_reports,
> float desired_error, unsigned int epochs);
-The callback can be set by using <fann_set_callback> and is very usefull for doing custom
+The callback can be set by using <fann_set_callback> and is very useful for doing custom
things during training. It is recommended to use this function when implementing custom
training procedures, or when visualizing the training in a GUI etc. The parameters which the
-callback function takes is the parameters given to the <fann_train_on_data>, plus an epochs
-parameter which tells how many epochs the training have taken so far.
+callback function takes are the parameters given to <fann_train_on_data>, plus an epochs
+parameter which tells how many epochs the training has taken so far.
The callback function should return an integer, if the callback function returns -1, the training
will terminate.
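A sketch of a callback matching the signature documented above (the early-stop condition and training-file name are illustrative):

```c
#include <stdio.h>
#include "fann.h"

/* Returning -1 from the callback terminates training. */
static int FANN_API print_progress(struct fann *ann,
                                   struct fann_train_data *train,
                                   unsigned int max_epochs,
                                   unsigned int epochs_between_reports,
                                   float desired_error,
                                   unsigned int epochs)
{
    (void)train;
    (void)epochs_between_reports;
    printf("epoch %u/%u: MSE %f\n", epochs, max_epochs,
           (double)fann_get_MSE(ann));
    return fann_get_MSE(ann) < desired_error ? -1 : 0;
}

int main(void)
{
    struct fann *ann = fann_create_standard(3, 2, 3, 1);
    fann_set_callback(ann, print_progress);
    /* "train.data" is a placeholder training-file name. */
    fann_train_on_file(ann, "train.data", 1000, 10, 0.001f);
    fann_destroy(ann);
    return 0;
}
```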
@@ -462,7 +462,7 @@ struct fann_error


/* Struct: struct fann
-The fast artificial neural network(fann) structure.
+The fast artificial neural network (fann) structure.
Data within this structure should never be accessed directly, but only by using the
*fann_get_...* and *fann_set_...* functions.
@@ -513,7 +513,7 @@ struct fann
struct fann_layer *last_layer;

/* Total number of neurons.
-* very usefull, because the actual neurons are allocated in one long array
+* very useful, because the actual neurons are allocated in one long array
*/
unsigned int total_neurons;

@@ -563,7 +563,7 @@ struct fann
#endif

/* Total number of connections.
-* very usefull, because the actual connections
+* very useful, because the actual connections
* are allocated in one long array
*/
unsigned int total_connections;
2 changes: 1 addition & 1 deletion src/include/fann_error.h
@@ -99,7 +99,7 @@ enum fann_errno_enum
If log_file is NULL, no errors will be printed.
-If errdata is NULL, the default log will be set. The default log is the log used when creating
+If errdat is NULL, the default log will be set. The default log is the log used when creating
<struct fann> and <struct fann_data>. This default log will also be the default for all new structs
that are created.
