Fix typos in doc comments.
afck committed Jun 23, 2015
1 parent 0fd8ce6 commit dba736f
Showing 6 changed files with 31 additions and 31 deletions.
6 changes: 3 additions & 3 deletions src/include/fann.h
@@ -31,8 +31,8 @@ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
and executed by <fann_run>.
All of this can be done without much knowledge of the internals of ANNs, although the ANNs created will
-still be powerfull and effective. If you have more knowledge about ANNs, and desire more control, almost
-every part of the ANNs can be parametized to create specialized and highly optimal ANNs.
+still be powerful and effective. If you have more knowledge about ANNs, and desire more control, almost
+every part of the ANNs can be parametrized to create specialized and highly optimal ANNs.
*/
/* Group: Creation, Destruction & Execution */

@@ -259,7 +259,7 @@ FANN_EXTERNAL struct fann *FANN_API fann_create_shortcut(unsigned int num_layers
FANN_EXTERNAL struct fann *FANN_API fann_create_shortcut_array(unsigned int num_layers,
const unsigned int *layers);
/* Function: fann_destroy
-Destroys the entire network and properly freeing all the associated memmory.
+Destroys the entire network and properly freeing all the associated memory.
This function appears in FANN >= 1.0.0.
*/
10 changes: 5 additions & 5 deletions src/include/fann_cascade.h
@@ -27,10 +27,10 @@ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
training have also proved better at solving some problems.
The basic idea of cascade training is that a number of candidate neurons are trained separate from the
-real network, then the most promissing of these candidate neurons is inserted into the neural network.
+real network, then the most promising of these candidate neurons is inserted into the neural network.
Then the output connections are trained and new candidate neurons is prepared. The candidate neurons are
-created as shorcut connected neurons in a new hidden layer, which means that the final neural network
-will consist of a number of hidden layers with one shorcut connected neuron in each.
+created as shortcut connected neurons in a new hidden layer, which means that the final neural network
+will consist of a number of hidden layers with one shortcut connected neuron in each.
*/

/* Group: Cascade Training */
@@ -102,7 +102,7 @@ FANN_EXTERNAL void FANN_API fann_cascadetrain_on_file(struct fann *ann, const ch
If the cascade output change fraction is low, the output connections will be trained more and if the
fraction is high they will be trained less.
-The default cascade output change fraction is 0.01, which is equalent to a 1% change in MSE.
+The default cascade output change fraction is 0.01, which is equivalent to a 1% change in MSE.
See also:
<fann_set_cascade_output_change_fraction>, <fann_get_MSE>, <fann_get_cascade_output_stagnation_epochs>
@@ -169,7 +169,7 @@ FANN_EXTERNAL void FANN_API fann_set_cascade_output_stagnation_epochs(struct fan
If the cascade candidate change fraction is low, the candidate neurons will be trained more and if the
fraction is high they will be trained less.
-The default cascade candidate change fraction is 0.01, which is equalent to a 1% change in MSE.
+The default cascade candidate change fraction is 0.01, which is equivalent to a 1% change in MSE.
See also:
<fann_set_cascade_candidate_change_fraction>, <fann_get_MSE>, <fann_get_cascade_candidate_stagnation_epochs>
14 changes: 7 additions & 7 deletions src/include/fann_cpp.h
@@ -100,8 +100,8 @@ namespace FANN
ERRORFUNC_LINEAR - Standard linear error function.
ERRORFUNC_TANH - Tanh error function, usually better
-but can require a lower learning rate. This error function agressively targets outputs that
-differ much from the desired, while not targetting outputs that only differ a little that much.
+but can require a lower learning rate. This error function aggressively targets outputs that
+differ much from the desired, while not targeting outputs that only differ a little that much.
This activation function is not recommended for cascade training and incremental training.
See also:
@@ -327,7 +327,7 @@ namespace FANN
> unsigned int max_epochs, unsigned int epochs_between_reports,
> float desired_error, unsigned int epochs, void *user_data);
-The callback can be set by using <neural_net::set_callback> and is very usefull for doing custom
+The callback can be set by using <neural_net::set_callback> and is very useful for doing custom
things during training. It is recommended to use this function when implementing custom
training procedures, or when visualizing the training in a GUI etc. The parameters which the
callback function takes is the parameters given to the <neural_net::train_on_data>, plus an epochs
@@ -474,7 +474,7 @@ namespace FANN
Saves the training structure to a fixed point data file.
-This function is very usefull for testing the quality of a fixed point network.
+This function is very useful for testing the quality of a fixed point network.
Return:
The function returns true on success and false on failure.
@@ -1246,7 +1246,7 @@ namespace FANN
But it is saved in fixed point format no matter which
format it is currently in.
-This is usefull for training a network in floating points,
+This is useful for training a network in floating points,
and then later executing it in fixed point.
The function returns the bit position of the fix point, which
@@ -1743,7 +1743,7 @@ namespace FANN
The steepness of an activation function says something about how fast the activation function
goes from the minimum to the maximum. A high value for the activation function will also
-give a more agressive training.
+give a more aggressive training.
When training neural networks where the output values should be at the extremes (usually 0 and 1,
depending on the activation function), a steep activation function can be used (e.g. 1.0).
@@ -1779,7 +1779,7 @@ namespace FANN
The steepness of an activation function says something about how fast the activation function
goes from the minimum to the maximum. A high value for the activation function will also
-give a more agressive training.
+give a more aggressive training.
When training neural networks where the output values should be at the extremes (usually 0 and 1,
depending on the activation function), a steep activation function can be used (e.g. 1.0).
10 changes: 5 additions & 5 deletions src/include/fann_data.h
@@ -264,8 +264,8 @@ static char const *const FANN_ACTIVATIONFUNC_NAMES[] = {
FANN_ERRORFUNC_LINEAR - Standard linear error function.
FANN_ERRORFUNC_TANH - Tanh error function, usually better
-but can require a lower learning rate. This error function agressively targets outputs that
-differ much from the desired, while not targetting outputs that only differ a little that much.
+but can require a lower learning rate. This error function aggressively targets outputs that
+differ much from the desired, while not targeting outputs that only differ a little that much.
This activation function is not recommended for cascade training and incremental training.
See also:
@@ -377,7 +377,7 @@ struct fann_train_data;
> unsigned int epochs_between_reports,
> float desired_error, unsigned int epochs);
-The callback can be set by using <fann_set_callback> and is very usefull for doing custom
+The callback can be set by using <fann_set_callback> and is very useful for doing custom
things during training. It is recommended to use this function when implementing custom
training procedures, or when visualizing the training in a GUI etc. The parameters which the
callback function takes is the parameters given to the <fann_train_on_data>, plus an epochs
@@ -513,7 +513,7 @@ struct fann
struct fann_layer *last_layer;

/* Total number of neurons.
-* very usefull, because the actual neurons are allocated in one long array
+* very useful, because the actual neurons are allocated in one long array
*/
unsigned int total_neurons;

@@ -563,7 +563,7 @@ struct fann
#endif

/* Total number of connections.
-* very usefull, because the actual connections
+* very useful, because the actual connections
* are allocated in one long array
*/
unsigned int total_connections;
4 changes: 2 additions & 2 deletions src/include/fann_io.h
@@ -69,7 +69,7 @@ FANN_EXTERNAL int FANN_API fann_save(struct fann *ann, const char *configuration
But it is saved in fixed point format no matter which
format it is currently in.
-This is usefull for training a network in floating points,
+This is useful for training a network in floating points,
and then later executing it in fixed point.
The function returns the bit position of the fix point, which
@@ -80,7 +80,7 @@ FANN_EXTERNAL int FANN_API fann_save(struct fann *ann, const char *configuration
A negative value indicates very low precision, and a very
strong possibility for overflow.
(the actual fix point will be set to 0, since a negative
-fix point does not make sence).
+fix point does not make sense).
Generally, a fix point lower than 6 is bad, and should be avoided.
The best way to avoid this, is to have less connections to each neuron,
18 changes: 9 additions & 9 deletions src/include/fann_train.h
@@ -25,7 +25,7 @@ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
There are many different ways of training neural networks and the FANN library supports
a number of different approaches.
-Two fundementally different approaches are the most commonly used:
+Two fundamentally different approaches are the most commonly used:
Fixed topology training - The size and topology of the ANN is determined in advance
and the training alters the weights in order to minimize the difference between
@@ -234,15 +234,15 @@ FANN_EXTERNAL float FANN_API fann_test_data(struct fann *ann, struct fann_train_
The file must be formatted like:
>num_train_data num_input num_output
->inputdata seperated by space
->outputdata seperated by space
+>inputdata separated by space
+>outputdata separated by space
>
>.
>.
>.
>
->inputdata seperated by space
->outputdata seperated by space
+>inputdata separated by space
+>outputdata separated by space
See also:
<fann_train_on_data>, <fann_destroy_train>, <fann_save_train>
@@ -305,7 +305,7 @@ FANN_EXTERNAL struct fann_train_data * FANN_API fann_create_train_array(unsigned
num_data - The number of training data
num_input - The number of inputs per training data
num_output - The number of ouputs per training data
-user_function - The user suplied function
+user_function - The user supplied function
Parameters for the user function:
num - The number of the training data set
@@ -663,7 +663,7 @@ FANN_EXTERNAL int FANN_API fann_save_train(struct fann_train_data *data, const c
Saves the training structure to a fixed point data file.
-This function is very usefull for testing the quality of a fixed point network.
+This function is very useful for testing the quality of a fixed point network.
Return:
The function returns 0 on success and -1 on failure.
@@ -874,7 +874,7 @@ FANN_EXTERNAL void FANN_API fann_set_activation_function_output(struct fann *ann
The steepness of an activation function says something about how fast the activation function
goes from the minimum to the maximum. A high value for the activation function will also
-give a more agressive training.
+give a more aggressive training.
When training neural networks where the output values should be at the extremes (usually 0 and 1,
depending on the activation function), a steep activation function can be used (e.g. 1.0).
@@ -904,7 +904,7 @@ FANN_EXTERNAL fann_type FANN_API fann_get_activation_steepness(struct fann *ann,
The steepness of an activation function says something about how fast the activation function
goes from the minimum to the maximum. A high value for the activation function will also
-give a more agressive training.
+give a more aggressive training.
When training neural networks where the output values should be at the extremes (usually 0 and 1,
depending on the activation function), a steep activation function can be used (e.g. 1.0).
