Commit

Typo
GraemeMalcolm committed Apr 27, 2021
1 parent ea059e4 commit 88e2265
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion 05a - Deep Neural Networks (PyTorch).ipynb
@@ -395,7 +395,7 @@
"source": [
"## Evaluate model performance\n",
"\n",
"So, is the model any good? The raw accuracy reported from the validation data would seem to indicate that it predicts pretty well; but it's typically useful to dig a little deeper and compare the predictions for each possible class. A common way to visualize the performace of a classification model is to create a *confusion matrix* that shows a crosstab of correct and incorrect predictions for each class."
"So, is the model any good? The raw accuracy reported from the validation data would seem to indicate that it predicts pretty well; but it's typically useful to dig a little deeper and compare the predictions for each possible class. A common way to visualize the performance of a classification model is to create a *confusion matrix* that shows a crosstab of correct and incorrect predictions for each class."
]
},
{
2 changes: 1 addition & 1 deletion 05a - Deep Neural Networks (TensorFlow).ipynb
@@ -319,7 +319,7 @@
"source": [
"## Evaluate model performance\n",
"\n",
"So, is the model any good? The raw accuracy reported from the validation data would seem to indicate that it predicts pretty well; but it's typically useful to dig a little deeper and compare the predictions for each possible class. A common way to visualize the performace of a classification model is to create a *confusion matrix* that shows a crosstab of correct and incorrect predictions for each class."
"So, is the model any good? The raw accuracy reported from the validation data would seem to indicate that it predicts pretty well; but it's typically useful to dig a little deeper and compare the predictions for each possible class. A common way to visualize the performance of a classification model is to create a *confusion matrix* that shows a crosstab of correct and incorrect predictions for each class."
]
},
{
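Both notebooks describe the confusion matrix only in prose here. As a minimal sketch of the crosstab the corrected sentence refers to, using hypothetical labels and plain NumPy (not the notebooks' own code):

```python
import numpy as np

# Hypothetical true and predicted labels for a 3-class problem
# (illustrative only; not taken from the notebooks).
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

num_classes = 3

# Crosstab of correct and incorrect predictions:
# rows are true classes, columns are predicted classes.
cm = np.zeros((num_classes, num_classes), dtype=int)
np.add.at(cm, (y_true, y_pred), 1)

# Diagonal cells count correct predictions; off-diagonal cells
# show which classes the model confuses with one another.
print(cm)
```

In practice, `sklearn.metrics.confusion_matrix(y_true, y_pred)` produces the same matrix, which is commonly rendered as a heatmap for visual inspection.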
