From 88e22653c3dbcbe4aab7261ad101afe63c80046b Mon Sep 17 00:00:00 2001
From: Graeme Malcolm
Date: Tue, 27 Apr 2021 10:16:51 -0700
Subject: [PATCH] Typo

---
 05a - Deep Neural Networks (PyTorch).ipynb    | 2 +-
 05a - Deep Neural Networks (TensorFlow).ipynb | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/05a - Deep Neural Networks (PyTorch).ipynb b/05a - Deep Neural Networks (PyTorch).ipynb
index 565db70..f7ee679 100644
--- a/05a - Deep Neural Networks (PyTorch).ipynb
+++ b/05a - Deep Neural Networks (PyTorch).ipynb
@@ -395,7 +395,7 @@
    "source": [
     "## Evaluate model performance\n",
     "\n",
-    "So, is the model any good? The raw accuracy reported from the validation data would seem to indicate that it predicts pretty well; but it's typically useful to dig a little deeper and compare the predictions for each possible class. A common way to visualize the performace of a classification model is to create a *confusion matrix* that shows a crosstab of correct and incorrect predictions for each class."
+    "So, is the model any good? The raw accuracy reported from the validation data would seem to indicate that it predicts pretty well; but it's typically useful to dig a little deeper and compare the predictions for each possible class. A common way to visualize the performance of a classification model is to create a *confusion matrix* that shows a crosstab of correct and incorrect predictions for each class."
    ]
   },
   {
diff --git a/05a - Deep Neural Networks (TensorFlow).ipynb b/05a - Deep Neural Networks (TensorFlow).ipynb
index baae4cd..7609641 100644
--- a/05a - Deep Neural Networks (TensorFlow).ipynb
+++ b/05a - Deep Neural Networks (TensorFlow).ipynb
@@ -319,7 +319,7 @@
    "source": [
     "## Evaluate model performance\n",
     "\n",
-    "So, is the model any good? The raw accuracy reported from the validation data would seem to indicate that it predicts pretty well; but it's typically useful to dig a little deeper and compare the predictions for each possible class. A common way to visualize the performace of a classification model is to create a *confusion matrix* that shows a crosstab of correct and incorrect predictions for each class."
+    "So, is the model any good? The raw accuracy reported from the validation data would seem to indicate that it predicts pretty well; but it's typically useful to dig a little deeper and compare the predictions for each possible class. A common way to visualize the performance of a classification model is to create a *confusion matrix* that shows a crosstab of correct and incorrect predictions for each class."
    ]
   },
   {
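The markdown cell corrected by this patch describes a confusion matrix as a crosstab of correct and incorrect predictions per class. A minimal sketch of that crosstab, using made-up stand-in labels rather than anything computed in the notebooks themselves:

```python
import numpy as np

# Made-up stand-ins for a model's actual and predicted validation labels;
# the notebooks this patch touches derive these from the trained model.
y_true = np.array([0, 1, 2, 2, 1, 0, 2])
y_pred = np.array([0, 2, 2, 2, 1, 0, 1])

n_classes = 3
# Crosstab of correct and incorrect predictions:
# rows = actual class, columns = predicted class.
# Correct predictions land on the diagonal.
cm = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

print(cm)
```

Off-diagonal cells show which classes the model confuses with which, which is exactly the detail that raw accuracy hides.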