Commit e611001

akshayka authored and MarkDaoust committed
Delete equivalence between step and learning rate; the two are different concepts.
1 parent e69cb02 commit e611001
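
For context, the distinction this commit enforces can be stated with the standard gradient-descent update rule (a sketch, not quoted from the notebook):

$$\theta \leftarrow \theta - \eta \, \nabla_\theta L(\theta)$$

One application of this update is a *step*; the scalar $\eta$ is the *learning rate* that scales the step size. Treating the two as interchangeable is the error the commit removes.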

File tree

1 file changed (+1 −1 lines changed)


samples/core/get_started/eager.ipynb (+1 −1)
@@ -534,7 +534,7 @@
 "source": [
 "### Create an optimizer\n",
 "\n",
-"An *[optimizer](https://developers.google.com/machine-learning/crash-course/glossary#optimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradients for each *step* (or *[learning rate](https://developers.google.com/machine-learning/crash-course/glossary#learning_rate)*), we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.\n",
+"An *[optimizer](https://developers.google.com/machine-learning/crash-course/glossary#optimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each example, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.\n",
 "\n",
 "<table>\n",
 " <tr><td>\n",
