Commit c327f2e

[samples/core/get_started/eager]: Update with API simplifications in 1.8
1 parent 52c3312 commit c327f2e

File tree

1 file changed: +6 -6 lines changed


samples/core/get_started/eager.ipynb

+6 -6
@@ -114,7 +114,7 @@
 "source": [
 "### Install the latest version of TensorFlow\n",
 "\n",
-"This tutorial uses eager execution, which is available in [TensorFlow 1.7](https://www.tensorflow.org/install/). (You may need to restart the runtime after upgrading.)"
+"This tutorial uses eager execution, which is available in [TensorFlow 1.8](https://www.tensorflow.org/install/). (You may need to restart the runtime after upgrading.)"
 ]
 },
 {
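For reference, a minimal sketch of turning on the eager mode the updated text refers to (assuming TensorFlow 1.8 is already installed; not part of this commit):

```python
import tensorflow as tf

# Eager execution must be enabled once, at program startup, before any other
# TensorFlow operations run, which is why the notebook suggests restarting the
# runtime after upgrading.
tf.enable_eager_execution()

print(tf.executing_eagerly())  # True once eager mode is active
```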
@@ -374,7 +374,7 @@
 "train_dataset = train_dataset.batch(32)\n",
 "\n",
 "# View a single example entry from a batch\n",
-"features, label = tfe.Iterator(train_dataset).next()\n",
+"features, label = iter(train_dataset).next()\n",
 "print(\"example features:\", features[0])\n",
 "print(\"example label:\", label[0])"
 ],
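The wrapper is no longer needed because, with eager execution enabled in 1.8, a `tf.data.Dataset` supports the normal Python iterator protocol. A small sketch (the toy tensors below are hypothetical stand-ins for the notebook's parsed CSV data):

```python
import tensorflow as tf

tf.enable_eager_execution()

# Hypothetical stand-in for the notebook's Iris training data: 2 examples,
# 4 features each, with integer class labels.
train_dataset = tf.data.Dataset.from_tensor_slices(
    ([[5.1, 3.5, 1.4, 0.2], [6.2, 2.9, 4.3, 1.3]], [0, 1])).batch(32)

# iter() and for-loops now work directly on the dataset, replacing tfe.Iterator.
features, label = iter(train_dataset).next()
print("example features:", features[0])
print("example label:", label[0])
```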
@@ -508,7 +508,7 @@
 "\n",
 "\n",
 "def grad(model, inputs, targets):\n",
-"  with tfe.GradientTape() as tape:\n",
+"  with tf.GradientTape() as tape:\n",
 "    loss_value = loss(model, inputs, targets)\n",
 "  return tape.gradient(loss_value, model.variables)"
 ],
@@ -522,7 +522,7 @@
 },
 "cell_type": "markdown",
 "source": [
-"The `grad` function uses the `loss` function and the [tfe.GradientTape](https://www.tensorflow.org/api_docs/python/tf/contrib/eager/GradientTape) to record operations that compute the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)."
+"The `grad` function uses the `loss` function and the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) to record operations that compute the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)."
 ]
 },
 {
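As a standalone illustration of that paragraph, a sketch with a single scalar variable standing in for the notebook's Keras model (the names and values here are made up, not the notebook's):

```python
import tensorflow as tf
import tensorflow.contrib.eager as tfe

tf.enable_eager_execution()

w = tfe.Variable(3.0)  # toy "model" parameter

def loss(w, x, target):
  return tf.square(w * x - target)

def grad(w, x, target):
  # The tape records the operations executed inside the block so that the
  # gradient of loss_value with respect to w can be computed afterwards.
  with tf.GradientTape() as tape:
    loss_value = loss(w, x, target)
  return tape.gradient(loss_value, [w])

# d/dw (w*x - target)^2 = 2*(w*x - target)*x = 2*(3*2 - 4)*2 = 8
print(grad(w, tf.constant(2.0), tf.constant(4.0)))
```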
@@ -614,7 +614,7 @@
 "  epoch_accuracy = tfe.metrics.Accuracy()\n",
 "\n",
 "  # Training loop - using batches of 32\n",
-"  for x, y in tfe.Iterator(train_dataset):\n",
+"  for x, y in train_dataset:\n",
 "    # Optimize the model\n",
 "    grads = grad(model, x, y)\n",
 "    optimizer.apply_gradients(zip(grads, model.variables),\n",
@@ -800,7 +800,7 @@
 "source": [
 "test_accuracy = tfe.metrics.Accuracy()\n",
 "\n",
-"for (x, y) in tfe.Iterator(test_dataset):\n",
+"for (x, y) in test_dataset:\n",
 "  prediction = tf.argmax(model(x), axis=1, output_type=tf.int32)\n",
 "  test_accuracy(prediction, y)\n",
 "\n",
