114 |     | 114 | "source": [
115 |     | 115 | "### Install the latest version of TensorFlow\n",
116 |     | 116 | "\n",
117 |     |     | - "This tutorial uses eager execution, which is available in [TensorFlow 1.7](https://www.tensorflow.org/install/). (You may need to restart the runtime after upgrading.)"
    | 117 |     | + "This tutorial uses eager execution, which is available in [TensorFlow 1.8](https://www.tensorflow.org/install/). (You may need to restart the runtime after upgrading.)"
118 |     | 118 | ]
119 |     | 119 | },
120 |     | 120 | {

374 |     | 374 | "train_dataset = train_dataset.batch(32)\n",
375 |     | 375 | "\n",
376 |     | 376 | "# View a single example entry from a batch\n",
377 |     |     | - "features, label = tfe.Iterator(train_dataset).next()\n",
    | 377 |     | + "features, label = iter(train_dataset).next()\n",
378 |     | 378 | "print(\"example features:\", features[0])\n",
379 |     | 379 | "print(\"example label:\", label[0])"
380 |     | 380 | ],
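The hunk above drops the `tfe.Iterator` wrapper: with eager execution enabled, a `tf.data.Dataset` is directly iterable in Python. A minimal sketch of that pattern, using small made-up tensors in place of the notebook's Iris data (the `features`/`labels` values here are illustrative, not from the tutorial):

```python
import tensorflow as tf

# Hypothetical stand-in data: four (features, label) examples.
features = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])
labels = tf.constant([0, 1, 0, 1])

# Batch exactly as the notebook does (batch size 32 there, 2 here).
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

# With eager execution, the dataset is directly iterable --
# no tfe.Iterator is needed, plain iter()/for works.
x, y = next(iter(dataset))
print("example features:", x[0])
print("example label:", y[0])
```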
508 |     | 508 | "\n",
509 |     | 509 | "\n",
510 |     | 510 | "def grad(model, inputs, targets):\n",
511 |     |     | - " with tfe.GradientTape() as tape:\n",
    | 511 |     | + " with tf.GradientTape() as tape:\n",
512 |     | 512 | " loss_value = loss(model, inputs, targets)\n",
513 |     | 513 | " return tape.gradient(loss_value, model.variables)"
514 |     | 514 | ],
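This hunk renames `tfe.GradientTape` to `tf.GradientTape`; the recording pattern itself is unchanged. A self-contained sketch of that pattern, with a trivial one-variable loss standing in for the notebook's model (the function and values here are illustrative assumptions, not the tutorial's code):

```python
import tensorflow as tf

# Hypothetical loss: f(w) = (w - 3)^2, so df/dw = 2 * (w - 3).
w = tf.Variable(5.0)

with tf.GradientTape() as tape:
    # Operations on watched variables inside the `with` block
    # are recorded on the tape.
    loss_value = (w - 3.0) ** 2

# tape.gradient replays the recorded ops to compute d(loss)/dw.
grad_w = tape.gradient(loss_value, w)
print(grad_w.numpy())  # 2 * (5 - 3) = 4.0
```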
522 |     | 522 | },
523 |     | 523 | "cell_type": "markdown",
524 |     | 524 | "source": [
525 |     |     | - "The `grad` function uses the `loss` function and the [tfe.GradientTape](https://www.tensorflow.org/api_docs/python/tf/contrib/eager/GradientTape) to record operations that compute the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)."
    | 525 |     | + "The `grad` function uses the `loss` function and the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) to record operations that compute the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)."
526 |     | 526 | ]
527 |     | 527 | },
528 |     | 528 | {

614 |     | 614 | " epoch_accuracy = tfe.metrics.Accuracy()\n",
615 |     | 615 | "\n",
616 |     | 616 | " # Training loop - using batches of 32\n",
617 |     |     | - " for x, y in tfe.Iterator(train_dataset):\n",
    | 617 |     | + " for x, y in train_dataset:\n",
618 |     | 618 | " # Optimize the model\n",
619 |     | 619 | " grads = grad(model, x, y)\n",
620 |     | 620 | " optimizer.apply_gradients(zip(grads, model.variables),\n",
800 |     | 800 | "source": [
801 |     | 801 | "test_accuracy = tfe.metrics.Accuracy()\n",
802 |     | 802 | "\n",
803 |     |     | - "for (x, y) in tfe.Iterator(test_dataset):\n",
    | 803 |     | + "for (x, y) in test_dataset:\n",
804 |     | 804 | " prediction = tf.argmax(model(x), axis=1, output_type=tf.int32)\n",
805 |     | 805 | " test_accuracy(prediction, y)\n",
806 |     | 806 | "\n",