TensorFlow: Upstream latest commits to git.
Changes:
- Updates to Documentation, README.md, installation
  instructions, anchor links, etc.

- Adds Readme for embedding directory.

Base CL: 107308461
Vijay Vasudevan committed Nov 7, 2015
1 parent 6ec6362 commit ca4cee0
Showing 21 changed files with 242 additions and 94 deletions.
17 changes: 15 additions & 2 deletions README.md
@@ -11,9 +11,21 @@ organization for the purposes of conducting machine learning and deep neural
networks research. The system is general enough to be applicable in a wide
variety of other domains, as well.


**Note: Currently we do not accept pull requests on github -- see
[CONTRIBUTING.md](CONTRIBUTING.md) for information on how to contribute code
changes to TensorFlow through
[tensorflow.googlesource.com](https://tensorflow.googlesource.com/tensorflow)**

**We use [github issues](https://github.com/tensorflow/tensorflow/issues) for
tracking requests and bugs, but please see
[Community](resources/index.md#community) for general questions and
discussion.**

# Download and Setup

For detailed installation instructions, see
To install TensorFlow using a binary package, see the instructions below. For
more detailed installation instructions, including installing from source, see
[here](tensorflow/g3doc/get_started/os_setup.md).

## Binary Installation
@@ -32,7 +44,8 @@ Install TensorFlow:
# For CPU-only version
$ sudo pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl

# For GPU-enabled version
# For GPU-enabled version. See detailed install instructions
# for GPU configuration information.
$ sudo pip install https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
```
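
As a quick sanity check that the install worked — a minimal sketch, not part of the official instructions — you can multiply two constants in a session:

```python
import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
with tf.Session() as sess:
  print sess.run(a * b)  # should print 6.0
```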

73 changes: 73 additions & 0 deletions tensorflow/g3doc/api_docs/python/client.md
@@ -5,6 +5,7 @@
## Contents
* [Session management](#AUTOGENERATED-session-management)
* [class tf.Session](#Session)
* [class tf.InteractiveSession](#InteractiveSession)
* [tf.get_default_session()](#get_default_session)
* [Error classes](#AUTOGENERATED-error-classes)
* [class tf.OpError](#OpError)
@@ -257,6 +258,78 @@ thread's function.



- - -

### class tf.InteractiveSession <div class="md-anchor" id="InteractiveSession">{#InteractiveSession}</div>

A TensorFlow `Session` for use in interactive contexts, such as a shell.

The only difference with a regular `Session` is that an `InteractiveSession`
installs itself as the default session on construction.
The methods [`Tensor.eval()`](framework.md#Tensor.eval) and
[`Operation.run()`](framework.md#Operation.run) will use that session
to run ops.

This is convenient in interactive shells and [IPython
notebooks](http://ipython.org), as it avoids having to pass an explicit
`Session` object to run ops.

For example:

```python
sess = tf.InteractiveSession()
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# We can just use 'c.eval()' without passing 'sess'
print c.eval()
sess.close()
```

Note that a regular session installs itself as the default session when it
is created in a `with` statement. The common usage in non-interactive
programs is to follow that pattern:

```python
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
with tf.Session():
  # We can also use 'c.eval()' here.
  print c.eval()
```

- - -

#### tf.InteractiveSession.__init__(target='', graph=None) {#InteractiveSession.__init__}

Creates a new interactive TensorFlow session.

If no `graph` argument is specified when constructing the session,
the default graph will be launched in the session. If you are
using more than one graph (created with `tf.Graph()`) in the same
process, you will have to use different sessions for each graph,
but each graph can be used in multiple sessions. In this case, it
is often clearer to pass the graph to be launched explicitly to
the session constructor.
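
For example, a minimal sketch of launching an explicitly constructed graph in an interactive session:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  c = tf.constant(42.0)

# Launch only `g` in this session.
sess = tf.InteractiveSession(graph=g)
print c.eval()  # 42.0
sess.close()
```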

##### Args:


* <b>target</b>: (Optional.) The execution engine to connect to.
Defaults to using an in-process engine. At present, no value
other than the empty string is supported.
* <b>graph</b>: (Optional.) The `Graph` to be launched (described above).


- - -

#### tf.InteractiveSession.close() {#InteractiveSession.close}

Closes an `InteractiveSession`.




- - -

1 change: 1 addition & 0 deletions tensorflow/g3doc/api_docs/python/index.md
@@ -299,6 +299,7 @@
* [`DeadlineExceededError`](client.md#DeadlineExceededError)
* [`FailedPreconditionError`](client.md#FailedPreconditionError)
* [`get_default_session`](client.md#get_default_session)
* [`InteractiveSession`](client.md#InteractiveSession)
* [`InternalError`](client.md#InternalError)
* [`InvalidArgumentError`](client.md#InvalidArgumentError)
* [`NotFoundError`](client.md#NotFoundError)
4 changes: 2 additions & 2 deletions tensorflow/g3doc/get_started/index.md
@@ -54,10 +54,10 @@ suggest skimming blue, then red.

<div style="width:100%; margin:auto; margin-bottom:10px; margin-top:20px; display: flex; flex-direction: row">
<a href="../tutorials/mnist/beginners/index.md">
<img style="flex-grow:1; flex-shrink:1;border: 1px solid black;" src="./blue_pill.png">
<img style="flex-grow:1; flex-shrink:1; border: 1px solid black;" src="blue_pill.png">
</a>
<a href="../tutorials/mnist/pros/index.md">
<img style="flex-grow:1; flex-shrink:1; border: 1px solid black;" src="./red_pill.png">
<img style="flex-grow:1; flex-shrink:1; border: 1px solid black;" src="red_pill.png">
</a>
</div>
<p style="font-size:10px;">Images licensed CC BY-SA 4.0; original by W. Carter</p>
34 changes: 4 additions & 30 deletions tensorflow/g3doc/get_started/os_setup.md
@@ -88,35 +88,8 @@ ImportError: libcudart.so.7.0: cannot open shared object file: No such file or directory
you most likely need to set your `LD_LIBRARY_PATH` to point to the location of
your CUDA libraries.
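
A small diagnostic sketch (assuming the CUDA 7.0 runtime named in the error above; this is not part of the official setup steps) to check whether the loader can find the CUDA runtime:

```python
import ctypes

try:
  ctypes.CDLL("libcudart.so.7.0")
  print "libcudart.so.7.0 found"
except OSError:
  print "libcudart.so.7.0 not found; add your CUDA lib directory to LD_LIBRARY_PATH"
```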

### Train the MNIST neural net model

```sh
$ python tensorflow/models/image/mnist/convolutional.py
Succesfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Succesfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Succesfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Succesfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
can't determine number of CPU cores: assuming 4
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op
parallelism threads: 3
can't determine number of CPU cores: assuming 4
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op
parallelism threads: 4
Initialized!
Epoch 0.00
Minibatch loss: 12.054, learning rate: 0.010000
Minibatch error: 90.6%
Validation error: 84.6%
...
...

```

## Installing from sources {#source}
<a name="source"></a>
## Installing from sources

### Clone the TensorFlow repository

@@ -260,7 +233,8 @@ Notes : You need to install
Follow installation instructions [here](http://docs.scipy.org/doc/numpy/user/install.html).


### Create the pip package and install {#create-pip}
<a name="create-pip"></a>
### Create the pip package and install

```sh
$ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
30 changes: 17 additions & 13 deletions tensorflow/g3doc/how_tos/adding_an_op/index.md
@@ -1,4 +1,4 @@
# Adding a New Op to TensorFlow
# Adding a New Op

PREREQUISITES:

@@ -27,27 +27,28 @@ to:

<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
* [Define the Op's interface](#define_interface)
* [Define the Op's interface](#AUTOGENERATED-define-the-op-s-interface)
* [Implement the kernel for the Op](#AUTOGENERATED-implement-the-kernel-for-the-op)
* [Generate the client wrapper](#AUTOGENERATED-generate-the-client-wrapper)
* [The Python Op wrapper](#AUTOGENERATED-the-python-op-wrapper)
* [The C++ Op wrapper](#AUTOGENERATED-the-c---op-wrapper)
* [Verify it works](#AUTOGENERATED-verify-it-works)
* [Validation](#validation)
* [Validation](#AUTOGENERATED-validation)
* [Op registration](#AUTOGENERATED-op-registration)
* [Attrs](#AUTOGENERATED-attrs)
* [Attr types](#AUTOGENERATED-attr-types)
* [Polymorphism](#polymorphism)
* [Polymorphism](#AUTOGENERATED-polymorphism)
* [Inputs and Outputs](#AUTOGENERATED-inputs-and-outputs)
* [Backwards compatibility](#AUTOGENERATED-backwards-compatibility)
* [GPU Support](#mult-archs)
* [GPU Support](#AUTOGENERATED-gpu-support)
* [Implement the gradient in Python](#AUTOGENERATED-implement-the-gradient-in-python)
* [Implement a shape function in Python](#AUTOGENERATED-implement-a-shape-function-in-python)


<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->

## Define the Op's interface <div class="md-anchor" id="define_interface">{#define_interface}</div>
<a name="define_interface"></a>
## Define the Op's interface <div class="md-anchor" id="AUTOGENERATED-define-the-op-s-interface">{#AUTOGENERATED-define-the-op-s-interface}</div>

You define the interface of an Op by registering it with the TensorFlow system.
In the registration, you specify the name of your Op, its inputs (types and
@@ -210,7 +211,7 @@ Then run your test:
$ bazel test tensorflow/python:zero_out_op_test
```

## Validation <div class="md-anchor" id="validation">{#validation}</div>
## Validation <div class="md-anchor" id="AUTOGENERATED-validation">{#AUTOGENERATED-validation}</div>

The example above assumed that the Op applied to a tensor of any shape. What
if it only applied to vectors? That means adding a check to the above OpKernel
@@ -445,8 +446,8 @@ REGISTER_OP("AttrDefaultExampleForAllTypes")
Note in particular that the values of type `type` use [the `DT_*` names
for the types](../../resources/dims_types.md#data-types).
### Polymorphism <div class="md-anchor" id="polymorphism">{#polymorphism}</div>
#### Type Polymorphism {#type-polymorphism}
### Polymorphism <div class="md-anchor" id="AUTOGENERATED-polymorphism">{#AUTOGENERATED-polymorphism}</div>
#### Type Polymorphism
For ops that can take different types as input or produce different output
types, you can specify [an attr](#attrs) in
@@ -466,7 +467,8 @@ REGISTER\_OP("ZeroOut")
Your Op registration now specifies that the input's type must be `float`, or
`int32`, and that its output will be the same type, since both have type `T`.
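
As an illustration only — assuming the generated Python wrapper for this op is available as a hypothetical `zero_out` function — the same op could then be fed either type:

```python
import tensorflow as tf
# `zero_out` is the hypothetical generated wrapper for the polymorphic
# ZeroOut op above; import it from wherever your build places it.

with tf.Session():
  print zero_out([1, 2, 3]).eval()        # T inferred as int32
  print zero_out([1.0, 2.0, 3.0]).eval()  # T inferred as float32
```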
> A note on naming:{#naming} Inputs, outputs, and attrs generally should be
<a name="naming"></a>
> A note on naming: Inputs, outputs, and attrs generally should be
> given snake_case names. The one exception is attrs that are used as the type
> of an input or in the type of an input. Those attrs can be inferred when the
> op is added to the graph and so don't appear in the op's function. For
Expand All @@ -482,7 +484,7 @@ Your Op registration now specifies that the input's type must be `float`, or
> name: A name for the operation (optional).
>
> Returns:
> A `Tensor`. Has the same type as `x`.
> A `Tensor`. Has the same type as `to_zero`.
> """
> ```
>
@@ -674,7 +676,8 @@ TF_CALL_REAL_NUMBER_TYPES(REGISTER_KERNEL);
#undef REGISTER_KERNEL
```
#### List Inputs and Outputs {#list-input-output}
<a name="list-input-output"></a>
#### List Inputs and Outputs
In addition to being able to accept or produce different types, ops can consume
or produce a variable number of tensors.
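
From Python, a list input simply appears as an argument that takes a Python list of tensors. As a sketch, the stock `tf.concat` op takes such a list:

```python
import tensorflow as tf

a = tf.constant([[1, 2]])
b = tf.constant([[3, 4]])
# `values` is a list input: any number of tensors of the same type.
c = tf.concat(0, [a, b])

with tf.Session():
  print c.eval()  # [[1 2]
                  #  [3 4]]
```
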
@@ -894,7 +897,8 @@ There are several ways to preserve backwards-compatibility.
If you cannot make your change to an operation backwards compatible, then
create a new operation with a new name with the new semantics.
## GPU Support <div class="md-anchor" id="mult-archs">{#mult-archs}</div>
<a name="mult-archs"></a>
## GPU Support <div class="md-anchor" id="AUTOGENERATED-gpu-support">{#AUTOGENERATED-gpu-support}</div>
You can implement different OpKernels and register one for CPU and another for
GPU, just like you can [register kernels for different types](#polymorphism).
2 changes: 1 addition & 1 deletion tensorflow/g3doc/how_tos/graph_viz/index.md
@@ -1,4 +1,4 @@
# TensorBoard: Visualizing Your Graph
# TensorBoard: Graph Visualization

TensorFlow computation graphs are powerful but complicated. The graph visualization can help you understand and debug them. Here's an example of the visualization at work.

8 changes: 4 additions & 4 deletions tensorflow/g3doc/how_tos/index.md
@@ -18,7 +18,7 @@ example.
[View Tutorial](../tutorials/mnist/tf/index.md)


## TensorBoard: Visualizing Your Training
## TensorBoard: Visualizing Learning

TensorBoard is a useful tool for visualizing the training and evaluation of
your model(s). This tutorial describes how to build and run TensorBoard as well
@@ -28,7 +28,7 @@ TensorBoard uses for display.
[View Tutorial](summaries_and_tensorboard/index.md)


## TensorBoard: Visualizing Your Graph
## TensorBoard: Graph Visualization

This tutorial describes how to use the graph visualizer in TensorBoard to help
you understand the dataflow graph and debug it.
@@ -60,15 +60,15 @@ compose in your graph, but here are the details of how to add your own custom Op.
[View Tutorial](adding_an_op/index.md)


## New Data Formats
## Custom Data Readers

If you have a sizable custom data set, you may want to consider extending
TensorFlow to read your data directly in its native format. Here's how.

[View Tutorial](new_data_formats/index.md)


## Using One or More GPUs
## Using GPUs

This tutorial describes how to construct and execute models on GPU(s).

2 changes: 1 addition & 1 deletion tensorflow/g3doc/how_tos/new_data_formats/index.md
@@ -1,4 +1,4 @@
# Extending TF: Supporting new data formats
# Custom Data Readers

PREREQUISITES:

9 changes: 5 additions & 4 deletions tensorflow/g3doc/how_tos/reading_data/index.md
@@ -10,13 +10,13 @@ There are three main methods of getting data into a TensorFlow program:

<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
* [Feeding](#Feeding)
* [Feeding](#AUTOGENERATED-feeding)
* [Reading from files](#AUTOGENERATED-reading-from-files)
* [Filenames, shuffling, and epoch limits](#AUTOGENERATED-filenames--shuffling--and-epoch-limits)
* [File formats](#AUTOGENERATED-file-formats)
* [Preprocessing](#AUTOGENERATED-preprocessing)
* [Batching](#AUTOGENERATED-batching)
* [Creating threads to prefetch using `QueueRunner` objects](#QueueRunner)
* [Creating threads to prefetch using `QueueRunner` objects](#AUTOGENERATED-creating-threads-to-prefetch-using--queuerunner--objects)
* [Filtering records or producing multiple examples per record](#AUTOGENERATED-filtering-records-or-producing-multiple-examples-per-record)
* [Sparse input data](#AUTOGENERATED-sparse-input-data)
* [Preloaded data](#AUTOGENERATED-preloaded-data)
@@ -25,7 +25,7 @@ There are three main methods of getting data into a TensorFlow program:

<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->

## Feeding <div class="md-anchor" id="Feeding">{#Feeding}</div>
## Feeding <div class="md-anchor" id="AUTOGENERATED-feeding">{#AUTOGENERATED-feeding}</div>

TensorFlow's feed mechanism lets you inject data into any Tensor in a
computation graph. A python computation can thus feed data directly into the
@@ -267,7 +267,8 @@ summary to the graph that indicates how full the example queue is. If you have
enough reading threads, that summary will stay above zero. You can
[view your summaries as training progresses using TensorBoard](../summaries_and_tensorboard/index.md).

### Creating threads to prefetch using `QueueRunner` objects <div class="md-anchor" id="QueueRunner">{#QueueRunner}</div>
<a name="QueueRunner"></a>
### Creating threads to prefetch using `QueueRunner` objects <div class="md-anchor" id="AUTOGENERATED-creating-threads-to-prefetch-using--queuerunner--objects">{#AUTOGENERATED-creating-threads-to-prefetch-using--queuerunner--objects}</div>

The short version: many of the `tf.train` functions listed above add
[`QueueRunner`](../../api_docs/python/train.md#QueueRunner) objects to your
tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md
@@ -1,4 +1,4 @@
# TensorBoard: Visualizing Your Training
# TensorBoard: Visualizing Learning

The computations you'll use TensorBoard for - like training a massive
deep neural network - can be complex and confusing. To make it easier to
4 changes: 2 additions & 2 deletions tensorflow/g3doc/resources/index.md
@@ -11,8 +11,8 @@ implementation can be found in our white paper:
### Citation

If you use TensorFlow in your research and would like to cite the TensorFlow
system, we suggest you cite the paper above. You can use this [BibTeX
entry](bib.md). As the project progresses, we
system, we suggest you cite the paper above.
You can use this [BibTeX entry](bib.md). As the project progresses, we
may update the suggested citation with new papers.

