
Commit

Added Dockerfile and Docker running documentation
PonteIneptique committed Jun 29, 2018
1 parent 01f43a4 commit 6f0723e
Showing 2 changed files with 61 additions and 0 deletions.
25 changes: 25 additions & 0 deletions Dockerfile
@@ -0,0 +1,25 @@
FROM tensorflow/tensorflow:1.7.0-gpu

# Print the Python version shipped with the base image
RUN python --version

# Additional requirements from Tensorflow
RUN apt-get update && apt-get install -y python3 python3-dev

# Install pip for Python 3
RUN curl -O https://bootstrap.pypa.io/get-pip.py && \
python3 get-pip.py && \
rm get-pip.py

# Install tensorflow 1.7
RUN pip3 install tensorflow-gpu==1.7.0 ipython notebook

# Copy and install TF-CRNN
ADD . /script
WORKDIR /script
RUN python3 setup.py install

# Add an additional sources directory
# You should normalize the filepath in your data
VOLUME /sources
VOLUME /config
36 changes: 36 additions & 0 deletions README.md
@@ -62,3 +62,39 @@ All dependencies should be installed if you run `python setup.py install` or use
* `tqdm` for progress bars
* `json`

## Running with docker

The `Dockerfile` in the root directory allows you to run the whole program as an Nvidia Docker container with GPU-enabled TensorFlow. This can spare you from dealing with external dependencies such as CUDA and the like.

You can follow the installation instructions here (a sketch of the typical commands is given right after this list):
- docker-ce : [Ubuntu](https://docs.docker.com/install/linux/docker-ce/ubuntu/#os-requirements)
- nvidia-docker : [Ubuntu](https://nvidia.github.io/nvidia-docker/)
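
For reference, here is a minimal sketch of the typical Ubuntu setup, assuming the steps from the linked documentation at the time of writing; always check the official pages above for the exact commands matching your distribution and version:

```bash
# docker-ce: add Docker's repository and install the engine
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install -y docker-ce

# nvidia-docker: add Nvidia's repository, install nvidia-docker2 and reload the daemon
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd
```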

Once these are installed, build the container image with:

```bash
nvidia-docker build . --tag tf-crnn
```
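
To check that the GPU is actually visible from inside the image you just built, you can run `nvidia-smi` in a throw-away container (this assumes your Nvidia driver and nvidia-docker are correctly installed):

```bash
# Should print the driver version and the GPUs visible to the container
nvidia-docker run --rm tf-crnn:latest nvidia-smi
```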

The image is now tagged `tf-crnn`. You can run it with `nvidia-docker run -it tf-crnn:latest bash`, which opens a bash shell inside the container. However, we recommend using

```bash
nvidia-docker run -it -v /absolute/path/to/here/config:/config -v $INPUT_DATA:/sources tf-crnn:latest bash
```
where `$INPUT_DATA` should be replaced by the directory containing your training and testing data; it will be mounted at `/sources` inside the container. By default we propose mounting the local `./config` directory to `/config` in the container. Host paths need to be absolute. A complete example invocation is sketched at the end of this section. We also recommend changing

```javascript
//...
"output_model_dir" : "/.output/"
```

to

```javascript
//...
"output_model_dir" : "/config/output"
```

- **Do not forget** to update the paths in your training and testing files, as well as the paths to their images, so that they point to `/sources/.../file{.png,.jpg}`
- **Note:** if you are uncomfortable with bash, you can always replace `bash` with `ipython3 notebook`
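
Putting it all together, a full invocation might look like the sketch below; the host paths `/home/user/tf-crnn/config` and `/home/user/data` are only placeholders, replace them with your own absolute paths:

```bash
# Mount the configuration and the data, then open a shell in the container
nvidia-docker run -it \
  -v /home/user/tf-crnn/config:/config \
  -v /home/user/data:/sources \
  tf-crnn:latest bash
```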
