The docker file was added for NNCF TF (#837)
- The `cpu` and `gpu` folders for NNCF PT were moved from `nncf/docker` to `nncf/docker/torch`
- The docker files were updated for NNCF PT
- Proxy env variables were removed
- The docker file was added for NNCF TF
- README.md was updated
- Refactoring
- The additional settings `--shm-size=1g` and `--ulimit memlock=-1` were added (see [link](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#sharing-data))

Related to 59465
1 parent 84df611 · commit 3fc298a · 4 changed files with 84 additions and 35 deletions.
## Step 1. Install docker

Review the instructions for installing Docker [here](https://docs.docker.com/engine/install/ubuntu/) and configure Docker
to use a proxy server as described [here](https://docs.docker.com/network/proxy/#configure-the-docker-client).
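If you are behind a proxy, the Docker client reads its proxy settings from `~/.docker/config.json`, as described in the linked documentation. A minimal sketch, with placeholder proxy addresses that you would replace with your own:

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3129",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```

These settings are passed into containers and builds automatically, which is why the proxy `--build-arg` flags from the old instructions are no longer needed.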

## Step 2. Install nvidia-docker

*Skip this step if you don't have a GPU.*

Review the instructions for installing nvidia-docker [here](https://github.com/NVIDIA/nvidia-docker).

## Step 3. Build image

In the project folder, run in a terminal:
```
sudo docker image build --network=host <PATH_TO_DIR_WITH_DOCKERFILE>
```

Use `--network` to duplicate the network settings of your localhost into the build context.
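As a convenience not covered by the instructions above, the image can also be tagged at build time with Docker's `-t` flag, so that the next step can reference a readable name instead of an image ID; `nncf/tf` here is a placeholder tag, not a name used by the project:

```
sudo docker image build --network=host -t nncf/tf <PATH_TO_DIR_WITH_DOCKERFILE>
sudo docker image ls nncf/tf
```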

## Step 4. Run container

Run in a terminal:
```
sudo docker run \
    -it \
    --name=<CONTAINER_NAME> \
    --runtime=nvidia \
    --network=host \
    --shm-size=1g \
    --ulimit memlock=-1 \
    --mount type=bind,source=<PATH_TO_DATASETS_ON_HOST>,target=<PATH_TO_DATASETS_IN_CONTAINER> \
    --mount type=bind,source=<PATH_TO_NNCF_HOME_ON_HOST>,target=/home/nncf \
    <IMAGE_ID>
```

You should not use `--runtime=nvidia` if you want to use `--cpu-only` mode.

Use `--shm-size` to increase the size of the shared memory directory (`/dev/shm`) inside the container.
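To verify that `--shm-size` took effect, the shared memory mount can be inspected from inside the running container (assuming the standard `df` utility is available in the image, as it is in the Ubuntu base used here):

```shell
df -h /dev/shm
```

The `Size` column should report the value passed to `--shm-size`, e.g. `1.0G`.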

Now you have a working container, and you can run the examples.

The Dockerfile added for NNCF TF:
```
FROM nvidia/cuda:11.0.3-cudnn8-runtime-ubuntu20.04

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        apt-transport-https \
        git \
    && rm -rf /var/lib/apt/lists/*

RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        build-essential \
        libgl1-mesa-glx \
        libglib2.0-dev \
        wget \
        curl \
        zip \
        unzip \
        nano \
        openssh-server \
        openssh-client \
        sudo \
        python3 \
        python3-dev \
        python3-pip \
    && cd /usr/bin \
    && ln -s python3.8 python \
    && rm -rf /var/lib/apt/lists/*

RUN pip3 install --upgrade pip \
    && pip3 install setuptools

ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64

# nvidia-container-runtime
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility

ENTRYPOINT cd /home/nncf \
    && python setup.py install --tf \
    && pip3 install -r examples/tensorflow/requirements.txt \
    && bash
```