Update for public URLs
ashahba committed Dec 11, 2020 · 1 parent 6e10662 · commit 830a670
Showing 4 changed files with 244 additions and 4 deletions.
quickstart/image_recognition/pytorch/resnet50/inference/bf16/README.md (new file, 120 additions, 0 deletions)
@@ -0,0 +1,120 @@
<!--- 0. Title -->
# ResNet50 BFloat16 inference

<!-- 10. Description -->
## Description

This document has instructions for running ResNet50 BFloat16 inference using
[intel-extension-for-pytorch](https://github.com/intel/intel-extension-for-pytorch).

<!--- 20. Download link -->
## Download link

[pytorch-resnet50-bf16-inference.tar.gz](https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_2_0/pytorch-resnet50-bf16-inference.tar.gz)

<!--- 30. Datasets -->
## Datasets

The [ImageNet](http://www.image-net.org/) validation dataset is used when
testing accuracy. The online and batch inference scripts use synthetic data,
so no dataset is needed for those runs.

Download and extract the ImageNet2012 dataset from http://www.image-net.org/,
then move the validation images into labeled subfolders using
[the valprep.sh shell script](https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh).
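
For example, a minimal prep sequence might look like the following sketch. It assumes the
validation tarball is named `ILSVRC2012_img_val.tar` and that `valprep.sh` is run from inside
the `val` directory:

```
# Sketch only: adjust paths and filenames to match your download.
mkdir -p imagenet/val
tar -xf ILSVRC2012_img_val.tar -C imagenet/val    # extract the validation JPEGs
cd imagenet/val
wget https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
bash valprep.sh    # moves each image into its class subfolder (n01440764, ...)
```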

The accuracy script looks for a folder named `val`, so after running the
data prep script, your folder structure should look something like this:

```
imagenet
└── val
├── ILSVRC2012_img_val.tar
├── n01440764
│   ├── ILSVRC2012_val_00000293.JPEG
│   ├── ILSVRC2012_val_00002138.JPEG
│   ├── ILSVRC2012_val_00003014.JPEG
│   ├── ILSVRC2012_val_00006697.JPEG
│   └── ...
└── ...
```
The folder that contains the `val` directory should be set as the
`DATASET_DIR` when running accuracy
(for example: `export DATASET_DIR=/home/<user>/imagenet`).

<!--- 40. Quick Start Scripts -->
## Quick Start Scripts

| Script name | Description |
|-------------|-------------|
| [`bf16_online_inference.sh`](bf16_online_inference.sh) | Runs online inference using synthetic data (batch_size=1). |
| [`bf16_batch_inference.sh`](bf16_batch_inference.sh) | Runs batch inference using synthetic data (batch_size=128). |
| [`bf16_accuracy.sh`](bf16_accuracy.sh) | Measures the model accuracy (batch_size=128). |

These quickstart scripts can be run in different environments:
* [Bare Metal](#bare-metal)
* [Docker](#docker)

<!--- 50. Bare Metal -->
## Bare Metal

To run on bare metal, the following prerequisites must be installed in your environment (an example install sequence is sketched after the list):
* Python 3
* [intel-extension-for-pytorch](https://github.com/intel/intel-extension-for-pytorch)
* [torchvision==v0.6.1](https://github.com/pytorch/vision/tree/v0.6.1)
* numactl
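
A minimal install sketch, assuming an apt-based system with `pip` available; the install
method for intel-extension-for-pytorch varies by release, so follow that project's own
instructions for the extension itself:

```
# Sketch only: the package manager commands are assumptions; adjust for your environment.
sudo apt-get install -y numactl
python3 -m pip install torchvision==0.6.1    # installs a compatible torch as a dependency
# Build and install intel-extension-for-pytorch per the instructions in its repository.
```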

Download and untar the model package and then run a [quickstart script](#quick-start-scripts).

```
# Optional: to run accuracy script
export DATASET_DIR=<path to the preprocessed imagenet dataset>
# Download and extract the model package
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_2_0/pytorch-resnet50-bf16-inference.tar.gz
tar -xzf pytorch-resnet50-bf16-inference.tar.gz
cd pytorch-resnet50-bf16-inference
bash quickstart/<script name>.sh
```
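
For example, to run batch inference from the extracted package directory:

```
bash quickstart/bf16_batch_inference.sh
```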

<!--- 60. Docker -->
## Docker

The model container includes the scripts and libraries needed to run
ResNet50 BFloat16 inference.

To run the accuracy test, you will need to
mount a volume and set the `DATASET_DIR` environment variable to point
to the prepped [ImageNet validation dataset](#datasets). The accuracy
script also downloads the pretrained model at runtime, so provide proxy
environment variables if necessary.

```
DATASET_DIR=<path to the dataset folder>
docker run \
--env DATASET_DIR=${DATASET_DIR} \
--env http_proxy=${http_proxy} \
--env https_proxy=${https_proxy} \
--volume ${DATASET_DIR}:${DATASET_DIR} \
--privileged --init -t \
intel/image-recognition:pytorch-1.5.0-rc3-imz-2.2.0-resnet50-bfloat16-inference \
/bin/bash quickstart/bf16_accuracy.sh
```

Synthetic data is used when running batch or online inference, so no
dataset mount is needed.

```
docker run \
--privileged --init -t \
intel/image-recognition:pytorch-1.5.0-rc3-imz-2.2.0-resnet50-bfloat16-inference \
/bin/bash quickstart/<script name>.sh
```

<!--- 80. License -->
## License

[LICENSE](/LICENSE)

quickstart/image_recognition/pytorch/resnet50/inference/fp32/README.md (new file, 120 additions, 0 deletions)
@@ -0,0 +1,120 @@
<!--- 0. Title -->
# ResNet50 FP32 inference

<!-- 10. Description -->
## Description

This document has instructions for running ResNet50 FP32 inference using
[intel-extension-for-pytorch](https://github.com/intel/intel-extension-for-pytorch).

<!--- 20. Download link -->
## Download link

[pytorch-resnet50-fp32-inference.tar.gz](https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_2_0/pytorch-resnet50-fp32-inference.tar.gz)

<!--- 30. Datasets -->
## Datasets

The [ImageNet](http://www.image-net.org/) validation dataset is used when
testing accuracy. The online and batch inference scripts use synthetic data,
so no dataset is needed for those runs.

Download and extract the ImageNet2012 dataset from http://www.image-net.org/,
then move the validation images into labeled subfolders using
[the valprep.sh shell script](https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh).
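
For example, a minimal prep sequence might look like the following sketch. It assumes the
validation tarball is named `ILSVRC2012_img_val.tar` and that `valprep.sh` is run from inside
the `val` directory:

```
# Sketch only: adjust paths and filenames to match your download.
mkdir -p imagenet/val
tar -xf ILSVRC2012_img_val.tar -C imagenet/val    # extract the validation JPEGs
cd imagenet/val
wget https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
bash valprep.sh    # moves each image into its class subfolder (n01440764, ...)
```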

The accuracy script looks for a folder named `val`, so after running the
data prep script, your folder structure should look something like this:

```
imagenet
└── val
├── ILSVRC2012_img_val.tar
├── n01440764
│   ├── ILSVRC2012_val_00000293.JPEG
│   ├── ILSVRC2012_val_00002138.JPEG
│   ├── ILSVRC2012_val_00003014.JPEG
│   ├── ILSVRC2012_val_00006697.JPEG
│   └── ...
└── ...
```
The folder that contains the `val` directory should be set as the
`DATASET_DIR` when running accuracy
(for example: `export DATASET_DIR=/home/<user>/imagenet`).

<!--- 40. Quick Start Scripts -->
## Quick Start Scripts

| Script name | Description |
|-------------|-------------|
| [`fp32_online_inference.sh`](fp32_online_inference.sh) | Runs online inference using synthetic data (batch_size=1). |
| [`fp32_batch_inference.sh`](fp32_batch_inference.sh) | Runs batch inference using synthetic data (batch_size=128). |
| [`fp32_accuracy.sh`](fp32_accuracy.sh) | Measures the model accuracy (batch_size=128). |

These quickstart scripts can be run in different environments:
* [Bare Metal](#bare-metal)
* [Docker](#docker)

<!--- 50. Bare Metal -->
## Bare Metal

To run on bare metal, the following prerequisites must be installed in your environment (an example install sequence is sketched after the list):
* Python 3
* [intel-extension-for-pytorch](https://github.com/intel/intel-extension-for-pytorch)
* [torchvision==v0.6.1](https://github.com/pytorch/vision/tree/v0.6.1)
* numactl
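
A minimal install sketch, assuming an apt-based system with `pip` available; the install
method for intel-extension-for-pytorch varies by release, so follow that project's own
instructions for the extension itself:

```
# Sketch only: the package manager commands are assumptions; adjust for your environment.
sudo apt-get install -y numactl
python3 -m pip install torchvision==0.6.1    # installs a compatible torch as a dependency
# Build and install intel-extension-for-pytorch per the instructions in its repository.
```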

Download and untar the model package and then run a [quickstart script](#quick-start-scripts).

```
# Optional: to run accuracy script
export DATASET_DIR=<path to the preprocessed imagenet dataset>
# Download and extract the model package
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_2_0/pytorch-resnet50-fp32-inference.tar.gz
tar -xzf pytorch-resnet50-fp32-inference.tar.gz
cd pytorch-resnet50-fp32-inference
bash quickstart/<script name>.sh
```
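
For example, to run batch inference from the extracted package directory:

```
bash quickstart/fp32_batch_inference.sh
```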

<!--- 60. Docker -->
## Docker

The model container includes the scripts and libraries needed to run
ResNet50 FP32 inference.

To run the accuracy test, you will need to
mount a volume and set the `DATASET_DIR` environment variable to point
to the prepped [ImageNet validation dataset](#datasets). The accuracy
script also downloads the pretrained model at runtime, so provide proxy
environment variables if necessary.

```
DATASET_DIR=<path to the dataset folder>
docker run \
--env DATASET_DIR=${DATASET_DIR} \
--env http_proxy=${http_proxy} \
--env https_proxy=${https_proxy} \
--volume ${DATASET_DIR}:${DATASET_DIR} \
--privileged --init -t \
intel/image-recognition:pytorch-1.5.0-rc3-imz-2.2.0-resnet50-fp32-inference \
/bin/bash quickstart/fp32_accuracy.sh
```

Synthetic data is used when running batch or online inference, so no
dataset mount is needed.

```
docker run \
--privileged --init -t \
intel/image-recognition:pytorch-1.5.0-rc3-imz-2.2.0-resnet50-fp32-inference \
/bin/bash quickstart/<script name>.sh
```

<!--- 80. License -->
## License

[LICENSE](/LICENSE)

```diff
@@ -29,12 +29,12 @@ slice_sets:
 uri: models/quickstart/image_recognition/pytorch/resnet50/inference/bf16/.docs/license.md
 name: README.md
 text_replace:
-<docker image>: 'amr-registry.caas.intel.com/aipg-tf/model-zoo-ci:66-ci-build-pytorch-resnet50-bf16-inference'
+<docker image>: 'intel/image-recognition:pytorch-1.5.0-rc3-imz-2.2.0-resnet50-bfloat16-inference'
 <mode>: inference
 <model name>: ResNet50
 <package dir>: pytorch-resnet50-bf16-inference
 <package name>: pytorch-resnet50-bf16-inference.tar.gz
-<package url>: 'https://ubit-artifactory-or.intel.com/artifactory/aipg-local/aipg-tf/ML-container-build-from-partials/66/pytorch-resnet50-bf16-inference.tar.gz'
+<package url>: 'https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_2_0/pytorch-resnet50-bf16-inference.tar.gz'
 <precision>: BFloat16
 <use case>: image_recognition
 uri: models/quickstart/image_recognition/pytorch/resnet50/inference/bf16
```
```diff
@@ -29,12 +29,12 @@ slice_sets:
 uri: models/quickstart/image_recognition/pytorch/resnet50/inference/fp32/.docs/license.md
 name: README.md
 text_replace:
-<docker image>: 'amr-registry.caas.intel.com/aipg-tf/model-zoo-ci:66-ci-build-pytorch-resnet50-fp32-inference'
+<docker image>: 'intel/image-recognition:pytorch-1.5.0-rc3-imz-2.2.0-resnet50-fp32-inference'
 <mode>: inference
 <model name>: ResNet50
 <package dir>: pytorch-resnet50-fp32-inference
 <package name>: pytorch-resnet50-fp32-inference.tar.gz
-<package url>: 'https://ubit-artifactory-or.intel.com/artifactory/aipg-local/aipg-tf/ML-container-build-from-partials/66/pytorch-resnet50-fp32-inference.tar.gz'
+<package url>: 'https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_2_0/pytorch-resnet50-fp32-inference.tar.gz'
 <precision>: FP32
 <use case>: image_recognition
 uri: models/quickstart/image_recognition/pytorch/resnet50/inference/fp32
```
