Update README for 110 to include NNCF notebook (openvinotoolkit#578)
helena-intel authored Jun 20, 2022
1 parent df5b9e4 commit 7253366
Showing 1 changed file with 16 additions and 7 deletions.
notebooks/110-ct-segmentation-quantize/README.md
@@ -4,21 +4,30 @@

 ## What's Inside

-This folder contains three notebooks that show how to train,
+This folder contains four notebooks that show how to train,
 optimize, quantize and show live inference on a [MONAI](https://monai.io/) segmentation model with
 [PyTorch Lightning](https://pytorchlightning.ai/) and OpenVINO:

-1. [Data Preparation for 2D Segmentation of 3D Medical Data](data-preparation-ct-scan.ipynb)
-2. [Train a 2D-UNet Medical Imaging Model with PyTorch Lightning](pytorch-monai-training.ipynb)
-3. [Convert and Quantize a UNet Model and Show Live Inference](110-ct-segmentation-quantize.ipynb)
+1\. [Data Preparation for 2D Segmentation of 3D Medical Data](data-preparation-ct-scan.ipynb)

-We provided a pretrained model and a subset of the dataset for the quantization notebook, so it is not required to run the data preparation and training notebooks before running the quantization tutorial.
+2\. [Train a 2D-UNet Medical Imaging Model with PyTorch Lightning](pytorch-monai-training.ipynb)

-The quantization tutorial shows how to:
+3a. [Convert and Quantize a UNet Model and Show Live Inference using POT](110-ct-segmentation-quantize.ipynb)
+
+3b. [Convert and Quantize a UNet Model and Show Live Inference using NNCF](110-ct-segmentation-quantize-nncf.ipynb)
+
+The main difference between the POT and NNCF quantization notebooks is that
+NNCF performs quantization within the PyTorch framework, while POT performs
+quantization after the PyTorch model has been converted to OpenVINO IR format.
+We provide a pretrained model and a subset of the dataset for the quantization
+notebooks, so it is not required to run the data preparation and training
+notebooks before running the quantization tutorials.
+
+The quantization tutorials show how to:

 - Convert an ONNX model to OpenVINO IR with [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
-- Quantize a model with OpenVINO's [Post-Training Optimization Tool](https://docs.openvino.ai/latest/pot_compression_api_README.html) API
+- Quantize a model with OpenVINO's [Post-Training Optimization Tool](https://docs.openvino.ai/latest/pot_compression_api_README.html) API or [NNCF](https://github.com/openvinotoolkit/nncf)
 - Evaluate the F1 score metric of the original model and the quantized model
 - Benchmark performance of the original model and the quantized model
 - Show live inference with OpenVINO's async API and MULTI plugin
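
To ground the paragraph added above (NNCF quantizes within PyTorch, POT quantizes the already-converted OpenVINO IR), here is a minimal sketch of the two call sequences. It is not code from the notebooks: the tiny stand-in model, the random calibration data, the IR paths, and `pot_data_loader` are placeholders, and the imports follow the NNCF 2.x and OpenVINO 2022.x POT Python APIs as documented around this release.

```python
# Minimal sketch (not notebook code): where quantization happens in each flow.
import torch
from torch.utils.data import DataLoader, TensorDataset

# --- NNCF: quantization is applied inside the PyTorch framework ---
from nncf import NNCFConfig
from nncf.torch import create_compressed_model, register_default_init_args

unet = torch.nn.Sequential(torch.nn.Conv2d(1, 1, kernel_size=3, padding=1))  # stand-in for the 2D UNet
calib_loader = DataLoader(  # random (image, mask) pairs standing in for CT slices
    TensorDataset(torch.randn(8, 1, 512, 512), torch.randn(8, 1, 512, 512)), batch_size=1
)
nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 1, 512, 512]},  # assumed NCHW input shape
    "compression": {"algorithm": "quantization"},
})
nncf_config = register_default_init_args(nncf_config, calib_loader)
compression_ctrl, quantized_unet = create_compressed_model(unet, nncf_config)
# quantized_unet is still a torch.nn.Module: it can be fine-tuned, exported to ONNX,
# and only then converted to OpenVINO IR.

# --- POT: quantization is applied to the model already converted to OpenVINO IR ---
from openvino.tools.pot import IEEngine, create_pipeline, load_model, save_model

# "unet.xml"/"unet.bin" are placeholder IR files, and pot_data_loader stands in for a
# subclass of openvino.tools.pot.DataLoader (defined in the notebook), so this half
# is a call-sequence outline rather than runnable code.
ir_model = load_model({"model_name": "unet", "model": "unet.xml", "weights": "unet.bin"})
engine = IEEngine(config={"device": "CPU"}, data_loader=pot_data_loader)
pipeline = create_pipeline(
    [{"name": "DefaultQuantization", "params": {"target_device": "ANY", "stat_subset_size": 300}}],
    engine,
)
quantized_ir = pipeline.run(ir_model)
save_model(quantized_ir, save_path="quantized_model")
```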

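The "Convert an ONNX model" and "Benchmark performance" bullets map to the `mo` and `benchmark_app` command-line tools from the openvino-dev package. A hedged sketch of those two steps, invoked here via subprocess (the notebooks run them from Jupyter cells instead); `unet.onnx` and the output paths are placeholders:

```python
import subprocess

# Convert the ONNX model exported from PyTorch to OpenVINO IR (model/unet.xml + model/unet.bin).
# "unet.onnx" is a placeholder for the model exported in the training/quantization notebooks.
subprocess.run(["mo", "--input_model", "unet.onnx", "--output_dir", "model"], check=True)

# Benchmark the converted (or quantized) IR for 15 seconds on CPU using the default async mode.
subprocess.run(["benchmark_app", "-m", "model/unet.xml", "-d", "CPU", "-t", "15"], check=True)
```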
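
The last bullet refers to OpenVINO's asynchronous inference API and the MULTI device plugin. Below is a minimal sketch of that pattern with the `openvino.runtime` Python API; the IR path and the random "frames" are placeholders, and the notebooks wrap the same idea in a visualization loop over CT slices:

```python
import numpy as np
from openvino.runtime import AsyncInferQueue, Core

core = Core()
model = core.read_model("quantized_model/unet.xml")  # placeholder path to the quantized IR
# MULTI spreads inference over several devices; fall back to CPU if no GPU is present.
device = "MULTI:CPU,GPU" if "GPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device)

results = {}

def on_done(request, frame_id):
    # Called when an async request finishes: keep the segmentation output per frame.
    results[frame_id] = request.get_output_tensor(0).data.copy()

queue = AsyncInferQueue(compiled, 4)  # pool of 4 parallel infer requests
queue.set_callback(on_done)

frames = np.random.rand(16, 1, 1, 512, 512).astype(np.float32)  # stand-in for CT slices
for frame_id, frame in enumerate(frames):
    queue.start_async({0: frame}, frame_id)
queue.wait_all()
print(f"Collected {len(results)} segmentation results")
```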