model-optimizer

Project structure


    |-- root
        |-- extensions
            |-- front/caffe
                |-- CustomLayersMapping.xml.example - example file for registering custom Caffe layers in the 2017R3 public manner
        |-- mo
            |-- back - Back-End logic: contains IR emitting logic
            |-- front - Front-End logic: contains matching between framework-specific layers and IR-specific ones, and calculation of output shapes for each registered layer
            |-- graph - Graph utilities to work with the internal IR representation
            |-- middle - Graph transformations - optimizations of the model
            |-- pipeline - Sequence of steps required to create IR for each framework
            |-- utils - Utility functions
        |-- tf_call_ie_layer - Sources for TensorFlow fallback in Inference Engine during model inference
        |-- mo.py - Centralized entry point that can be used for any supported framework
        |-- mo_caffe.py - Entry point particularly for Caffe
        |-- mo_mxnet.py - Entry point particularly for MXNet
        |-- mo_tf.py - Entry point particularly for TensorFlow
        |-- ModelOptimizer - Entry point particularly for Caffe that contains the same CLI as the 2017R3 publicly released Model Optimizer

Prerequisites

Model Optimizer requires:

  1. Python 3 or newer

  2. [Optional] Please read about the use cases that require Caffe to be available on the machine (:doc:caffe_dependency) and follow the build steps described there (:doc:caffe_build).

Installation instructions

  1. Go to the Model Optimizer folder:

         cd PATH_TO_INSTALL_DIR/deployment_tools/model_optimizer/model_optimizer_tensorflow

  2. Create a virtual environment and activate it. This option is strongly recommended because it creates a Python sandbox, so Model Optimizer dependencies do not affect the global Python configuration, installed libraries, etc. At the same time, the --system-site-packages flag ensures that system-wide Python libraries are also available inside this sandbox. Skip this step only if you really want to install all Model Optimizer dependencies globally:

     • Create the environment:

           virtualenv -p /usr/bin/python3.6 .env3 --system-site-packages

     • Activate it:

           . .env3/bin/activate

  3. Install dependencies. If you want to convert models from one particular framework only, use one of the available requirements_*.txt files corresponding to that framework: for example, for Caffe use requirements_caffe.txt, and so on. If you later decide to switch to other frameworks, install their dependencies using the same mechanism (see the example after this list):

     pip3 install -r requirements.txt
     
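For example, to install only the Caffe-specific dependencies mentioned above:

     pip3 install -r requirements_caffe.txt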

Command-Line Interface (CLI)

The following short examples are framework-dependent. Please read the complete help output with the --help option for details across all frameworks:

    python3 mo.py --help

There are several scripts that convert a model:

  1. mo.py -- universal entry point that can convert a model from any supported framework

  2. mo_caffe.py -- dedicated script for Caffe models conversion

  3. mo_mxnet.py -- dedicated script for MXNet models conversion

  4. mo_tf.py -- dedicated script for TensorFlow models conversion

  5. mo_onnx.py -- dedicated script for ONNX models conversion

  6. mo_kaldi.py -- dedicated script for Kaldi models conversion

mo.py can deduce the original framework in which the input model was trained from the extension of the model file. Alternatively, the --framework option can be used for this purpose when the model files do not have standard extensions (.pb for TensorFlow models, .params for MXNet models, .caffemodel for Caffe models). So, the following commands are equivalent:

    python3 mo.py --input_model /user/models/model.pb
    python3 mo.py --framework tf --input_model /user/models/model.pb

The following examples illustrate the shortest command lines to convert a model per framework.

Convert TensorFlow model

To convert a frozen TensorFlow model contained in a binary file model-file.pb, run the dedicated entry point mo_tf.py:

    python3 mo_tf.py --input_model model-file.pb
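
If the input shape is not fully defined in the frozen graph, it usually has to be provided explicitly. A minimal sketch, assuming the standard --input_shape option and a hypothetical 1x224x224x3 input (check --help for the exact syntax in your version):

    python3 mo_tf.py --input_model model-file.pb --input_shape [1,224,224,3]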

Convert Caffe model

To convert a Caffe model contained in model-file.prototxt and model-file.caffemodel, run the dedicated entry point mo_caffe.py:

    python3 mo_caffe.py --input_model model-file.caffemodel
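
By default, Model Optimizer looks for a .prototxt file with the same name as the .caffemodel file. If the topology file is named differently, its path can be passed explicitly; the example below assumes the --input_proto option and an illustrative deploy.prototxt file name (see --help):

    python3 mo_caffe.py --input_model model-file.caffemodel --input_proto deploy.prototxt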

Convert MXNet model

To convert an MXNet model described by model-file-symbol.json and model-file-0000.params, run the dedicated entry point mo_mxnet.py:

    python3 mo_mxnet.py --input_model model-file

NOTE: For TensorFlow*, all Placeholder ops are represented as Input layers in the final IR.

Convert ONNX* model

The Model Optimizer assumes that you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.

To convert a model, use the mo_onnx.py script and pass the path to the input .onnx model file:

    python3 mo_onnx.py --input_model model-file.onnx
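
By default, the generated IR files are written to the current working directory. A minimal sketch, assuming the common --output_dir option is available (check --help):

    python3 mo_onnx.py --input_model model-file.onnx --output_dir OUTPUT_DIR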

Input channel re-ordering, scaling, subtraction of mean values, and other preprocessing features are not applied by default. To pass the necessary values to Model Optimizer, run mo.py (or mo_tf.py, mo_caffe.py, mo_mxnet.py) with --help and examine all available options; a sketch of such a conversion follows.
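
For illustration, a minimal sketch of a conversion with preprocessing, assuming the --reverse_input_channels, --mean_values, and --scale options and illustrative values (verify the names and syntax against --help for your version):

    python3 mo.py --input_model /user/models/model.pb --reverse_input_channels --mean_values [123.68,116.78,103.94] --scale 255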

Working with Inference Engine

At the moment, Inference Engine is the only consumer of the IR models that Model Optimizer produces. The whole workflow and more documentation on the structure of the IR are provided in the Developer Guide of Inference Engine. Note that the sections about running Model Optimizer there refer to the old version of the tool and cannot be applied to the current version of Model Optimizer.

How to run unit-tests

  1. Run tests with:
    python -m unittest discover -p "*_test.py" [-s PATH_TO_DIR]
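
For example, to run only the tests that live under the mo package (the directory passed to -s is illustrative; point it at whichever test directory you need):

    python -m unittest discover -p "*_test.py" -s mo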

* Other names and brands may be claimed as the property of others.