Intel® Low Precision Optimization Tool supports the following TensorFlow 1.x and 2.x model formats.
TensorFlow model format | Supported? | Example | Comments |
---|---|---|---|
frozen pb | Yes | examples/tensorflow/image_recognition, examples/tensorflow/oob_models | |
Graph object | Yes | examples/helloworld/tf1.x, examples/tensorflow/style_transfer, examples/tensorflow/recommendation/wide_deep_large_ds | |
GraphDef object | Yes | | |
tf1.x checkpoint | Yes | examples/tensorflow/object_detection | |
keras.Model object | Yes | examples/helloworld/tf2.x | |
keras saved model | Yes | examples/helloworld/tf2.x | |
tf2.x saved model | TBD | | |
tf2.x h5 format model | TBD | | |
slim checkpoint | TBD | | |
tf1.x saved model | No | | No plan to support it |
tf2.x checkpoint | No | | A tf2.x checkpoint contains only weights and no description of the computation graph, so please use one of the supported tf2.x formats for quantization (see the sketch below the table) |
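For instance, if you only have a tf2.x checkpoint, you can rebuild the model in code, load the weights, and export it in a supported format. A minimal sketch, assuming a tf.keras model; the architecture, checkpoint path, and output directory below are hypothetical:

```python
import tensorflow as tf

# Rebuild the model architecture; the checkpoint alone carries only weights.
model = tf.keras.applications.MobileNet(weights=None)  # hypothetical architecture
model.load_weights('./tf2_ckpt/mobilenet')             # hypothetical checkpoint path

# Export to a supported format: a keras saved model directory,
# or pass the in-memory keras.Model object to the quantizer directly.
model.save('./mobilenet_keras_saved_model')            # hypothetical output path
```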
You can pass the model path or object directly to the quantizer, for example:
```python
from lpot import Quantization

quantizer = Quantization('./conf.yaml')
dataset = mnist_dataset(mnist.test.images, mnist.test.labels)
data_loader = quantizer.dataloader(dataset=dataset, batch_size=1)
# model can be a frozen pb path, a Graph or GraphDef object, a tf1.x
# checkpoint path, a keras.Model object, or a keras saved-model path
model = frozen_pb
q_model = quantizer(model, q_dataloader=data_loader, eval_func=eval_func)
```
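In the snippet above, `mnist_dataset`, `mnist`, `frozen_pb`, and `eval_func` are user-supplied. A minimal sketch of what the dataset wrapper and evaluation function might look like, assuming an indexable dataset is acceptable to the dataloader; every name here is illustrative, not part of the lpot API:

```python
import numpy as np

class mnist_dataset:
    """Hypothetical dataset wrapper: an indexable object exposing
    __getitem__ and __len__ that yields (input, label) pairs."""
    def __init__(self, images, labels):
        self.images = images.astype(np.float32)
        self.labels = labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        return self.images[index], self.labels[index]

def eval_func(model):
    """Hypothetical evaluation function: run the candidate model on a
    held-out set and return a scalar metric (higher is better)."""
    correct = 0
    for image, label in zip(mnist.test.images, mnist.test.labels):
        pred = run_inference(model, image)  # run_inference is a user-supplied helper
        correct += int(np.argmax(pred) == np.argmax(label))
    return correct / len(mnist.test.labels)
```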