Convert the checkpoint model type to TensorFlow Lite or at least saved_model format #162
Hi @kapilb7, this code should work for the CXR-2 model, using TensorFlow 1.15:

```python
import tensorflow as tf

# Model file paths
output_file = 'model_CXR-2.tflite'
meta_file = 'models/COVID-Net_CXR-2/model.meta'
ckpt_path = 'models/COVID-Net_CXR-2/model'

# Model tensor names
# These work for CXR-2, but may need to be changed for other models
image_tensor_name = 'input_1:0'
softmax_tensor_name = 'norm_dense_2/Softmax:0'

graph = tf.Graph()
with graph.as_default():
    sess = tf.Session()
    with sess.as_default():
        # Load graph and pretrained weights
        saver = tf.train.import_meta_graph(meta_file)
        saver.restore(sess, ckpt_path)

        # Get input/output tensors
        image = graph.get_tensor_by_name(image_tensor_name)
        output = graph.get_tensor_by_name(softmax_tensor_name)

        # Convert the model
        converter = tf.lite.TFLiteConverter.from_session(sess, [image], [output])
        tflite_model = converter.convert()

# Save the model
with open(output_file, 'wb') as f:
    f.write(tflite_model)
```

On my end, the resulting model seems to work correctly using this simple test script with a dummy input:

```python
import numpy as np
import tensorflow as tf

# TFLite file path
model_file = 'model_CXR-2.tflite'

# Make model interpreter
interpreter = tf.lite.Interpreter(model_path=model_file)
interpreter.allocate_tensors()

# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Make dummy input for testing purposes
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]
input_data = np.ones([1, height, width, 3], dtype=np.float32)

# Run model
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# View results
output_data = interpreter.get_tensor(output_details[0]['index']).squeeze()
print(output_data)
```
Hi, I was able to get a .tflite model from your code, but I always get a positive prediction when I run inference on a sample image. The inference.py script, which uses the checkpoint model directly, predicts correctly.
It looks like you haven't normalized the image to the range [0, 1] that the model expects, which explains why the results differ. Try changing this line: `img = np.array(img, dtype=np.float32)` to: `img = np.array(img, dtype=np.float32)/255.`
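To make the normalization point above concrete, here is a minimal sketch of the preprocessing difference using NumPy only; the image shape and pixel value are placeholders, not taken from the actual CXR-2 pipeline:

```python
import numpy as np

# Placeholder for a decoded image (e.g. from PIL or OpenCV):
# uint8 pixels in [0, 255]; the shape is illustrative only
img = np.full((224, 224, 3), 128, dtype=np.uint8)

# Without normalization: values stay in [0, 255] -- not what the model expects
raw = np.array(img, dtype=np.float32)

# With normalization: values scaled into [0, 1]
norm = np.array(img, dtype=np.float32) / 255.

assert raw.max() > 1.0
assert 0.0 <= norm.min() and norm.max() <= 1.0

# Add the batch dimension before passing to the TFLite interpreter
input_data = np.expand_dims(norm, axis=0)
print(input_data.shape)  # -> (1, 224, 224, 3)
```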
Cool, that worked. I forgot about that after converting it. 😅
Hi, I tried converting to and loading the SavedModel format. The conversion succeeds, but when I try loading the model I get this error: [error not captured]. This is the code I ran to convert the checkpoint format to SavedModel:
That's odd, your code seems to work for me. This is the code I ran:

```python
import tensorflow as tf

# Model paths
output_dir = 'models/COVID-Net_CXR-2/savedModel'
meta_file = 'models/COVID-Net_CXR-2/model.meta'
ckpt_path = 'models/COVID-Net_CXR-2/model'

graph = tf.Graph()
with graph.as_default():
    sess = tf.Session()
    with sess.as_default():
        # Load graph and pretrained weights
        saver = tf.train.import_meta_graph(meta_file)
        saver.restore(sess, ckpt_path)

        # Save model
        builder = tf.saved_model.builder.SavedModelBuilder(output_dir)
        builder.add_meta_graph_and_variables(
            sess,
            [tf.saved_model.TRAINING, tf.saved_model.SERVING],
            strip_default_attrs=True)
        builder.save()
```
Hi, can someone help with converting the checkpoint model type to TensorFlow Lite, or at least to the saved_model format? I tried and looked through multiple sources, but kept hitting dead ends...