Commit 7aa4dd0 (1 parent: 1880949)

use the int8 type for inference input and output when quantizing with int8_no_float mode

File tree

1 file changed: +2 −2 lines changed

tensorflow/lite/python/lite.py (+2 −2)
```diff
@@ -704,8 +704,8 @@ def convert(self):
                                      **converter_kwargs)

     if quant_mode.post_training_int8_no_float():
-      result = self._calibrate_quantize_model(result, constants.FLOAT,
-                                              constants.FLOAT, False)
+      result = self._calibrate_quantize_model(result, constants.INT8,
+                                              constants.INT8, False)
     elif quant_mode.post_training_int8_allow_float():
       result = self._calibrate_quantize_model(result, constants.FLOAT,
                                               constants.FLOAT, True)
```
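With this change, a model quantized in int8_no_float mode takes int8 tensors at its inputs and outputs instead of float ones, so callers no longer need a float quantize/dequantize step at the model boundary. For background, here is a minimal sketch of the affine int8 mapping (`real ≈ scale * (q - zero_point)`) that post-training quantization is based on; the helper functions and the example range are illustrative assumptions, not TFLite's actual calibration code:

```python
# Illustrative affine int8 quantization: real ≈ scale * (q - zero_point).
# In TFLite, scale and zero_point come from calibration over a
# representative dataset; here they are chosen by hand for the example.

def quantize(x, scale, zero_point):
    """Map a float value onto the int8 grid, clamping to [-128, 127]."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Map an int8 value back to its approximate float value."""
    return scale * (q - zero_point)

# Hypothetical tensor calibrated to the range [0.0, 6.0] (e.g. a ReLU6 output):
scale = 6.0 / 255.0   # one step of the 256-level int8 grid
zero_point = -128     # so that real 0.0 maps to int8 -128

q = quantize(3.0, scale, zero_point)
x = dequantize(q, scale, zero_point)   # close to 3.0, within one step
```

The round trip loses at most about half a quantization step, which is why calibration narrows `scale` to the range the tensor actually uses.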
