[DOCS] Fix missing snippet and line (openvinotoolkit#17292)
* fix snippet path

* fix indent
tsavina authored Apr 28, 2023
1 parent 058c070 commit 62ed45b
Showing 2 changed files with 5 additions and 4 deletions.
5 changes: 3 additions & 2 deletions docs/MO_DG/prepare_model/MO_Python_API.md
@@ -4,8 +4,9 @@
 
 Model Optimizer (MO) has a Python API for model conversion, which is represented by the ``convert_model()`` method in the openvino.tools.mo namespace.
 
-``convert_model()`` has all the functionality available from the command-line tool, plus the ability to pass Python model objects, such as a Pytorch model or TensorFlow Keras model directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts). In addition to input models consumed directly from Python, ``convert_model`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO.
-``convert_model()`` returns an openvino.runtime.Model object which can be compiled and inferred or serialized to IR.
+``convert_model()`` has all the functionality available from the command-line tool, plus the ability to pass Python model objects, such as a Pytorch model or TensorFlow Keras model directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts). In addition to input models consumed directly from Python, ``convert_model`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO.
+
+``convert_model()`` returns an openvino.runtime.Model object which can be compiled and inferred or serialized to IR.
 
 Example of converting a PyTorch model directly from memory:
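
The example itself is collapsed in this diff view. Below is a minimal sketch of the pattern the surrounding text describes, assuming torchvision is installed; the resnet50 model and the output path are illustrative choices, not taken from the collapsed snippet:

```python
import torchvision
from openvino.tools.mo import convert_model
from openvino.runtime import serialize

# Build a PyTorch model object in memory; any torch.nn.Module could stand in here
pt_model = torchvision.models.resnet50(weights=None)

# Convert the Python object directly -- no intermediate files, no need to
# leave the training environment
ov_model = convert_model(pt_model)

# The returned openvino.runtime.Model can be compiled and inferred, or
# serialized to IR as shown here
serialize(ov_model, "resnet50.xml")
```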

4 changes: 2 additions & 2 deletions docs/OV_Runtime_UG/supported_plugins/CPU.md
@@ -108,8 +108,8 @@ to query ``ov::device::capabilities`` property, which should contain ``BF16`` in
 If the model has been converted to ``bf16``, the ``ov::hint::inference_precision`` is set to ``ov::element::bf16`` and can be checked via
 the ``ov::CompiledModel::get_property`` call. The code below demonstrates how to get the element type:
 
-.. doxygensnippet:: snippets/cpu/Bfloat16Inference1.cpp
-   :language: py
+.. doxygensnippet:: docs/snippets/cpu/Bfloat16Inference1.cpp
+   :language: cpp
    :fragment: [part1]
 
 To infer the model in ``f32`` precision instead of ``bf16`` on targets with native ``bf16`` support, set the ``ov::hint::inference_precision`` to ``ov::element::f32``.
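
Neither snippet body appears in this diff, but a rough Python equivalent of the two steps above (reading the effective precision, then forcing ``f32``) could look like the following; ``model.xml`` is a placeholder path, and ``INFERENCE_PRECISION_HINT`` is assumed to be the string key corresponding to ``ov::hint::inference_precision``:

```python
import openvino.runtime as ov

core = ov.Core()

# Compile for CPU; on bf16-capable hardware the plugin may select bf16 by default
compiled = core.compile_model("model.xml", "CPU")

# Query the effective inference precision (ov::hint::inference_precision in C++)
print(compiled.get_property("INFERENCE_PRECISION_HINT"))

# Recompile with f32 forced, even on targets with native bf16 support
compiled_f32 = core.compile_model(
    "model.xml", "CPU", {"INFERENCE_PRECISION_HINT": "f32"}
)
print(compiled_f32.get_property("INFERENCE_PRECISION_HINT"))
```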