Hello, how can I automatically fall back unsupported operators to the CPU when running a model with the Vulkan backend in ExecuTorch? This is the CMakeLists.txt for the application that runs the model with ExecuTorch.
And this is my main.cpp code.
This is the error output: D 00:00:00.000048 executorch:operator_registry.cpp:96] Successfully registered all kernels from shared library: NOT_SUPPORTED. And this is my code snippet for exporting the .pte file.
@SS-JIA can you take a look at the error?
Unfortunately, there is no way to fall back to the CPU, because during export the model has already been converted to a delegate-specific IR. However, this is definitely an issue with Vulkan's lowering logic, in that it is lowering operators with unsupported input/output dtypes.
The easiest fix here is to add support for running concat with integer inputs; I will put up a PR to enable that shortly. In the long term, I will also update Vulkan's lowering logic to avoid lowering ops that have unsupported dtypes as inputs.
Apologies for the inconvenience!
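As an illustration of the dtype-gated lowering described above, here is a minimal, self-contained Python sketch. This is not the actual ExecuTorch or Vulkan partitioner API; the op names, the dtype support table, and the `should_delegate` helper are all hypothetical, and only show the general idea of skipping nodes whose input dtypes a backend does not support so that they stay on the CPU:

```python
# Hypothetical sketch of dtype-gated partitioning; NOT the real ExecuTorch API.

# Per-op dtype support table a backend partitioner might consult.
# (Illustrative values: the real Vulkan delegate's support table differs.)
SUPPORTED_DTYPES = {
    "aten.concat": {"float32", "float16"},            # e.g. no integer concat yet
    "aten.add": {"float32", "float16", "int32"},
}

def should_delegate(op: str, input_dtypes: list[str]) -> bool:
    """Delegate a node only if the op is known AND every input dtype is supported."""
    supported = SUPPORTED_DTYPES.get(op)
    if supported is None:
        return False  # unknown op: keep on CPU
    return all(dt in supported for dt in input_dtypes)

def partition(nodes: list[tuple[str, list[str]]]) -> tuple[list[str], list[str]]:
    """Split nodes into (delegated, cpu_fallback) op lists."""
    delegated, cpu = [], []
    for op, dtypes in nodes:
        (delegated if should_delegate(op, dtypes) else cpu).append(op)
    return delegated, cpu

# Toy graph: one supported node, one with unsupported integer inputs, one unknown op.
graph = [
    ("aten.add", ["float32", "float32"]),   # supported -> delegated to the backend
    ("aten.concat", ["int32", "int32"]),    # integer concat -> stays on CPU
    ("aten.mystery_op", ["float32"]),       # unknown op -> stays on CPU
]
delegated, cpu = partition(graph)
```

The key point the reply makes is that this check must happen at export time, inside the partitioner: once a node has been lowered into the delegate's IR, there is no per-operator CPU fallback at runtime.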
@jaeokchoi