
Improve compatibility for mac #11

Closed
ynx0 wants to merge 2 commits

Conversation

ynx0 commented May 18, 2024

Fixes:

  • make the downloader script use curl instead of wget on mac (wget isn't available by default); a sketch of the change is below
  • fix deps so that at least mac gets CPU provider running

I imagine this works on linux as well but haven't tested it out.

onnxruntime-gpu is not available for mac (see platform matrix)

I've tried to get the CoreML provider to work, but I'm having trouble getting it all the way through, so I haven't included that work here. Help would be appreciated.
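
For reference, a minimal sketch of the kind of substitution the downloader change makes; the URL and output path below are placeholders, not the repo's actual values (curl ships with macOS, wget doesn't):

MODEL_URL="https://example.com/models/tinyphysics.onnx"  # hypothetical URL
OUT="models/tinyphysics.onnx"                            # hypothetical path

# before (relies on wget, which macOS doesn't ship):
#   wget -O "$OUT" "$MODEL_URL"
# after (curl is available out of the box on macOS and most Linux distros):
curl -L -o "$OUT" "$MODEL_URL"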

makes it work on mac where wget isn't available by default
ynx0 changed the title Update downloader script to use curl instead for mac compat Improve compatibility for mak May 19, 2024
ynx0 changed the title Improve compatibility for mak Improve compatibility for mac May 19, 2024
ynx0 (Author) commented May 19, 2024

Adding onnxruntime-silicon==1.16.3; sys_platform == 'darwin' to requirements.txt, together with the following patch, seems to work (caveat: it requires macOS 14).

 tinyphysics.py | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tinyphysics.py b/tinyphysics.py
index b016b0a..6200c53 100644
--- a/tinyphysics.py
+++ b/tinyphysics.py
@@ -54,10 +54,15 @@ class TinyPhysicsModel:
     options.intra_op_num_threads = 1
     options.inter_op_num_threads = 1
     options.log_severity_level = 3
-    if 'CUDAExecutionProvider' in ort.get_available_providers():
+    providers = ort.get_available_providers()
+    if 'CUDAExecutionProvider' in providers:
       if debug:
         print("ONNX Runtime is using GPU")
       provider = ('CUDAExecutionProvider', {'cudnn_conv_algo_search': 'DEFAULT'})
+    elif 'CoreMLExecutionProvider' in providers:
+      if debug:
+        print("ONNX Runtime is using CoreMLExecutionProvider")
+      provider = ('CoreMLExecutionProvider', dict())
     else:
       if debug:
         print("ONNX Runtime is using CPU")
--
2.45.1

The only problem is that it results in a 0.3 second slowdown 😬
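
A quick way to check which providers a given install actually exposes is the same ort.get_available_providers() call tinyphysics.py uses; with onnxruntime-silicon I'd expect CoreML to show up alongside CPU:

import onnxruntime as ort

# Prints the execution providers this onnxruntime build supports, e.g.
# ['CoreMLExecutionProvider', 'CPUExecutionProvider'] with onnxruntime-silicon
# or ['CPUExecutionProvider'] with the plain onnxruntime package.
print(ort.get_available_providers())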

nuwandavek (Contributor)
Replaced onnxruntime-gpu requirement with simply onnxruntime. This is fast enough for the reports, and should work well on mac?
Feel free to reopen the PR / make an issue if it does not.

nuwandavek closed this May 29, 2024
wtoth commented Jun 6, 2024

Note to those using macOS < 14.x: you can just downgrade the package to 1.16.0 or lower:
onnxruntime-silicon==1.16.0; sys_platform == 'darwin'
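
Putting the thread together, a possible requirements.txt for cross-platform installs might look like the sketch below; the sys_platform != 'darwin' line is just one way to keep the plain CPU package elsewhere, and the pin should match your macOS version:

# plain CPU onnxruntime outside of macOS (what the repo switched to)
onnxruntime; sys_platform != 'darwin'
# CoreML-capable build on macOS: 1.16.3 needs macOS 14+,
# use 1.16.0 or lower on older macOS versions
onnxruntime-silicon==1.16.3; sys_platform == 'darwin'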
