Inference time on Jetson Nano #4
Comments
@jankais3r thanks for the performance results. It indeed looks like the Jetson Nano 2GB won't be able to do it. How many depth levels did you use, and at what resolution? Would a GTX 1080 be strong enough for continuous mode? Thanks, Sieuwe
Hi @jankais3r, I am trying to use it on a Jetson Xavier NX. Can you let me know which Docker version you are using? The recommended version is 19.03.11 but I have 20.10.7, and I am running into issues while building the Docker image...
Hi, I cannot comment on the Docker approach, as I used a Conda environment for my tests.
What would be the expected inference time on a Jetson Nano, with 4 to 5 different depth levels?
Thanks
Sieuwe
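For anyone wanting to report comparable numbers, a minimal sketch of how per-frame inference time could be measured is shown below. This assumes a PyTorch model on a CUDA-capable device; the input resolution, warm-up count, and the `benchmark` helper are placeholders rather than this repository's actual pipeline, and if the pipeline runs one pass per depth level the per-frame time would scale accordingly.

```python
import time
import torch

def benchmark(model, input_size=(1, 3, 256, 256), runs=50, device="cuda"):
    """Rough per-frame inference timing. input_size and run counts are placeholders."""
    model = model.to(device).eval()
    dummy = torch.randn(*input_size, device=device)

    with torch.no_grad():
        # Warm-up so one-time CUDA initialization is not counted.
        for _ in range(10):
            model(dummy)
        torch.cuda.synchronize()

        start = time.perf_counter()
        for _ in range(runs):
            model(dummy)
        torch.cuda.synchronize()
        elapsed = time.perf_counter() - start

    per_frame = elapsed / runs
    print(f"{per_frame * 1000:.1f} ms per frame (~{1.0 / per_frame:.1f} FPS)")

# Example usage (replace with the depth-estimation model you are testing):
# benchmark(your_depth_model)
```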