Motivation
I notice that if I want to deploy an LLM on a GPU with a specific device ID, the only way to do so is export CUDA_VISIBLE_DEVICES=xxx.
Is there any other way to set the specific device ID for an lmdeploy instance, so that tensors from other device IDs can still be addressed on this device, rather than being limited to the IDs listed in CUDA_VISIBLE_DEVICES?
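For reference, this is a minimal sketch of the workaround described above, assuming the lmdeploy Python pipeline API; the GPU index and model name are only illustrative. The limitation comes from the fact that the variable must be set before any CUDA context is created, and the selected GPU is then renumbered inside the process.

```python
import os

# Workaround described above: make only physical GPU 2 visible to this process.
# Must be set before CUDA is initialized (i.e. before importing lmdeploy/torch).
# Inside the process that GPU becomes cuda:0, and tensors on other physical
# GPUs can no longer be addressed directly.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

from lmdeploy import pipeline

# "internlm/internlm2-chat-7b" is only an illustrative model name.
pipe = pipeline("internlm/internlm2-chat-7b")
print(pipe(["Hello"]))
```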
Related resources
No response
Additional context
No response