
[Feature] Support deploy model on specific device id #2925

Open
BradZhone opened this issue Dec 19, 2024 · 1 comment

Comments

@BradZhone

Motivation

I notice that if I want to deploy an LLM on a GPU with a specific device id, the only option is to `export CUDA_VISIBLE_DEVICES=xxx`.
Is there any other way to set the device id of an lmdeploy instance, so that tensors from another device can still be addressed on this device, rather than being limited to the ids listed in CUDA_VISIBLE_DEVICES?

Related resources

No response

Additional context

No response

@lvhan028
Collaborator

Currently there is no other way than CUDA_VISIBLE_DEVICES.
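For reference, a minimal sketch of the CUDA_VISIBLE_DEVICES pattern the maintainer refers to. The key point is that the variable must be set before any CUDA-using library (torch, lmdeploy, ...) initializes the driver, and that the listed physical GPUs are renumbered from 0 inside the process. The device ids and the commented-out model name are examples, not part of the original thread.

```python
import os

# Must be set BEFORE any CUDA library initializes the driver; afterwards
# the listed physical GPUs are renumbered from 0 inside this process.
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"  # example: expose physical GPUs 2 and 3

# With this set, physical GPU 2 appears as "cuda:0" and GPU 3 as "cuda:1".
# An lmdeploy instance started now would only see these two devices, e.g.:
# from lmdeploy import pipeline
# pipe = pipeline("internlm/internlm2-chat-7b")  # hypothetical model name

visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
print(visible)  # ['2', '3'] — device ids visible to this process
```

Note the limitation the issue asks about: once the process starts, tensors on GPUs outside this list cannot be addressed, which is exactly why the author is asking for a per-instance device option.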
