
Dealing with integrated and dedicated graphics cards #873

Open
BryanChrisBrown opened this issue Jan 25, 2025 · 2 comments

Comments

@BryanChrisBrown

Hi all,

This is admittedly more of an implementation question than a spec question.

I've been using WebCodecs with great success on single-GPU systems: both my M1 MacBook and my Windows desktop (Ryzen CPU + NVIDIA GPU) work fantastically, especially with the WebGPU interop. That interop is crucial for my use case, which is decoding videos and running them through a graphics pipeline for Looking Glass holographic displays.

I've encountered an interesting problem on Windows laptops, which commonly have two graphics cards: integrated graphics on the CPU and a dedicated GPU. I've tested both Intel/NVIDIA and AMD/NVIDIA combinations. In both cases, the dual-GPU setup appears to introduce an extra CPU → GPU → GPU → CPU copy chain, which greatly affects performance when using the resulting video frames with WebGPU.
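For context, the decode-to-WebGPU path in question looks roughly like this. A minimal sketch, assuming an existing `GPUDevice`; the codec string, callbacks, and `makeDecoder` helper are illustrative, not part of any real app:

```javascript
// Minimal sketch of the WebCodecs -> WebGPU interop path (illustrative).
// Returns null when WebCodecs or a device is unavailable.
function makeDecoder(device) {
  if (typeof VideoDecoder === "undefined" || !device) return null;
  const decoder = new VideoDecoder({
    output: (frame) => {
      // importExternalTexture is the zero-copy import path; on dual-GPU
      // laptops the browser may still insert cross-adapter copies here.
      const tex = device.importExternalTexture({ source: frame });
      // ... bind `tex` in a render pass for the display pipeline ...
      frame.close(); // release the frame promptly to keep the decoder fed
    },
    error: (e) => console.error(e),
  });
  decoder.configure({
    codec: "avc1.64001f", // illustrative H.264 codec string
    hardwareAcceleration: "prefer-hardware",
  });
  return decoder;
}
```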

Right now my M1 MacBook is outperforming my NVIDIA RTX 3060, which is a bit unexpected 😅

My main question is: is there a way to force WebCodecs to use a specific GPU on the system? So far I've tried setting the Windows graphics preference for Chrome to the NVIDIA card. I've also tried adjusting the ANGLE backend; changing from Chrome's default to D3D11on12 has helped a bit, but there's still an extra copy in there.

I'm happy to share an example that demonstrates my current pipeline, but I'm curious whether there's another flag or checkbox I can look at in the Chrome configuration.

I've been really enjoying using WebCodecs and WebGPU; these new APIs are fantastic for what they unlock. Many thanks to all the contributors!

@padenot
Collaborator

padenot commented Jan 27, 2025

There is currently no way to do this via an API.

WebGL handles this via its context attributes: https://registry.khronos.org/webgl/specs/latest/1.0/#WebGLContextAttributes. We could imagine having something similar here.

WebGPU does it like so: https://gpuweb.github.io/gpuweb/#enumdef-gpupowerpreference
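For reference, a sketch of how those two existing hints look in use. Illustrative only; `"high-performance"` asks for the dedicated GPU and `"low-power"` for the integrated one, and the helper names are made up:

```javascript
// Sketch of the existing power-preference hints in WebGL and WebGPU.

// WebGL: a context attribute passed at context-creation time.
function getHighPerfGL(canvas) {
  return canvas.getContext("webgl", { powerPreference: "high-performance" });
}

// WebGPU: an option on requestAdapter. Returns null where WebGPU is absent.
async function getHighPerfAdapter(nav) {
  if (!nav || !nav.gpu) return null;
  return nav.gpu.requestAdapter({ powerPreference: "high-performance" });
}
```

A hypothetical WebCodecs equivalent could accept a similar hint in `VideoDecoderConfig`, or, as suggested below, take an existing `GPUDevice` so the two systems stay on the same adapter.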

@BryanChrisBrown
Author

Hi Paul,

Thanks for the response. It'd be great to have it synced to the WebGPU device if possible; perhaps passing the device object from a WebGPU context would be an interesting way to keep the two systems in sync. I'd personally find separate performance flags for WebGPU and WebCodecs a bit cumbersome, but I do have a pretty specific use case here.

Right now, I've been able to resolve it with the D3D11on12 ANGLE backend by changing the flag in Chrome, which gives me the same performance as a machine with a single dedicated GPU.
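For anyone else hitting this: the ANGLE backend can also be selected at launch rather than through chrome://flags. A sketch for Windows, assuming the default install path (the path is illustrative):

```shell
# Launch Chrome with the D3D11on12 ANGLE backend (path is illustrative)
"C:\Program Files\Google\Chrome\Application\chrome.exe" --use-angle=d3d11on12
```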
