Hi all,

This is admittedly more of an implementation question than a spec question.
I’ve been using WebCodecs with great success on systems with a single GPU: both my M1 MacBook and my Windows desktop (Ryzen CPU + NVIDIA GPU) work fantastically, especially with the WebGPU interop, which is crucial for my use case of decoding videos and running them through a graphics pipeline for Looking Glass holographic displays.
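For context, the interop path I’m describing looks roughly like the sketch below (simplified; `feedChunks()` and `renderQuilt()` are placeholders standing in for my demuxer and the Looking Glass render pass, not real APIs):

```ts
// Sketch: decode with WebCodecs, import each VideoFrame into WebGPU
// as an external texture, then hand it to the rest of the pipeline.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter!.requestDevice();

const decoder = new VideoDecoder({
  output: (frame: VideoFrame) => {
    // Zero-copy import when the frame already lives on the same GPU as `device`.
    const external = device.importExternalTexture({ source: frame });
    renderQuilt(external); // placeholder for the Looking Glass render pass
    frame.close();         // release the decoder's frame as soon as possible
  },
  error: (e) => console.error(e),
});

decoder.configure({
  codec: "avc1.640028",                     // example codec string
  hardwareAcceleration: "prefer-hardware",
});

// feedChunks() is a placeholder for whatever produces EncodedVideoChunks.
for (const chunk of feedChunks()) decoder.decode(chunk);
await decoder.flush();
```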
I’ve encountered an interesting problem on Windows-based laptops, which commonly have two graphics adapters: integrated graphics on the CPU and a dedicated GPU. I’ve tested both Intel/NVIDIA and AMD/NVIDIA combinations, and in both cases the dual-GPU setup appears to introduce an extra copy between the two GPUs that routes through the CPU. This greatly affects performance when using the resulting video frames with WebGPU.
Right now my M1 MacBook is outperforming my NVIDIA RTX 3060, which is a bit unexpected 😅
My main question is: is there a way to force WebCodecs to use a specific GPU on the system? So far I’ve tried setting the Windows graphics performance preference for Chrome to the NVIDIA card. I’ve also tried adjusting the ANGLE backend; changing from Chrome’s default to D3D11on12 has helped a bit, but there’s still an extra copy in there.
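For completeness, the only per-API hints I’m aware of are the WebGPU adapter `powerPreference` and the WebCodecs `hardwareAcceleration` preference; as far as I can tell, neither lets you pin the decoder to a particular adapter. A minimal sketch:

```ts
// These express a preference, but neither selects a *specific* GPU.
const adapter = await navigator.gpu.requestAdapter({
  powerPreference: "high-performance", // hint toward the discrete GPU on dual-GPU laptops
});
const device = await adapter!.requestDevice();

const support = await VideoDecoder.isConfigSupported({
  codec: "avc1.640028",                    // example codec string
  hardwareAcceleration: "prefer-hardware", // hardware vs. software, not which GPU
});
console.log(support.supported, support.config);
```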
I’m happy to share an example that demonstrates my current pipeline, but I’m curious whether there’s another flag or setting I can look at in the Chrome configuration.
I’ve been really enjoying using WebCodecs and WebGPU; these new APIs are fantastic for what they unlock! Many thanks to all the contributors!
Thanks for the response here. It'd be great to have it synced to the WebGPU device if possible; perhaps passing the device object from a WebGPU context would be an interesting approach to keep the two systems in sync. I'd personally find separate performance flags for WebGPU and WebCodecs a bit cumbersome, but I do have a pretty specific use case here.
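Just to illustrate the idea, something along these lines; this is purely hypothetical, and the `device` config member below does not exist in the current spec:

```ts
// HYPOTHETICAL SKETCH: only requestAdapter/requestDevice below are real API;
// `device` is an invented VideoDecoderConfig member used to illustrate the
// suggestion of keeping WebCodecs and WebGPU on the same adapter.
const adapter = await navigator.gpu.requestAdapter({ powerPreference: "high-performance" });
const device = await adapter!.requestDevice();

const decoder = new VideoDecoder({
  output: (frame) => frame.close(), // real pipeline would import into WebGPU here
  error: (e) => console.error(e),
});

const config = {
  codec: "avc1.640028",                             // example codec string
  hardwareAcceleration: "prefer-hardware" as const,
  device,  // invented: pin decoded output frames to this GPUDevice's adapter
};

// The decoder would then (hypothetically) allocate its output VideoFrames on the
// same GPU, so importExternalTexture() stays zero-copy on dual-GPU machines.
decoder.configure(config);
```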
Right now, I've been able to resolve it with the D3D11on12 ANGLE backend by changing the flag in Chrome, which gives me the same performance as a machine with a single dedicated GPU.