Rework synchronization #871
Hello @SaschaWillems, I was just looking at your shadow mapping example on the master branch and wondered why a single shadow map buffer is enough. I also happened to notice this issue. Is it the following block of subpass dependencies

```cpp
dependencies[0].srcSubpass = VK_SUBPASS_EXTERNAL;
dependencies[0].dstSubpass = 0;
dependencies[0].srcStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[0].dstStageMask = VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT;
dependencies[0].srcAccessMask = VK_ACCESS_SHADER_READ_BIT;
dependencies[0].dstAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT;
dependencies[0].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
```

that prevents the shadow map from being written to before the previous frame has finished reading from it? Similar to how the subpass dependencies for the main rendering pass allow using only a single depth buffer? (I have not yet fully understood VkSubpassDependency.)

(Thank you for the wonderful examples btw, learning Vulkan would be much harder without them.)

(I also realize that this is maybe not the proper place to ask, but one could perhaps say it is coupled to the proper_sync_dynamic_cb branch. Commenting the code to explain how it works is perhaps a bit much if it assumes one understands VkSubpassDependency fully.)
Yes, the subpass dependency handles this. There is no need to duplicate the attachment per frame. I also plan on adding some additional documentation regarding the reworked synchronization.
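For context, the shadowmapping example pairs the dependency quoted above with a second one in the opposite direction, and together they order reads and writes of the single shadow map across frames. Below is a compilable sketch of both; the `VK_*` constants and the `shadowPassDependencies` helper are local stand-ins (values mirror the Vulkan 1.x headers to the best of my knowledge) so it builds without the Vulkan SDK:

```cpp
#include <array>
#include <cstdint>

// Local stand-ins so this sketch compiles without the Vulkan SDK.
// Values mirror the Vulkan headers, but treat them as assumptions here.
using VkFlags = uint32_t;
constexpr uint32_t VK_SUBPASS_EXTERNAL = ~0u;
constexpr VkFlags VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT        = 0x0080;
constexpr VkFlags VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT   = 0x0100;
constexpr VkFlags VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT    = 0x0200;
constexpr VkFlags VK_ACCESS_SHADER_READ_BIT                    = 0x0020;
constexpr VkFlags VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT = 0x0400;
constexpr VkFlags VK_DEPENDENCY_BY_REGION_BIT                  = 0x0001;

struct VkSubpassDependency {
  uint32_t srcSubpass, dstSubpass;
  VkFlags  srcStageMask, dstStageMask;
  VkFlags  srcAccessMask, dstAccessMask;
  VkFlags  dependencyFlags;
};

// Both external dependencies around the shadow (offscreen) render pass.
std::array<VkSubpassDependency, 2> shadowPassDependencies() {
  std::array<VkSubpassDependency, 2> dependencies{};

  // External -> subpass 0: depth writes must wait until earlier
  // fragment-shader reads of the shadow map are done (write-after-read).
  dependencies[0] = {VK_SUBPASS_EXTERNAL, 0,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT,
                     VK_ACCESS_SHADER_READ_BIT,
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
                     VK_DEPENDENCY_BY_REGION_BIT};

  // Subpass 0 -> external: depth writes must be visible before the scene
  // pass samples the shadow map in the fragment shader (read-after-write).
  dependencies[1] = {0, VK_SUBPASS_EXTERNAL,
                     VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
                     VK_ACCESS_SHADER_READ_BIT,
                     VK_DEPENDENCY_BY_REGION_BIT};
  return dependencies;
}
```

The first dependency is what allows a single shadow map: the depth write of frame N+1 is ordered after the shader reads of frame N, so no per-frame duplicate is needed.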
On the Rework Synchronization page, the cloning instruction is …
I don't think that makes sense for a branch. That branch is also a heavy work in progress and people shouldn't use it.
@SaschaWillems: Should I open a separate ticket for this? Just want to make sure this problem with the newly reworked synchronization got your attention. Disabling the overlay by …
There is no need to report these. The new synchronization is still very much a work in progress, and the way the UI overlay is handled will be completely reworked too.
- Now uses proper sync and multiple concurrent frames
- Better and more consistent naming
- Additional comments

Refs #871
A recent NVIDIA driver update broke my rendering loop; the change apparently made … Anyway, I went looking around at how the pros were doing synchronization of their rendering loop and naturally came here :). Here is what I gathered: in the Sascha Willems samples there is a wait for the in-flight fence right after the … (Line 533 in 4252b03). The Vulkan sample, on the other hand, waits for the in-flight fence right after calling …. This is a tricky subject, so I am not 100% sure about the above.
The correct way, as noted in this very issue, is the one used in the Khronos samples.

Yep.
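To make the ordering concrete, here is a minimal non-Vulkan simulation of the pattern the Khronos samples use: wait on the frame slot's in-flight fence first, then reset it and reuse that slot's resources. All names here (`Fence`, `renderFrame`, `MAX_FRAMES_IN_FLIGHT`) are invented for illustration:

```cpp
#include <array>
#include <cassert>

// Toy fence: signaled == the GPU has finished the submission that last
// used this frame slot's command buffer and semaphores.
struct Fence { bool signaled = true; };

constexpr int MAX_FRAMES_IN_FLIGHT = 2;
std::array<Fence, MAX_FRAMES_IN_FLIGHT> inFlightFences;
int currentFrame = 0;

// One loop iteration in the order the Khronos samples use.
void renderFrame() {
  Fence& fence = inFlightFences[currentFrame];

  // 1. Wait until this slot's previous submission has completed, so its
  //    per-frame resources are safe to reuse (stands in for vkWaitForFences).
  assert(fence.signaled);
  fence.signaled = false;  // stands in for vkResetFences

  // 2. Acquire swapchain image, 3. record command buffer, 4. submit with
  //    the fence (all elided). Here we pretend the GPU finishes immediately.
  fence.signaled = true;

  currentFrame = (currentFrame + 1) % MAX_FRAMES_IN_FLIGHT;
}
```

The key point is that the wait protects one slot's resources right before they are reused, instead of stalling the whole queue after every submit.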
The piece of code you are quoting from Sascha Willems' example is … Thanks for your input, as I am always trying to enhance my understanding of synchronization.
Yes, my bad for pointing to irrelevant code. The triangle.cpp example is much closer to what the official Vulkan samples do in terms of synchronization. EDIT: here is the relevant code location: Vulkan/examples/triangle/triangle.cpp, line 910 in 9c25dad.
My understanding: the triangle.cpp is not optimal in terms of semaphore usage: …

For the above to work you also need multiple copies of the resources used when building the command buffer. But I am pretty sure Sascha knows all this; the problem is the sheer amount of work it would take to update all the samples.
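The per-frame duplication described above can be sketched as one bundle of resources per frame in flight. Every name here is hypothetical, and strings stand in for Vulkan handles; this only illustrates the bookkeeping:

```cpp
#include <array>
#include <cstddef>
#include <string>

// Hypothetical per-frame bundle: each frame in flight owns its own command
// buffer, semaphores, and fence, so recording one frame never touches
// handles the GPU may still be using for another frame.
struct FrameResources {
  std::string commandBuffer;   // stand-in for VkCommandBuffer
  std::string imageAvailable;  // stand-in for VkSemaphore
  std::string renderFinished;  // stand-in for VkSemaphore
  std::string inFlightFence;   // stand-in for VkFence
};

constexpr std::size_t MAX_FRAMES_IN_FLIGHT = 2;

// Create one resource bundle per frame slot.
std::array<FrameResources, MAX_FRAMES_IN_FLIGHT> makeFrameResources() {
  std::array<FrameResources, MAX_FRAMES_IN_FLIGHT> frames{};
  for (std::size_t i = 0; i < frames.size(); ++i) {
    const std::string n = std::to_string(i);
    frames[i] = {"cmd" + n, "acquire" + n, "present" + n, "fence" + n};
  }
  return frames;
}
```

With this layout, frame slot `i` is only reused after waiting on `frames[i].inFlightFence`, which is exactly why each slot needs its own copies.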
Here is a very good tutorial about swapchains and synchronization: https://www.intel.com/content/www/us/en/developer/articles/training/api-without-secrets-introduction-to-vulkan-part-2.html

Just a quick note: this is an issue to track the synchronization rework for my samples, not a discussion topic. If you want to discuss this further, feel free to move it elsewhere.
The current synchronization involves a `vkQueueWaitIdle` after frame submission. While this makes things a lot easier, it is not optimal and shows bad practice. The samples should be updated to use proper synchronization with per-frame resources where required, and then ditch the `vkQueueWaitIdle`. This will be a large update as it affects all samples. Progress will be tracked in this branch.
Progress can be followed at https://docs.google.com/spreadsheets/d/1qS6eg0zsGRRKKUkz2eRqg0qV_CzSloMH6G2BoA0Ucrc/edit?usp=sharing
Second step will be fixing all the issues reported by the synchronization validation.