This is an unnecessary question #21
Comments
Your work is excellent, and I have a question: will the current reconstruction work develop into generating videos from sparse viewpoints? Or is it more focused on generating videos with physical properties?
Hello, thank you for your interest in our work. I'm not quite sure what you mean. If "generating videos from sparse viewpoints" means generating video with a diffusion model directly from sparse viewpoints, and "generating videos with physical properties" means first reconstructing the underlying 3D scene with a representation such as 3DGS and then synthesizing novel views, then both methods have their own pros and cons. The reconstruction approach allows for a better understanding of the 3D world and synthesizes more consistent views. However, diffusion methods can better hallucinate unseen parts of the scene, which is great.
Yes, I would like to know: can reconstruction and video generation be combined? Or can they mutually benefit each other?
Yes, reconstruction may be used to improve the consistency of the diffusion model; here is an example. The generated multi-view output can in turn be utilized for reconstruction, such as the Sora example in our paper.
Thanks for your detailed explanation, which helped me clarify my doubts.
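For anyone skimming the thread, below is a minimal, purely illustrative sketch of the two directions discussed above (diffusion output fed into reconstruction, and reconstructed renders feeding back into generation). The function names and numpy placeholders are assumptions for illustration only; they are not the repository's API or the paper's implementation.

```python
import numpy as np

# Hypothetical stand-ins for the two components discussed in this thread.
# A real pipeline would replace these with an actual video diffusion model
# and an actual reconstruction method such as 3DGS.

def generate_multiview_frames(prompt: str, num_views: int, hw=(64, 64)) -> np.ndarray:
    """Placeholder for a diffusion model that hallucinates multi-view frames."""
    rng = np.random.default_rng(0)
    return rng.random((num_views, *hw, 3))  # (V, H, W, 3) fake frames

def reconstruct_scene(frames: np.ndarray) -> np.ndarray:
    """Placeholder for fitting a 3D representation (e.g. 3DGS) to the frames.
    Here it simply averages the frames to stand in for a fitted scene."""
    return frames.mean(axis=0)

def render_views(scene: np.ndarray, num_views: int) -> np.ndarray:
    """Placeholder renderer: repeats the 'scene' for the requested viewpoints."""
    return np.broadcast_to(scene, (num_views, *scene.shape)).copy()

# Direction 1: generated multi-view frames are used for reconstruction
# (the Sora example mentioned in the paper follows this direction).
frames = generate_multiview_frames("a chair on a lawn", num_views=8)
scene = reconstruct_scene(frames)

# Direction 2: views rendered from the reconstruction are more consistent
# and could condition or regularize the next round of diffusion sampling.
consistent_views = render_views(scene, num_views=8)
print(frames.shape, scene.shape, consistent_views.shape)
```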