
Question about RGB Activation Padding #4

Open
JulianKnodt opened this issue Jun 24, 2021 · 2 comments

Comments

@JulianKnodt

Hi, this is super interesting work that seems to solve a key problem with NeRF.
One thing I noticed when reading through your paper was that you used a modified RGB activation function.
I tried using this padded sigmoid with normal NeRF and noticed that it tends to cause background pixels to have non-zero density, because they can saturate all the way to black or white. Did you encounter the same thing? I was looking at the acc image returned from volumetric integration of the weights. I'm not sure if it's significant, but did you compare the normal sigmoid against the widened one?

I was also wondering if you experimented with shrinking the range of the sigmoid? I tried lowering the range, and that seems to produce much cleaner accs, at the cost of a slightly smaller RGB range, which seems negligible.
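For concreteness, here is a minimal JAX sketch of the two variants I mean; the padding constant and function names are just illustrative, not taken from this repo:

```python
from jax.nn import sigmoid

def padded_rgb_activation(x, padding=0.001):
    """Widened sigmoid: maps pre-activations into [-padding, 1 + padding],
    so exact 0 and 1 are reachable at finite inputs (illustrative constant)."""
    return (1.0 + 2.0 * padding) * sigmoid(x) - padding

def shrunken_rgb_activation(x, eps=0.001):
    """The opposite experiment: squeeze the output into [eps, 1 - eps],
    so absolute black and white are never exactly expressible."""
    return (1.0 - 2.0 * eps) * sigmoid(x) + eps
```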

Thanks!

@jonbarron
Contributor

Hey, good questions.

The only reason the sigmoid "padding" exists is that it helped optimization in some corner cases, like when the learning rate is very large. Sigmoids saturate for very large or very small inputs, which causes the gradient to go to zero, which in turn can cause optimization to catastrophically fail. Padding means the model can emit 0 or 1 without saturating the sigmoid and thereby killing the gradients.
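To see this numerically, here is a rough JAX sketch (the padding constant is illustrative, not necessarily the exact value used here):

```python
import jax
from jax.nn import sigmoid
from jax.scipy.special import logit

padding = 0.001  # illustrative

def padded(x):
    return (1.0 + 2.0 * padding) * sigmoid(x) - padding

# Plain sigmoid: emitting rgb ~= 0 needs a very negative pre-activation,
# where the gradient has essentially vanished.
x = -15.0
print(sigmoid(x), jax.grad(sigmoid)(x))      # output ~3e-7, gradient ~3e-7

# Padded sigmoid: rgb == 0 is hit at a finite pre-activation, where the
# gradient is still ~1e-3, orders of magnitude larger.
x0 = logit(padding / (1.0 + 2.0 * padding))  # solves padded(x0) == 0
print(padded(x0), jax.grad(padded)(x0))      # output ~0, gradient ~1e-3
```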

Shrinking the range of the sigmoid is probably going to cause problems in some circumstances, because it means that absolute white and absolute black will both be inexpressible for the model. I'd expect this to drive rgb values to the tails of this sigmoid, which will cause dead gradients.

It makes sense to me that playing with this parameter will affect how "clean" the acc images look. I'm personally of the opinion that performance on the background of those Blender objects isn't super meaningful: when the background is a white void, it's equally "correct" to say that distant regions are empty (in which case NeRF's rendering model will fill them in with white if the flags are set accordingly) or to say that distant regions are occupied with dense white stuff. Both answers will produce equivalently good test-set rendering accuracy. If the model can't easily express rgb=1 (which is the case for a non-padded sigmoid) then it will say that the background is empty, and if it can easily express rgb=1 then it may or may not fill in the background. Whether that is a good or bad thing depends on the application.
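To make the "equally correct" point concrete, here is a toy compositor for a single ray, written the way NeRF-style codebases typically do it (the white-background handling is a sketch, not code from this repo):

```python
import jax.numpy as jnp

def composite(rgb, density, deltas, white_bkgd=True):
    """Standard NeRF alpha compositing along one ray.
    rgb: [S, 3] sample colors, density: [S], deltas: [S] sample spacings."""
    alpha = 1.0 - jnp.exp(-density * deltas)
    trans = jnp.concatenate([jnp.ones(1), jnp.cumprod(1.0 - alpha[:-1] + 1e-10)])
    weights = alpha * trans
    acc = weights.sum()
    comp_rgb = (weights[:, None] * rgb).sum(axis=0)
    if white_bkgd:
        comp_rgb = comp_rgb + (1.0 - acc)  # fill leftover transmittance with white
    return comp_rgb, acc

S = 8
deltas = jnp.full((S,), 0.1)
white = jnp.ones((S, 3))

# "The void is empty": acc ~ 0 and the background flag fills in white.
rgb_a, acc_a = composite(white, jnp.zeros(S), deltas)
# "The void is dense white stuff": acc ~ 1, but the rendered pixel is the same.
rgb_b, acc_b = composite(white, jnp.full((S,), 100.0), deltas)
print(rgb_a, acc_a)  # ~[1, 1, 1], acc ~ 0
print(rgb_b, acc_b)  # ~[1, 1, 1], acc ~ 1
```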

@JulianKnodt
Author

Thanks for the thorough answer!

It makes sense that a shrunken sigmoid saturates and causes dead gradients for color. My main thinking, though, is that padding allows an equivalence between (alpha = 0, rgb != 0) and (alpha != 0, rgb = 0), whereas the normal sigmoid never reaches 0 except in the limit, which guides the model toward inferring alpha = 0. I'm experimenting with a shrunken sigmoid on the D-NeRF dataset and have found it to lead to better results than a widened one, possibly because the widened one may learn some view-dependent components for fully black space in some views. I suspect that for novel views alone, either one works equally well.
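Spelling out that equivalence over a black background (again just an illustrative sketch):

```python
from jax.nn import sigmoid

padding = 0.001  # illustrative

def padded(x):
    return (1.0 + 2.0 * padding) * sigmoid(x) - padding

# Over a black background the composited pixel is acc * rgb, with nothing added.
# Two ways to render a black pixel:
pixel_empty = 0.0 * padded(0.0)  # alpha = 0: empty space, the rgb value is irrelevant
pixel_solid = 1.0 * 0.0          # alpha = 1 with rgb exactly 0, reachable only with padding
print(pixel_empty, pixel_solid)  # both 0.0: the two explanations composite identically

# A plain sigmoid never reaches 0 at any finite input, so the "solid black"
# explanation is never exact, nudging the model toward alpha = 0 instead:
print(1.0 * sigmoid(-15.0))      # ~3e-7, close to but not exactly 0
```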
