Reproduce the results question + linear warmup cosine annealing #158
Hi @gassercariad, thanks for your interest in our work. If the training result is very bad with a newer PyTorch version, please check whether the default interpolation mode has changed. This may affect training.
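One concrete version difference behind this advice: Pillow removed the Image.ANTIALIAS constant in version 10 (the Resampling enum introduced in Pillow 9.1 is its successor). A minimal compatibility shim, sketched here with stand-in namespace objects so the selection logic itself can be exercised without Pillow installed:

```python
import types

def pick_interp(image_module):
    """Return the antialias resample filter available in this Pillow version."""
    resampling = getattr(image_module, "Resampling", None)
    if resampling is not None:      # Pillow >= 9.1: Image.Resampling.LANCZOS
        return resampling.LANCZOS
    return image_module.ANTIALIAS   # older Pillow (removed in Pillow 10)

# Stand-in "modules" so the shim can run without Pillow; in real code you
# would pass PIL.Image here, where both constants have the same value.
new_pil = types.SimpleNamespace(Resampling=types.SimpleNamespace(LANCZOS=1))
old_pil = types.SimpleNamespace(ANTIALIAS=1)
print(pick_interp(new_pil), pick_interp(old_pil))  # 1 1
```

In a real data loader you would call `pick_interp(Image)` once and reuse the result, so the same resize filter is used regardless of the installed Pillow version.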
@noahzn Thanks for your answer! I tried training without changing anything in your repo, first with the default manydepth environment [torch 1.7] and then with a newer torch [1.13]. I did not change anything [except using .png for KITTI], but the results are way off. Is there any suspect for that performance? I just pulled the repo again, changed nothing, and the results are as follows: abs_rel | sq_rel | rmse | rmse_log | a1 | a2 | a3. And it seems that it still learns somehow.
@gassercariad Did you load the pre-trained weights (pre-trained on ImageNet)?
Thanks for the quick answer! |
I followed up on this. I think the "weights init" only initializes the posenet, so I have to get the ImageNet-pretrained weights and load them manually, right?
Please download ImageNet-pretrained weights and add this |
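The usual pattern for loading downloaded pretrained weights into an encoder is to filter the checkpoint to the keys the model actually has before calling load_state_dict. A sketch with plain dicts standing in for state_dicts (the key names here are illustrative, not the repository's actual layer names):

```python
# Plain dicts stand in for torch state_dicts so the filtering logic is visible.
model_state = {"conv1.weight": None, "bn1.weight": None, "fc.weight": None}
checkpoint  = {"conv1.weight": 1, "bn1.weight": 2, "extra.weight": 3}

# Keep only checkpoint entries whose names exist in the model.
filtered = {k: v for k, v in checkpoint.items() if k in model_state}
model_state.update(filtered)   # then model.load_state_dict(model_state) in torch
print(sorted(filtered))        # ['bn1.weight', 'conv1.weight']
```

With torch, the same filtering guards against shape or naming mismatches between the ImageNet checkpoint and the depth encoder.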
Hello!
I believe, from other issues as well as from my own experience training your code with newer torch versions, that the results are really worse, even though nothing is changed except these things in the data loader:
(1) self.interp = Image.ANTIALIAS
(2) transforms.ColorJitter.get_params(
self.brightness, self.contrast, self.saturation, self.hue)
(3) [in trainer.py] next(self.val_iter) instead of self.val_iter.next()
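For context on fix (3): newer PyTorch versions dropped the non-standard .next() method from the DataLoader iterator (it was never part of the Python iterator protocol), so the builtin next() must be used. Plain iterators behave identically, so the change can be sketched without torch:

```python
# A two-element list iterator stands in for iter(self.val_loader).
val_iter = iter([{"batch": 0}, {"batch": 1}])
inputs = next(val_iter)            # portable replacement for val_iter.next()
print(inputs)                      # {'batch': 0}

# The usual reset-on-exhaustion pattern from such training loops:
try:
    while True:
        inputs = next(val_iter)
except StopIteration:
    val_iter = iter([{"batch": 0}, {"batch": 1}])   # re-create the iterator
    inputs = next(val_iter)
print(inputs)                      # {'batch': 0}
```

Since next() works on old and new PyTorch alike, this change alone should not alter training results; it only restores compatibility.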
I really don't think all of these things have a real effect, but I am curious about the actual reason why the algorithm (ManyDepth as well) behaves much worse with newer torch versions. Do you have a suggestion or explanation for that?
My second question is about the effect of linear warmup cosine annealing: did you test its effectiveness, and is there a specific reason you decided to use it? Also, what is the motivation for splitting into two different optimizers for pose and depth, instead of using just one optimizer (like the self-supervised predecessors)?
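For reference, the schedule being asked about can be written as a single function: a linear ramp up to the base learning rate, followed by a half-cosine decay to a floor. The warmup length and learning-rate values below are illustrative, not the repository's actual settings:

```python
import math

def lr_at(step, total_steps, warmup_steps=5, base_lr=1e-4, min_lr=1e-6):
    """Linear-warmup + cosine-annealing learning rate at a given step."""
    if step < warmup_steps:                            # linear ramp up
        return base_lr * (step + 1) / warmup_steps
    # cosine decay from base_lr down to min_lr over the remaining steps
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))

lrs = [lr_at(s, 20) for s in range(20)]
print(lrs[0] < lrs[4] and lrs[4] > lrs[19])  # True: LR rises, then decays
```

With two optimizers (pose and depth), each would typically get its own scheduler driven by this same function, which lets the two parameter groups use different base learning rates.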
I really look forward to your replies and insights, especially about the weird performance in the newer torch versions :)