Question: how best to set the epoch dependent parameters #9

Open
nikky4D opened this issue Dec 17, 2024 · 1 comment

Comments


nikky4D commented Dec 17, 2024

For a new max-epoch run, is there any recommendation for setting epoch-related values such as flat_epoch, no_aug_epoch, the policy epochs, mixup_epochs, and stop_epoch?

```yaml
## Our LR-Scheduler
flat_epoch: 29    # 4 + epoch // 2, e.g., 40 = 4 + 72 // 2
no_aug_epoch: 8   # 25 for the longer schedule

train_dataloader:
  dataset:
    transforms:
      policy:
        epoch: [4, 29, 50]   # [15, 76, 142] for the longer schedule
  collate_fn:
    mixup_epochs: [4, 29]    # [15, 76] for the longer schedule
    stop_epoch: 50
```
@ShihuaHuang95 (Owner)

Thank you very much for your interest and attention to our work. If you like what we’ve done, please consider giving this repo a star.

The issue you mentioned is actually quite simple. Assuming the total number of training epochs is 50:

- The first 4 epochs are used for data-augmentation warmup.
- Half of the epochs are used for dense one-to-one (o2o) matching, i.e., 50 / 2 = 25.
- The flat epoch is the sum of the warmup and dense-o2o epochs, i.e., 4 + 25 = 29.
- In D-FINE, 8 epochs are used for searching the EMA model, and we follow that setting; hence no_aug_epoch = 8 and stop_epoch = 50.
- The policy epochs are [4, 29, 50].
I hope this helps you better understand our work!
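The rules above can be collected into a small helper that derives every epoch-dependent setting from the total epoch budget. This is a hypothetical sketch for illustration (the function name and defaults are my own, not part of the repo), assuming a 4-epoch augmentation warmup and D-FINE's 8-epoch EMA-search window as described above:

```python
def epoch_schedule(total_epochs, warmup=4, no_aug=8):
    """Derive the epoch-dependent config values from the total budget,
    following the rules in the maintainer's comment (illustrative only)."""
    dense_o2o = total_epochs // 2        # half the budget for dense o2o
    flat_epoch = warmup + dense_o2o      # warmup + dense o2o, e.g. 4 + 25 = 29
    return {
        "flat_epoch": flat_epoch,
        "no_aug_epoch": no_aug,          # EMA-search window, per D-FINE
        "policy_epoch": [warmup, flat_epoch, total_epochs],
        "mixup_epochs": [warmup, flat_epoch],
        "stop_epoch": total_epochs,
    }

# Reproduces the 50-epoch config quoted in the question:
print(epoch_schedule(50))
# {'flat_epoch': 29, 'no_aug_epoch': 8, 'policy_epoch': [4, 29, 50],
#  'mixup_epochs': [4, 29], 'stop_epoch': 50}
```

Plugging in a different budget (e.g. `epoch_schedule(144)`) gives the corresponding values for a longer run in the same way.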
