Total time difference when training #65
Comments
What's your batch size? Try a batch size of 2-4 on the Colab GPU.
I used the same batch size for both: batch size 4.
Can you share a notebook that minimally reproduces the error?
The thing is, it's just about the total training time estimate: training on 300 images takes the same (or almost the same) time as training on 150 images with a batch size of 4. It is not displaying any error.
@alic-xc This is a weakness of the Mask R-CNN algorithm: training with Mask R-CNN is computationally expensive. If you want to train faster you will have to use a GPU with more capacity.

```python
train_maskrcnn.train_model(num_epochs = 300, augmentation = True, layers = "heads", path_trained_models = "mask_rcnn_models")
```

In the `train_model` function you set the parameter `layers` to `"heads"`. Note: training only the heads of the Mask R-CNN network may not reach validation losses as low as training all the layers.
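One way to see why halving the dataset may not halve the wall-clock time: in the Matterport Mask R-CNN implementation that PixelLib builds on, the number of steps per epoch is a configuration value (`STEPS_PER_EPOCH`) rather than something derived from the dataset size, so total time scales with epochs × steps per epoch. A minimal back-of-envelope sketch (the step count and per-step timing below are assumed illustrative numbers, not measurements from this issue):

```python
def estimated_hours(num_epochs, steps_per_epoch, sec_per_step):
    """Rough wall-clock estimate: time scales with epochs * steps,
    independent of dataset size when steps_per_epoch is a fixed
    configuration value."""
    return num_epochs * steps_per_epoch * sec_per_step / 3600.0

# Illustrative (assumed) numbers: 100 steps/epoch at 1.2 s/step.
full_run = estimated_hours(300, 100, 1.2)  # 300-image run
half_run = estimated_hours(300, 100, 1.2)  # 150-image run: same config, same time
print(full_run, half_run)
```

With the same epoch count and a fixed steps-per-epoch, the two estimates are identical, which matches the observation in this thread; reducing `num_epochs` (or the steps per epoch, where the library exposes it) is what actually shortens the run.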
Thank you @khanfarhan10 for your contributions.
Okay, I think I understand it better now.
Ah, now we know why training only the heads is sometimes required.
Hi guys,
Is there any time difference between training on 300 images vs. 150 images?
I tried training on 300 images (with 55 for testing) using epochs = 300. Google Colab terminated the run at epoch 267/300, and it took me over 7 hours to get that far. So I split the dataset in two, hoping it would cut the time by 50%, but from what I can see here, training is proceeding at the same pace as before.
That means there is no difference between them, so I am still looking at over 7 hours to train on 150 images.
Is there any recommendation, or am I missing something?
I would appreciate any help.