I noticed that during training the process consumes a lot of CPU memory. For example, with 8 workers and a batch size of 45, memory consumption exceeds 50 GB and slowly climbs to around 60 GB, at which point my system shuts down.
Has anyone noticed this before? Which part of the training pipeline is memory intensive? Is it related to some inefficient mmcv operation?
I start the training with this command:
CUDA_VISIBLE_DEVICES=0 python tools/train.py --config configs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py --work-dir /Fast-BEV/temp
Could the problem be that I'm not using Slurm?
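To narrow down whether the growth happens in the main process or in the dataloader workers, I've been watching the resident memory with a small psutil script. This is just a generic monitoring sketch (assumes psutil is installed and you pass the training process PID), not anything from the Fast-BEV code:

import sys
import time

import psutil


def report(pid: int, interval: float = 10.0) -> None:
    """Print RSS of the training process and its child processes every `interval` seconds."""
    proc = psutil.Process(pid)
    while True:
        # Dataloader workers appear as child processes of the training process.
        children = proc.children(recursive=True)
        main_rss = proc.memory_info().rss
        child_rss = sum(c.memory_info().rss for c in children)
        print(
            f"main: {main_rss / 2**30:.1f} GiB | "
            f"{len(children)} workers: {child_rss / 2**30:.1f} GiB"
        )
        time.sleep(interval)


if __name__ == "__main__":
    report(int(sys.argv[1]))

With 8 workers, most of the usage shows up in the worker processes, which makes me suspect each worker is holding its own copy of the dataset annotations.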