updates
okankop committed Jul 1, 2020
1 parent 7a7e286 commit f9251e1
Showing 4 changed files with 11 additions and 8 deletions.
7 changes: 5 additions & 2 deletions README.md
@@ -57,8 +57,6 @@ Pretrained models can be downloaded from [here](https://www.dropbox.com/sh/16jv2
All materials (annotations and pretrained models) are also available in Baiduyun Disk:
[here](https://pan.baidu.com/s/1yaOYqzcEx96z9gAkOhMnvQ) with password 95mm

- ***NOTE:*** With some extra tricks, YOWO achieves 87.5% and 76.7% frame_mAP for UCF101-24 and J-HMDB-21 datasets, respectively.

## Running the code

* Example training bash scripts are provided in 'run_ucf101-24.sh' and 'run_jhmdb-21.sh'.
@@ -99,6 +97,11 @@ python evaluation/Object-Detection-Metrics/pascalvoc.py --gtfolder PATH-TO-GROUN
* For video_mAP, 'run_video_mAP_ucf.sh' and 'run_video_mAP_jhmdb.sh' bash scripts can be used.


+ ***UPDATES:***
+ * We found a bug in our evaluation when calculating frame-mAP on the UCF101-24 dataset (video-mAP results are the same as before). We have fixed it, but the frame-mAP results for UCF101-24 are now lower than before (when LFB is not used).
+ * We now freeze both the 2D-CNN and 3D-CNN backbones for all models when training on the J-HMDB-21 dataset. This improves performance considerably, especially for models with resource-efficient 3D-CNN backbones (see the freezing sketch below).
+ * We have implemented the [Long-Term Feature Bank (LFB)](https://arxiv.org/pdf/1812.05038.pdf); details can be found in the paper. It brings a considerable improvement on the UCF101-24 dataset and a marginal improvement on J-HMDB-21 (a toy sketch follows below). YOWO+LFB achieves 87.3% and 75.7% frame_mAP on UCF101-24 and J-HMDB-21, respectively.
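A minimal sketch of the backbone-freezing trick, assuming the model exposes its two backbones as `backbone_2d` and `backbone_3d` (hypothetical attribute names; the real attributes in this repo may differ):

```python
import torch.nn as nn

def freeze_backbones(model: nn.Module) -> None:
    # Freeze both backbones so only the fusion and detection layers
    # receive gradient updates during J-HMDB-21 training.
    for module in (model.backbone_2d, model.backbone_3d):
        for param in module.parameters():
            param.requires_grad = False

# Hand only the still-trainable parameters to the optimizer, e.g.:
# optimizer = torch.optim.SGD(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-4, momentum=0.9)
```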

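A toy sketch of the feature-bank idea: short-term clip features attend over features cached from a long temporal window around the clip. This is an illustration only, not the paper's exact feature bank operator, and the shapes and names are assumptions:

```python
import torch
import torch.nn.functional as F

def lfb_read(current: torch.Tensor, bank: torch.Tensor) -> torch.Tensor:
    # `current`: (N, C) short-term features for the clip being classified;
    # `bank`: (M, C) features cached from a long window around the clip.
    scale = current.shape[-1] ** 0.5
    attn = F.softmax(current @ bank.t() / scale, dim=-1)  # (N, M) weights
    long_term = attn @ bank                               # (N, C) context
    return current + long_term  # fuse long-term context into clip features
```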
### Citation
If you use this code or pre-trained models, please cite the following:

4 changes: 2 additions & 2 deletions evaluation/Object-Detection-Metrics/pascalvoc.py
@@ -207,15 +207,15 @@ def getBoundingBoxes(directory,
'-gtformat',
dest='gtFormat',
metavar='',
- default='xywh',
+ default='xyrb',
help='format of the coordinates of the ground truth bounding boxes: '
'(\'xywh\': <left> <top> <width> <height>)'
' or (\'xyrb\': <left> <top> <right> <bottom>)')
parser.add_argument(
'-detformat',
dest='detFormat',
metavar='',
- default='xywh',
+ default='xyrb',
help='format of the coordinates of the detected bounding boxes '
'(\'xywh\': <left> <top> <width> <height>) '
'or (\'xyrb\': <left> <top> <right> <bottom>)')
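For reference, the two formats named in the help text differ only in whether a box's extent is stored as a size or as the opposite corner; a small hypothetical helper pair:

```python
def xywh_to_xyrb(left, top, width, height):
    # 'xywh' stores the box size; 'xyrb' stores the bottom-right corner.
    return left, top, left + width, top + height

def xyrb_to_xywh(left, top, right, bottom):
    return left, top, right - left, bottom - top
```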
6 changes: 3 additions & 3 deletions train.py
@@ -103,7 +103,7 @@
print("===================================================================")
print('loading checkpoint {}'.format(opt.resume_path))
checkpoint = torch.load(opt.resume_path)
- opt.begin_epoch = checkpoint['epoch']
+ opt.begin_epoch = checkpoint['epoch'] + 1
best_fscore = checkpoint['fscore']
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
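The `+ 1` matters because the checkpoint stores the last epoch that completed, so resuming from `checkpoint['epoch']` would re-run it. A minimal save/resume sketch mirroring the fields used above (the file name is a placeholder):

```python
import torch

# When saving at the end of epoch `epoch` (the last *completed* epoch):
torch.save({'epoch': epoch,
            'fscore': best_fscore,
            'state_dict': model.state_dict(),
            'optimizer': optimizer.state_dict()}, 'checkpoint.pth')

# When resuming, continue from the *next* epoch:
checkpoint = torch.load('checkpoint.pth')
begin_epoch = checkpoint['epoch'] + 1
```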
@@ -241,14 +241,14 @@ def truths_length(truths):
detection_path = os.path.join('ucf_detections', 'detections_'+str(epoch), frame_idx[i])
current_dir = os.path.join('ucf_detections', 'detections_'+str(epoch))
if not os.path.exists('ucf_detections'):
- os.mkdir(current_dir)
+ os.mkdir('ucf_detections')
if not os.path.exists(current_dir):
os.mkdir(current_dir)
else:
detection_path = os.path.join('jhmdb_detections', 'detections_'+str(epoch), frame_idx[i])
current_dir = os.path.join('jhmdb_detections', 'detections_'+str(epoch))
if not os.path.exists('jhmdb_detections'):
- os.mkdir(current_dir)
+ os.mkdir('jhmdb_detections')
if not os.path.exists(current_dir):
os.mkdir(current_dir)
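The fix creates the parent directory before the per-epoch subdirectory. An equivalent, more compact form (a behavior-preserving suggestion, not what the commit itself does) would be:

```python
import os

# makedirs creates intermediate directories and, with exist_ok=True,
# tolerates existing ones, replacing both existence checks above.
os.makedirs(current_dir, exist_ok=True)
```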

2 changes: 1 addition & 1 deletion video_mAP.py
@@ -44,7 +44,7 @@


# Create model
- model = YOWO(opt)
+ model = YOWO(opt)
(the two lines appear identical; the change looks whitespace-only)
model = model.cuda()
model = nn.DataParallel(model, device_ids=None) # in multi-gpu case
print(model)
