
VizWiz Captions Evaluation

Update 2020-02-27:

Thank you for your interest in VizWiz Captions. Active development and maintenance responsibilities for this project have been transferred. Please visit this repository to obtain the latest code and to direct any queries about the project.


Code for the VizWiz API and evaluation of generated captions.

Requirements

  • Python 3
  • Java 1.8.0 (for caption evaluation)

Files

./

  • demo_vizwiz_caption_evaluation.ipynb (tutorial notebook)

./vizwiz_api

  • vizwiz.py: This file contains the VizWiz API class, which can be used to load VizWiz dataset JSON files and analyze them (see the sketch below).
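
A minimal loading sketch, assuming the class is named VizWiz and exposes MS COCO-style accessors (getImgIds, getAnnIds, loadAnns). Those names come from the MS COCO API this code is adapted from and are not confirmed here, so consult the demo notebook for the exact interface.

    from vizwiz_api.vizwiz import VizWiz

    # Load the validation annotations shipped with this repository.
    vizwiz = VizWiz('annotation/VizWiz_Captions_v1_val.json')

    # COCO-style accessors (assumed names): list images, then fetch the
    # caption annotations attached to the first image.
    img_ids = vizwiz.getImgIds()
    ann_ids = vizwiz.getAnnIds(imgIds=img_ids[:1])
    captions = [ann['caption'] for ann in vizwiz.loadAnns(ann_ids)]
    print(captions)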

./annotation

  • VizWiz_Captions_v1_val.json (VizWiz Captions v1 validation set)
  • The dataset shares the same data format as MS COCO captions (see the sketch below).
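
For reference, a minimal sketch of the COCO-style captions format; the field names follow the MS COCO captions annotation format, while the concrete values below are purely illustrative.

    # Illustrative COCO-style annotation structure (values are made up):
    annotations = {
        "info": {"description": "VizWiz Captions", "version": "1.0"},
        "images": [
            {"id": 1, "file_name": "example_image.jpg", "height": 480, "width": 640}
        ],
        "annotations": [
            {"id": 101, "image_id": 1, "caption": "A hand holding a can of soup."}
        ]
    }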

./results

  • VizWiz_Fake_Captions.json (an example of fake results for running the demo)
  • Results share the same data format as MS COCO results (see the sketch below).
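
A COCO-style captioning results file is a JSON list with one image_id/caption pair per generated caption; a minimal sketch with illustrative values only.

    # Illustrative COCO-style results structure:
    results = [
        {"image_id": 1, "caption": "a generated caption for the first image"},
        {"image_id": 2, "caption": "a generated caption for the second image"}
    ]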

./vizwiz_eval_cap: The folder where all caption evaluation code is stored.

  • evals.py: This file includes the VizWizEvalCap class that can be used to evaluate results on VizWiz (see the sketch after this list).
  • tokenizer: Python wrapper of the Stanford CoreNLP PTBTokenizer
  • bleu: BLEU evaluation code
  • rouge: ROUGE-L evaluation code
  • cider: CIDEr evaluation code
  • spice: SPICE evaluation code
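
A minimal evaluation sketch, assuming the workflow mirrors the MS COCO caption evaluation API this code is adapted from; the loadRes call and the structure of the .eval dictionary are assumptions, so check demo_vizwiz_caption_evaluation.ipynb for the exact interface.

    from vizwiz_api.vizwiz import VizWiz
    from vizwiz_eval_cap.evals import VizWizEvalCap

    # Ground-truth annotations and the generated captions to be scored.
    vizwiz = VizWiz('annotation/VizWiz_Captions_v1_val.json')
    vizwiz_res = vizwiz.loadRes('results/VizWiz_Fake_Captions.json')  # assumed COCO-style loadRes

    # Run all metrics (BLEU, ROUGE-L, CIDEr, SPICE).
    evaluator = VizWizEvalCap(vizwiz, vizwiz_res)
    evaluator.evaluate()

    # Assumed COCO-style result dictionary: metric name -> overall score.
    for metric, score in evaluator.eval.items():
        print(f'{metric}: {score:.3f}')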

Setup

  • The primary VizWiz API is standalone.
  • For caption evaluation, you will first need to download the Stanford CoreNLP 3.6.0 code and models for use by SPICE. To do this, run: ./get_stanford_models.sh
    • To run shell scripts in Windows, you can set up Windows Subsystem for Linux (WSL).
    • The command for Windows will then be bash get_stanford_models.sh
  • Note: SPICE will try to create a cache of parsed sentences in ./vizwiz_eval_cap/spice/cache/. This dramatically speeds up repeated evaluations. The cache directory can be moved by setting 'CACHE_DIR' in ./vizwiz_eval_cap/spice. In the same file, caching can be turned off by removing the '-cache' argument to 'spice_cmd'.

References

Developers

Acknowledgement

This work is closely adapted from the MS COCO API and the MS COCO Caption Evaluation API.

Original Developers

  • Xinlei Chen (CMU)
  • Hao Fang (University of Washington)
  • Tsung-Yi Lin (Cornell)
  • Ramakrishna Vedantam (Virginia Tech)

Original Acknowledgements

  • David Chiang (University of Notre Dame)
  • Michael Denkowski (CMU)
  • Alexander Rush (Harvard University)
