# SynthText

Code for generating synthetic text images as described in "Synthetic Data for Text Localisation in Natural Images", Ankush Gupta, Andrea Vedaldi, Andrew Zisserman, CVPR 2016.
The library is written in Python. The main dependencies are:
pygame, opencv (cv2), PIL (Image), numpy, matplotlib, h5py, scipy
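If you want a quick sanity check that these dependencies are available, a short import test like the following will do (this snippet is illustrative only and not part of the library):

```python
# Quick sanity check that the main dependencies are importable.
# Illustrative only; not part of the SynthText code itself.
import pygame
import cv2
import numpy
import matplotlib
import h5py
import scipy
from PIL import Image  # importing confirms PIL/Pillow is available

print("pygame :", pygame.version.ver)
print("opencv :", cv2.__version__)
print("numpy  :", numpy.__version__)
print("h5py   :", h5py.__version__)
print("scipy  :", scipy.__version__)
```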
### Generating samples

`python gen.py --viz`
This will download a data file (~23M) hosted on Google Drive to the `data` directory. This data file includes:
- `dset.h5`: a sample h5 file containing a set of 5 images along with their depth and segmentation information. Note, this is just given as an example; you are encouraged to add more images (along with their depth and segmentation information) to this database for your own use. See the inspection sketch after this list.
- `data/fonts`: three sample fonts (add more fonts to this folder and then update `fonts/fontlist.txt` with their paths).
- `data/newsgroup`: text source (from the News Group dataset). This can be substituted with any text file; look inside `text_utils.py` to see how the text in this file is used by the renderer.
- `data/models/colors_new.cp`: color model (foreground/background text color model), learnt from the IIIT-5K word dataset.
- `data/models`: other cPickle files (`char_freq`: frequency of each character in the text dataset; `font_px2pt.cp`: conversion from pt to px for various fonts). If you add a new font, make sure the corresponding model is present in `font_px2pt.cp`; if not, you will need to add it or modify the code.
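To get a feel for the input database format, below is a minimal inspection sketch using h5py; the group names `image`, `depth` and `seg` are assumed to match the provided sample file, so adjust them if your own database is organised differently:

```python
# Minimal sketch for inspecting the sample input database (data/dset.h5).
# Assumption: per-image datasets are stored under the 'image', 'depth' and
# 'seg' groups, as in the provided sample file; adjust if yours differs.
import h5py

with h5py.File('data/dset.h5', 'r') as db:
    print('top-level groups:', list(db.keys()))
    for name in db['image'].keys():
        img   = db['image'][name][...]   # RGB scene image
        depth = db['depth'][name][...]   # per-pixel depth estimate
        seg   = db['seg'][name][...]     # region segmentation labels
        print(name, 'image', img.shape, 'depth', depth.shape, 'seg', seg.shape)
```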
This script will generate random scene-text image samples and store them in an h5 file in `results/SynthText.h5`. If the `--viz` option is specified, the generated output will be visualized as the script runs; omit the `--viz` option to turn off the visualization. If you want to visualize the results stored in `results/SynthText.h5` later, run:
`python visualize_results.py`
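If you would rather inspect the generated samples programmatically, below is a minimal sketch; it assumes the layout read by `visualize_results.py`, i.e. rendered images stored under a `data` group with the character/word bounding boxes and text kept as the `charBB`, `wordBB` and `txt` attributes:

```python
# Minimal sketch for reading the generated samples (results/SynthText.h5).
# Assumption: rendered images are stored under the 'data' group, with the
# character/word boxes and text kept as the attributes 'charBB', 'wordBB'
# and 'txt' of each dataset (the layout read by visualize_results.py).
import h5py

with h5py.File('results/SynthText.h5', 'r') as db:
    names = list(db['data'].keys())
    print('number of generated images:', len(names))
    for name in names[:3]:
        dset = db['data'][name]
        img    = dset[...]             # rendered RGB image
        charBB = dset.attrs['charBB']  # 2 x 4 x n_chars box corner points
        wordBB = dset.attrs['wordBB']  # 2 x 4 x n_words box corner points
        txt    = dset.attrs['txt']     # text strings placed in the image
        print(name, img.shape, charBB.shape, wordBB.shape, len(txt))
```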
A dataset with approximately 800,000 synthetic scene-text images generated with this code can be found here.
Please refer to the paper for more information, or contact me (email address in the paper).