A script to automate (pre-)processing of the XML corpus for the course "Praxis der DH". Its features, illustrated by the sketch after this list, include:
- whitespace fixes
- removal of tags
- tokenization (saved as a reference TXT file or written back into the XML)
- extraction of the document body as plain text
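To give an idea of what these steps involve, here is a minimal, illustrative sketch using lxml and nltk directly. It is not the script's actual code: the file name sample.xml and the helper extract_body_text are placeholders, and it assumes the dependencies and German tokenizer data described below are already installed.

    import re
    from lxml import etree
    from nltk.tokenize import word_tokenize

    def extract_body_text(path):
        # parse the XML and join the text content of all elements,
        # which effectively removes the tags
        tree = etree.parse(path)
        raw = " ".join(tree.getroot().itertext())
        # whitespace fix: collapse runs of whitespace into single spaces
        return re.sub(r"\s+", " ", raw).strip()

    text = extract_body_text("sample.xml")           # placeholder file name
    tokens = word_tokenize(text, language="german")  # German tokenization
    print(tokens[:20])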
Download this script, then install its dependencies, lxml and nltk. You can install both libraries via pip:
pip install lxml
pip install nltk
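Or, equivalently, install both with a single command:
pip install lxml nltk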
Afterwards, download the German language data for nltk into a folder named 'nltk_data' in the directory where tokenizer.py resides.
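One way to do this is NLTK's built-in downloader pointed at a local directory. Which exact resource the script expects is an assumption here; the Punkt tokenizer models cover German (in newer NLTK releases the resource is named punkt_tab):

    import nltk
    # download the Punkt tokenizer models (German included) into ./nltk_data
    nltk.download('punkt', download_dir='nltk_data')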
You can start it from the console like this:
python3 tokenizer.py
View its usage notes by passing the --help flag:
python3 tokenizer.py --help