The data scraping scripts were written by the lovely Georgi D. Sotirov.
All his scripts do is parse through the Wikipedia data that my own script scrapes. Since the project does not even require the data to be real, I hope this does not count as any sort of academic dishonesty.
So as not to redistribute code that someone else wrote, I have added the files to .gitignore; instead, the Makefile clones the repo, runs the scripts, and then deletes the clone while generating a results_in.csv (with some differences due to text encoding and OS differences).
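For reference, here is a minimal sketch of what such a Makefile target might look like; the repository URL and script name below are placeholders, not the actual ones:

```make
# Placeholder values -- substitute the real repository and entry script.
SCRAPER_REPO := https://github.com/example/wiki-scraper.git
SCRAPER_DIR  := wiki-scraper

# Clone the scripts, run them to produce the CSV, then remove the clone
# so none of the borrowed code is ever committed. Recipe lines must be
# indented with tabs.
results_in.csv:
	git clone $(SCRAPER_REPO) $(SCRAPER_DIR)
	cd $(SCRAPER_DIR) && python3 parse.py > ../results_in.csv
	rm -rf $(SCRAPER_DIR)
```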