This repo contains the Java code for a multi-threaded web crawler that stores the scraped URLs in an SQLite database. It runs multiple spiders to crawl URLs concurrently and uses Jsoup to parse the fetched pages.
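Conceptually, each spider is a worker that fetches a page with Jsoup and collects its outgoing links for the crawl queue. Below is a minimal sketch of one such worker; the class and method names are illustrative, not the repo's actual ones:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical spider worker: fetch one URL, parse it, collect its links.
public class SpiderSketch implements Runnable {
    private final String url;

    public SpiderSketch(String url) {
        this.url = url;
    }

    @Override
    public void run() {
        try {
            // Fetch and parse the page.
            Document doc = Jsoup.connect(url).get();
            List<String> links = new ArrayList<>();
            for (Element a : doc.select("a[href]")) {
                // Resolve relative hrefs to absolute URLs.
                links.add(a.absUrl("href"));
            }
            // The real crawler would queue these links and persist them to SQLite.
            System.out.println(url + " -> " + links.size() + " links");
        } catch (IOException e) {
            System.err.println("Failed to fetch " + url + ": " + e.getMessage());
        }
    }
}
```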
- Download the code as a .zip or .tar file and unzip it into your IDE workspace (I used Eclipse).
- Add the Jsoup and SQLite JDBC .jar files to your project classpath so the corresponding libraries can be imported (a quick sanity check appears after this list).
- Navigate to webCrawler/src/webCrawler/MainTest.java and run it to start the web crawling process.
- If you don't have SQLite installed, follow the installation guide for your OS. Open the crawler.db file to browse the scraped URLs, which are stored in the crawler table; its three columns (id, URL, PURL) describe each scraped record (a query sketch appears after this list).
- I am working on an application that takes a user-supplied seed URL and a maximum depth to parse. Once the crawler bots finish parsing, the results will be available as a downloadable CSV so users can use the dataset for their own purposes (a rough export sketch closes this README).
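To verify the classpath setup from the second step, a program like the one below should compile and run once both jars are added; the class name is just illustrative:

```java
import org.jsoup.Jsoup;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// If this compiles and runs, the Jsoup and SQLite JDBC jars are on the classpath.
public class ClasspathCheck {
    public static void main(String[] args) throws SQLException {
        // Parse a small HTML snippet with Jsoup.
        System.out.println(Jsoup.parse("<p>hello</p>").text());
        // Open (and create, if absent) a local SQLite database file;
        // recent sqlite-jdbc drivers register themselves automatically.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:crawler.db")) {
            System.out.println("SQLite connection OK: " + !conn.isClosed());
        }
    }
}
```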
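You can also inspect the results programmatically instead of clicking crawler.db. A minimal sketch, assuming crawler.db sits in the project root and the crawler table has the three columns named above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Print a few rows from the crawler table to verify the crawl worked.
public class ReadCrawlerTable {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:crawler.db");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, URL, PURL FROM crawler LIMIT 10")) {
            while (rs.next()) {
                System.out.printf("%d | %s | %s%n",
                        rs.getInt("id"), rs.getString("URL"), rs.getString("PURL"));
            }
        }
    }
}
```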
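The planned CSV export could look roughly like the sketch below. This is not the repo's implementation, only an illustration assuming the same crawler table and a naive quoting scheme:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Dump the crawler table to a CSV file for download.
public class ExportCsv {
    public static void main(String[] args) throws IOException, SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:crawler.db");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, URL, PURL FROM crawler");
             PrintWriter out = new PrintWriter(
                     Files.newBufferedWriter(Paths.get("crawler.csv")))) {
            out.println("id,URL,PURL");
            while (rs.next()) {
                // Quote the URL fields so commas inside them don't break columns.
                out.printf("%d,\"%s\",\"%s\"%n",
                        rs.getInt("id"), rs.getString("URL"), rs.getString("PURL"));
            }
        }
    }
}
```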