
Sitemap Crawler for SEO (Search Engine Optimization)

A Go-based sitemap crawler that automates the extraction of SEO data from websites. It is designed to efficiently retrieve URLs, page titles, H1 tags, and meta descriptions from the sitemaps of specified websites.

Features

  • Automatic Sitemap Discovery: Discovers and parses sitemaps from a given base URL.
  • SEO Data Extraction: Retrieves crucial SEO metrics such as page titles, H1 tags, and meta descriptions.
  • Concurrency Support: Manages multiple URLs concurrently to speed up the crawling process.
  • robots.txt Compliance: Adheres to the directives in a site's robots.txt file to ensure compliant crawling.
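To illustrate the sitemap-discovery step, the core of parsing a standard sitemap.xml in Go can be sketched with the standard library's encoding/xml package. This is a minimal sketch, not the repository's actual code; the `urlSet` type and `parseSitemap` function are hypothetical names for illustration.

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// urlSet mirrors the <urlset> element of a standard sitemap.xml,
// keeping only the <loc> of each <url> entry.
type urlSet struct {
	URLs []struct {
		Loc string `xml:"loc"`
	} `xml:"url"`
}

// parseSitemap extracts the page URLs (<loc> values) from raw sitemap XML.
func parseSitemap(data []byte) ([]string, error) {
	var s urlSet
	if err := xml.Unmarshal(data, &s); err != nil {
		return nil, err
	}
	locs := make([]string, 0, len(s.URLs))
	for _, u := range s.URLs {
		locs = append(locs, u.Loc)
	}
	return locs, nil
}

func main() {
	sample := []byte(`<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>`)
	urls, err := parseSitemap(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(urls)
}
```

A real crawler would fetch the XML over HTTP and also handle sitemap index files (<sitemapindex>), which point to further sitemaps.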

Getting Started

Prerequisites

  • Go (Golang) installed on your machine.

Installation

Clone the repository to your local machine:

git clone https://github.com/Dev-29/sitemap-crawler.git
cd sitemap-crawler

Usage

Run the program with the following command:

go run main.go -baseurl "https://example.com/"
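For readers curious how the -baseurl flag and the concurrency feature could be wired together, here is a hedged sketch using Go's flag package and a worker pool of goroutines. The `crawl` helper and its `fetch` callback are illustrative stand-ins, not the repository's actual implementation.

```go
package main

import (
	"flag"
	"fmt"
	"sync"
)

// crawl processes URLs with a fixed number of concurrent workers.
// fetch is a stand-in for the real per-page work (extracting the
// title, H1 tags, and meta description).
func crawl(urls []string, workers int, fetch func(string) string) []string {
	jobs := make(chan string)
	results := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for u := range jobs {
				results <- fetch(u)
			}
		}()
	}
	go func() {
		for _, u := range urls {
			jobs <- u
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()
	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	baseURL := flag.String("baseurl", "", "base URL of the site to crawl")
	flag.Parse()
	fmt.Println("crawling:", *baseURL)
}
```

The worker-pool pattern bounds the number of simultaneous requests, which keeps the crawler fast without overwhelming the target site.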
