A Powerful Spider (Web Crawler) System in Python. TRY IT NOW!
- Write scripts in Python with a powerful API
- Python 2 & 3
- Powerful WebUI with script editor, task monitor, project manager and result viewer
- JavaScript pages supported!
- MySQL, MongoDB and SQLite as database backends
- Task priority, retry, periodic crawl, recrawl by age and more
- Distributed architecture
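
The "task priority, retry" features above can be pictured with a small, self-contained sketch. This is illustrative plain Python, not pyspider's actual scheduler code; the `TaskQueue` name and its methods are hypothetical:

```python
import heapq

class TaskQueue:
    """A minimal priority queue for crawl tasks with a retry budget."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps insertion order stable

    def put(self, url, priority=0, retries=3):
        # heapq is a min-heap, so negate priority: higher priority runs first
        heapq.heappush(self._heap, (-priority, self._counter, url, retries))
        self._counter += 1

    def get(self):
        _, _, url, retries = heapq.heappop(self._heap)
        return url, retries

    def retry(self, url, retries, priority=0):
        # A failed fetch goes back on the queue until its retries are used up.
        if retries > 0:
            self.put(url, priority=priority, retries=retries - 1)
            return True
        return False

queue = TaskQueue()
queue.put('http://example.com/low', priority=1)
queue.put('http://example.com/high', priority=9)
url, retries = queue.get()
print(url)  # the high-priority task is fetched first
```

In pyspider itself, these knobs are exposed as options on `self.crawl` rather than managed by hand.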
```python
from libs.base_handler import *

class Handler(BaseHandler):
    '''
    this is a sample handler
    '''
    @every(minutes=24 * 60, seconds=0)
    def on_start(self):
        self.crawl('http://scrapy.org/', callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        for each in response.doc('a[href^="http://"]').items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
```
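
The sample handler chains callbacks: `on_start` seeds the queue, `index_page` follows outgoing links, and `detail_page` returns the result to store. The flow can be traced with a tiny stand-in driver; `FakeResponse`, `PAGES` and `run` below are hypothetical illustrations, not pyspider's real fetcher or scheduler:

```python
from collections import deque

class FakeResponse:
    """A stub for the response object handed to each callback."""

    def __init__(self, url, title="", links=()):
        self.url = url
        self.title = title
        self.links = list(links)  # stands in for response.doc('a[href^="http://"]')

PAGES = {
    "http://scrapy.org/": FakeResponse(
        "http://scrapy.org/", "Scrapy", links=["http://example.com/"]),
    "http://example.com/": FakeResponse("http://example.com/", "Example Domain"),
}

def index_page(response):
    # Like the handler: follow every link, handing each to detail_page.
    return [(url, detail_page) for url in response.links]

def detail_page(response):
    # Like the handler: return the fields to store as a dict.
    return {"url": response.url, "title": response.title}

def run(start_url):
    # A breadth-first driver playing the role of pyspider's scheduler.
    results, queue = [], deque([(start_url, index_page)])
    while queue:
        url, callback = queue.popleft()
        out = callback(PAGES[url])
        if isinstance(out, list):   # new tasks to schedule
            queue.extend(out)
        elif out is not None:       # a finished result
            results.append(out)
    return results

results = run("http://scrapy.org/")
print(results)
```

In the real system, each `self.crawl` call creates a task that is fetched asynchronously and routed back to the named callback.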
- Python 2.6, 2.7, 3.3, 3.4
```
pip install --allow-all-external -r requirements.txt
./run.py
```

then visit http://localhost:5000/
If you are using Ubuntu, try:

```
apt-get install python python-dev python-distribute python-pip \
    libcurl4-openssl-dev libxml2-dev libxslt1-dev python-lxml
```

to install binary packages first.
- as a package
- run.py parameters
- sortable projects list #12
- PostgreSQL supported via SQLAlchemy (with the power of SQLAlchemy, pyspider could also support Oracle, SQL Server, etc.)
- benchmarking
- Python 3 support
- documentation
- pypi release version
- a visual scraping interface like portia
- local mode, loading scripts from file.
- edit script with local vim via WebDAV
- in-browser debugger like Werkzeug
- works as a framework (all components running in one process, no threads)
- shell mode like `scrapy shell`
- Use it
- Open issues, send PRs
- User Group
Licensed under the Apache License, Version 2.0