
Commit 4dcab14

Added to the README and fixed a packaging bug
1 parent d0aa280 commit 4dcab14

2 files changed: +21 -6


README.md

Lines changed: 20 additions & 5 deletions
@@ -3,8 +3,15 @@ scrapy-webdriver
 
 Scrape using Selenium webdriver.
 
+Installation
+=============
+
+For now, nothing's on pypi, but this should work:
+
+    pip install https://github.com/sosign/scrapy-webdriver/archive/master.zip
+
 Configuration
--------------
+=============
 
 Add something like this in your scrapy project settings:
 
@@ -17,11 +24,19 @@ Add something like this in your scrapy project settings:
     'scrapy_webdriver.middlewares.WebdriverSpiderMiddleware': 543,
 }
 
-WEBDRIVER_BROWSER = 'PhantomJS'
+WEBDRIVER_BROWSER = 'PhantomJS' # Or any other from selenium.webdriver
 
 Usage
------
+=====
+
+In order to have webdriver handle your downloads, use the provided
+class `scrapy_webdriver.http.WebdriverRequest` in place of the stock scrapy
+`Request`.
+
+Hacking
+=======
 
-In order to have webdriver handle your downloads, use the provided class
-`scrapy_webdriver.http.WebdriverRequest` in place of the stock scrapy `Request`.
+Pull requests much welcome. Just make sure the tests still pass, and add to
+them as necessary:
 
+    python setup.py test
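
The Usage paragraph added above is easier to follow with a concrete spider. A minimal sketch, assuming the Configuration settings shown earlier are active; the spider name, start URL, and the Scrapy base-class import are illustrative and not part of this commit:

    # Minimal usage sketch (illustrative, not taken from this commit).
    from scrapy.spider import BaseSpider  # Scrapy API of this era
    from scrapy_webdriver.http import WebdriverRequest


    class ExampleSpider(BaseSpider):
        name = 'example'

        def start_requests(self):
            # WebdriverRequest stands in for the stock scrapy Request, so the
            # page is fetched through the configured WEBDRIVER_BROWSER
            # (PhantomJS in the settings above) rather than Scrapy's plain
            # HTTP downloader.
            yield WebdriverRequest('http://example.com', callback=self.parse)

        def parse(self, response):
            # The response can then be parsed with the usual Scrapy tools.
            self.log('Fetched %s via webdriver' % response.url)
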

setup.py

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ def run_tests(self):
     maintainer_email=metadata.emails[0],
     url=metadata.url,
     description=metadata.description,
-    long_description=read('README.rst'),
+    long_description=read('README.md'),
     download_url=metadata.url,
     classifiers=[
         'Development Status :: 3 - Alpha',
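
The hunk above relies on a read() helper defined earlier in setup.py, outside this diff. A hypothetical sketch of such a helper, shown only for context; the real one may differ:

    # Hypothetical read() helper -- not part of this commit.
    import os

    def read(fname):
        # Return the text of a file (e.g. README.md) that sits next to
        # setup.py, so it can be passed to setuptools as long_description.
        path = os.path.join(os.path.dirname(__file__), fname)
        with open(path) as f:
            return f.read()
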
