Scrapy Tutorial Learning Notes
Scrapy is an application framework for crawling web sites and extracting structured data which can be used for a wide range of useful applications, like data mining, information processing or historical archival.
Even though Scrapy was originally designed for web scraping, it can also be used to extract data using APIs (such as Amazon Associates Web Services) or as a general purpose web crawler.
Install Scrapy in Ubuntu
sudo apt-get install python-dev python-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev
pip install Scrapy
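If the install succeeded, the scrapy command should now be on your PATH; a quick check is:

scrapy version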
Creating a project
scrapy startproject tutorial
Project directory

tutorial/
    scrapy.cfg            # deploy configuration file
    tutorial/             # project's Python module, you'll import your code from here
        __init__.py
        items.py          # project items file
        pipelines.py      # project pipelines file
        settings.py       # project settings file
        spiders/          # a directory where you'll later put your spiders
            __init__.py
            ...
Defining our Item
Items are containers that will be loaded with the scraped data. They are declared by subclassing scrapy.Item and defining its attributes as scrapy.Field objects.
import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
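As a quick aside (a minimal sketch, not part of the tutorial files), an instantiated Item behaves much like a dict:

from tutorial.items import DmozItem   # assuming the items.py shown above

item = DmozItem(title=['Example Book'], link=['http://example.com/'])
item['desc'] = ['An example description']   # standard dict-style assignment
print(item['title'])                        # ['Example Book']
print(item.keys())                          # only the fields that were populated
# item['author'] = 'x' would raise KeyError: DmozItem declares no such field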
Spider
Spiders are classes that you define and that Scrapy uses to scrape information from a domain (or group of domains).
import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
To create a Spider, you must subclass scrapy.Spider and define some attributes:

- name: identifies the Spider. It must be unique.
- start_urls: a list of URLs where the Spider will begin to crawl from (the explicit equivalent is sketched just after this list).
- parse(): a method of the spider that will be called with the downloaded Response object of each start URL. The response is passed to the method as the first and only argument. This method is responsible for parsing the response data and extracting scraped data (as scraped items) and more URLs to follow (as Request objects).
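The start_urls attribute is only a shortcut; a sketch of the explicit equivalent, using Scrapy's standard start_requests() hook, looks like this:

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]

    # Equivalent to listing the URLs in start_urls: yield one Request per
    # start URL and point it at parse() as the callback.
    def start_requests(self):
        urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        self.log("Visited %s" % response.url)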
Crawling
scrapy crawl dmoz
Scrapy creates scrapy.Request objects for each URL in the start_urls attribute of the Spider, and assigns them the parse method of the spider as their callback function.
These Requests are scheduled, then executed, and scrapy.http.Response objects are returned and then fed back to the spider, through the parse() method.
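Besides the scrapy crawl command, a spider can also be run from a plain Python script; a minimal sketch using Scrapy's CrawlerProcess (the spider module path below is an assumption about where you saved the spider) is:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# assuming the spider above was saved as tutorial/spiders/dmoz_spider.py
from tutorial.spiders.dmoz_spider import DmozSpider

process = CrawlerProcess(get_project_settings())
process.crawl(DmozSpider)   # schedules the spider's start_urls as Requests
process.start()             # blocks here until the crawl is finished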
Extracting the Data
Scrapy uses a mechanism based on XPath or CSS expressions, called Scrapy Selectors, to extract data from web pages. You can think of selectors as objects that represent nodes in the document structure.

Selectors have four basic methods (illustrated in the sketch after this list):

- xpath(): returns a list of selectors, each of which represents the nodes selected by the XPath expression given as argument.
- css(): returns a list of selectors, each of which represents the nodes selected by the CSS expression given as argument.
- extract(): returns a unicode string with the selected data.
- re(): returns a list of unicode strings extracted by applying the regular expression given as argument.
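A small self-contained sketch (using a hard-coded HTML snippet rather than a live page) shows the four methods in action:

from scrapy.selector import Selector

html = '<ul><li><a href="/books">Python Books</a></li></ul>'
sel = Selector(text=html)

sel.xpath('//li/a')                          # list of selectors, one per matching node
sel.css('li > a::text').extract()            # [u'Python Books']
sel.xpath('//a/@href').extract()             # [u'/books']
sel.xpath('//a/text()').re(r'Python (\w+)')  # [u'Books']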
scrapy.http.Response objects have a selector attribute, which is an instance of the Selector class. You can run queries on a response by calling response.selector.xpath() or response.selector.css(), or response.xpath() or response.css() for short.
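The shortcut is easy to verify interactively with scrapy shell (a sketch; output omitted):

scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
>>> response.selector.xpath('//title/text()').extract()
>>> response.xpath('//title/text()').extract()    # same result, shorter to type
>>> response.css('title::text').extract()         # CSS equivalent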
Using our Item
Item objects are custom Python dicts; you can access the values of their fields using the standard dict syntax, like:
# assumes "from tutorial.items import DmozItem" at the top of the spider module
def parse(self, response):
    for sel in response.xpath('//ul/li'):
        item = DmozItem()
        item['title'] = sel.xpath('a/text()').extract()
        item['link'] = sel.xpath('a/@href').extract()
        item['desc'] = sel.xpath('text()').extract()
        yield item
Following links
Extract the links to the pages you are interested in, follow them, and then extract the data you want from all of them.
def parse(self, response):
    for href in response.css("ul.directory.dir-col > li > a::attr('href')"):
        url = response.urljoin(href.extract())
        yield scrapy.Request(url, callback=self.parse_articles_follow_next_page)

def parse_articles_follow_next_page(self, response):
    for article in response.xpath("//article"):
        item = ArticleItem()
        # ... extract article data here ...
        yield item

    next_page = response.css("ul.navigation > li.next-page > a::attr('href')")
    if next_page:
        url = response.urljoin(next_page[0].extract())
        yield scrapy.Request(url, self.parse_articles_follow_next_page)
When you yield a Request in a callback method, Scrapy will schedule that request to be sent and register a callback method to be executed when that request finishes.
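One common variation on this pattern (standard Scrapy, but not part of the tutorial code above; the spider and method names below are illustrative) is to carry a partially filled item from one callback to the next through the Request's meta dict:

import scrapy
from tutorial.items import DmozItem   # assuming the items module from this project

class DetailSpider(scrapy.Spider):
    name = "dmoz_detail"
    start_urls = ["http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"]

    def parse(self, response):
        for href in response.css("ul.directory.dir-col > li > a::attr('href')"):
            item = DmozItem()
            item['link'] = [href.extract()]
            request = scrapy.Request(response.urljoin(href.extract()),
                                     callback=self.parse_detail)
            # meta travels with the Request and is available again on the Response
            request.meta['item'] = item
            yield request

    def parse_detail(self, response):
        item = response.meta['item']
        item['title'] = response.xpath('//title/text()').extract()
        yield item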
Storing the scraped data
scrapy crawl dmoz -o items.json
That will generate an items.json file containing all scraped items, serialized in JSON. If you want to perform more complex things with the scraped items, you can write an Item Pipeline.
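As a sketch of what that next step looks like (the pipeline below is illustrative, not part of the tutorial), an Item Pipeline is a plain class in pipelines.py with a process_item() method, enabled through the ITEM_PIPELINES setting:

# tutorial/pipelines.py
from scrapy.exceptions import DropItem

class RequireTitlePipeline(object):     # hypothetical pipeline name
    """Drop any item that was scraped without a title."""

    def process_item(self, item, spider):
        if item.get('title'):
            return item                 # pass the item on to the next pipeline
        raise DropItem("Missing title in %s" % item)

# tutorial/settings.py -- enable the pipeline (lower number = runs earlier)
# ITEM_PIPELINES = {'tutorial.pipelines.RequireTitlePipeline': 300}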