Scrapy is an application framework written in pure Python for crawling websites and extracting structured data, and it has a very wide range of uses. By customizing only a few modules you can easily build a crawler that scrapes web pages, images, and more.
Scrapy uses the Twisted asynchronous networking framework for network communication, which speeds up downloads without requiring you to implement an asynchronous framework yourself, and it exposes a variety of middleware hooks so that all kinds of requirements can be handled flexibly.
Note: the whole program stops only when no requests remain in the scheduler; in other words, URLs whose download failed are re-downloaded by Scrapy as well.
(Figure: Scrapy process diagram, scrapy_process.png)
The spider class defines how one or more websites are crawled. It covers the crawling actions (for example, whether to follow links) and how structured data (items) is extracted from page content. In other words, the spider is where you define the crawling behaviour and how a given page is parsed. scrapy.Spider is the most basic class; every spider you write must inherit from it.
__init__(): initializes the spider name and the start_urls list
start_requests(): calls make_requests_from_url() to build Request objects, hands them to Scrapy for downloading, and returns the responses
parse(): parses the response and returns Items or Requests (a callback must be specified). Items are passed to the Item Pipeline for persistence, while Requests are handed back to Scrapy for downloading and processed by the specified callback (parse() by default); this loop continues until all data has been processed.
name: a string that names the spider; for example, a spider that crawls web.com is usually named web
allowed_domains: a list of domains the spider is allowed to crawl. It is optional and is often commented out, because it restricts the crawling scope
start_urls: the initial list of URLs; when no specific URLs are given, the spider starts crawling from this list
start_requests(self): called when the project starts. If it is not overridden, the default implementation takes URLs from start_urls one by one, builds Requests from them, and sets parse as the callback. If you define this method yourself, start_urls and parse() can be removed. It must return an iterable of scrapy.Request() objects (a minimal sketch follows this list)
parse(self, response): the default callback for Requests whose URL was requested without an explicit callback. It processes the response returned by the page and produces Items or further Request objects
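A minimal sketch tying these pieces together; the spider name, domain, URLs, and XPath are illustrative placeholders, not taken from a real project:
import scrapy

class QuotesSpider(scrapy.Spider):
    # illustrative spider: name, domain and selectors are placeholders
    name = "quotes"
    allowed_domains = ["example.com"]

    def start_requests(self):
        # must yield/return an iterable of scrapy.Request objects
        urls = ["https://example.com/page/1", "https://example.com/page/2"]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # default callback: turn the response into items (or more requests)
        for title in response.xpath("//h2/text()").getall():
            yield {"title": title}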
Why yield instead of return: the volume of scraped data is usually large, and returning everything at once would consume a great deal of memory. With yield, the parse function becomes a generator; each time execution reaches a yield, it hands back a Request object or an Item.
Scrapy fetches the results produced by parse one by one and checks their type: a Request is added to the crawl queue, an Item is handed to the pipeline, and any other type produces an error.
When Scrapy takes the first Request, it does not send it immediately; it simply puts the Request into the queue and keeps pulling further results from the generator.
The parse() function is assigned to the Request as its callback, telling Scrapy to process those requests with parse(): scrapy.Request(url, callback=self.parse)
After scheduling, each Request created by scrapy.Request(url, callback=self.parse) is executed and produces a response object, which is passed back to parse(); this continues until the scheduler has no Requests left (a recursive pattern).
Once the queue is exhausted, parse() finishes its work, and the engine carries out the remaining operations according to the queue and the pipelines.
Before extracting the items of each page, the program first finishes processing all requests already in the request queue, and only then extracts the items.
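Schematically, a single parse call can therefore yield both items and new requests; a minimal sketch of a parse method inside a spider class (the selectors are made up):
    def parse(self, response):
        # items go to the pipeline ...
        for row in response.xpath('//li[@class="entry"]'):
            yield {"text": row.xpath("./text()").get()}
        # ... while new requests go back into the scheduler's queue
        next_page = response.xpath('//a[@rel="next"]/@href').get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)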
Install Scrapy
pip install scrapy
Create a project
scrapy startproject <project_name>
Create a spider file
scrapy genspider <spider_name> <spider_name>.com
Run the project (start it from inside the project directory)
scrapy crawl <spider_name>
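After the two creation commands, the generated project typically has the following layout (using the placeholder names above):
<project_name>/
    scrapy.cfg              # project configuration / deployment entry point
    <project_name>/
        __init__.py
        items.py            # item (data model) definitions
        middlewares.py      # spider and downloader middlewares
        pipelines.py        # item pipelines
        settings.py         # project settings
        spiders/
            __init__.py
            <spider_name>.py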
Only the commonly used settings are annotated here.
# -*- coding: utf-8 -*-
# Scrapy settings for baidu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
# Project name: set when the project is created with startproject
BOT_NAME = 'baidu'
# Package where the spider modules are located
SPIDER_MODULES = ['baidu.spiders']
# Module where newly generated spider files are placed (used by genspider)
NEWSPIDER_MODULE = 'baidu.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
# --------------------------------------------------------
# Set the User-Agent
# USER_AGENT = 'baidu (+http://www.yourdomain.com)'
# ---------------------------------------------------------
# Obey robots.txt rules
# ---------------------------------------------------------
# Whether to obey the robots.txt (crawler) rules. Scrapy obeys them by default; obeying them often means you cannot crawl the pages you want, so the first thing to do when starting a project is usually to change True to False
ROBOTSTXT_OBEY = False
# ---------------------------------------------------------
# Configure maximum concurrent requests performed by Scrapy (default: 16)
# ----------------------------------------------------------
# Maximum number of concurrent requests handled by the downloader (the level of concurrency, not OS threads); default is 16
# CONCURRENT_REQUESTS = 32
# ----------------------------------------------------------
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# ----------------------------------------------------------
# Download delay between requests, in seconds; default is 0
# DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# -----------------------------------------------------------
# -----------------------------------------------------------
# Disable cookies (enabled by default)
# Whether cookies are sent; enabled (True) by default
#COOKIES_ENABLED = False
# -----------------------------------------------------------
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
# -----------------------------------------------------------
# Set the default request headers
# DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
# }
# ------------------------------------------------------------
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
# -------------------------------------------------------------
# Whether to enable the spider middlewares
# SPIDER_MIDDLEWARES = {
# 'baidu.middlewares.BaiduSpiderMiddleware': 543,
# }
# -------------------------------------------------------------
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# -------------------------------------------------------------
# Configure the downloader middlewares
# DOWNLOADER_MIDDLEWARES = {
# 'baidu.middlewares.BaiduDownloaderMiddleware': 543,
# }
# -------------------------------------------------------------
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
# ------------------------------------------------------------
# Configure extensions
# EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
# }
# -------------------------------------------------------------
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
# -------------------------------------------------------------
# Activate the item pipelines
# Note: if you need to process the scraped data (i.e. use pipelines.py), these lines must be uncommented
# ITEM_PIPELINES = {
# 'baidu.pipelines.BaiduPipeline': 300,
# }
# --------------------------------------------------------------
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
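Settings can also be overridden for a single spider via the custom_settings class attribute, which takes precedence over settings.py; a minimal sketch with illustrative spider name and values:
import scrapy

class SlowSpider(scrapy.Spider):
    # illustrative spider: only demonstrates per-spider setting overrides
    name = "slow"
    start_urls = ["https://example.com/"]
    custom_settings = {
        "DOWNLOAD_DELAY": 2,          # wait 2 seconds between requests
        "CONCURRENT_REQUESTS": 8,     # lower the default concurrency of 16
    }

    def parse(self, response):
        pass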
Crawl the Baidu home page: save the page source to a local file in HTML format
Enter the folder where the project should live and create the project
scrapy startproject reptile
Create the spider file
scrapy genspider baidu baidu.com
Modify settings.py
ROBOTSTXT_OBEY = False
Edit baidu.py under the spiders folder
import scrapy

class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    # allowed_domains = ['baidu.com']
    start_urls = ['https://www.baidu.com/']

    def parse(self, response):
        with open("baidu.html", "w", encoding="utf8") as fp:
            fp.write(response.text)
Run it from the command line; the baidu after scrapy crawl is the value of name defined in baidu.py
scrapy crawl baidu
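Besides the command line, the spider can also be launched from a plain Python script using Scrapy's CrawlerProcess; a minimal sketch, assuming it is run from the project root so the project settings are picked up:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from reptile.spiders.baidu import BaiduSpider

process = CrawlerProcess(get_project_settings())
process.crawl(BaiduSpider)
process.start()   # blocks until the crawl is finished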
Crawl Hupu news: get the title, source, time, and body of every news entry on Hupu news
Create the spider file
scrapy genspider hupu hupu.com
Edit hupu.py under the spiders folder
Request the Hupu front page and collect the article links
import scrapy

class HupuSpider(scrapy.Spider):
    name = 'hupu'
    # allowed_domains = ['hupu.com']
    start_urls = ["https://voice.hupu.com/"]

    def parse(self, response):
        # The response object is generated by Scrapy automatically; no manual request is needed.
        # Useful attributes and methods:
        # 1. text: like response.text in the requests module, the page content as text
        # 2. body: like response.content in the requests module, the page content as bytes
        # 3. xpath(): can be called directly, with the same XPath syntax as before
        # 4. meta: receives data passed from other requests, used to carry data between requests
        # Extract the data
        # Get the list of links to the news detail pages
        url_list = response.xpath('//div[@class="news-list"]/ul/li')
        # Loop over each entry
        for url in url_list:
            # Each entry is a Selector object that contains the data we need
            # Get the href value of the a tag
            url_addr = url.xpath('.//a/@href').get()
            # Scrapy provides four ways to pull data out of selectors:
            # 1. extract(): returns the data of all Selector objects as a list; indexing an empty list raises an error (index out of range)
            # 2. extract_first(): returns the first data value as a string rather than a list; returns None if the list is empty
            # 3. getall(): same as extract()
            # 4. get(): same as extract_first()
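Continuing inside the parse method above, a quick illustration of how get() and getall() differ (the XPath is made up):
            links = url.xpath('.//a/@href')   # hypothetical selector
            first = links.get()        # first match as a string, or None if nothing matched
            every = links.getall()     # all matches as a (possibly empty) list of strings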
Set a callback to further process the collected links
import scrapy

class HupuSpider(scrapy.Spider):
    name = 'hupu'
    # allowed_domains = ['hupu.com']
    start_urls = ["https://voice.hupu.com/"]

    def parse(self, response):
        url_list = response.xpath('//div[@class="news-list"]/ul/li')
        for url in url_list:
            url_addr = url.xpath('.//a/@href').get()
            # -----------------------------------------------------------
            # Issue a second request
            # Request object arguments:
            # 1. url: the URL to request
            # 2. callback: the callback function
            # 3. meta: passes data between different requests; it is a dict
            # 4. dont_filter: whether to skip duplicate filtering; default is False, i.e. duplicate requests are filtered out
            # 5. method: the request method; Scrapy defaults to GET
            # 6. headers: set the request headers
            yield scrapy.Request(url=url_addr, callback=self.get_content,
                                 meta={"href": url_addr})
            # -----------------------------------------------------------
Write the callback
    # Function that extracts the data from the detail page
    def get_content(self, response):
        # Extract the data
        # Title
        title = response.xpath('//h1/text()').get().strip()
        # Source
        source = response.xpath('//span[@id="source_baidu"]/a/text()').get()
        # Publication time
        time = response.xpath('//span[@id="pubtime_baidu"]/text()').get()
        # Body content
        content = response.xpath('//p[@data-type="normal"]/text()').getall()
        content = "".join(content)
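The value attached to the request via meta in parse() can be read back here, for example:
        # inside get_content: read the href that parse() passed along
        href = response.meta.get("href")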
Save the data as an xlsx file
(1). Modify the items.py file
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy
class ReptileItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass

# Custom Item class
class HuPuItem(scrapy.Item):
    # Title
    title = scrapy.Field()
    # Source
    source = scrapy.Field()
    # Publication time
    time = scrapy.Field()
    # Body content
    content = scrapy.Field()
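An Item behaves much like a dict, except that only the declared fields may be used; a quick illustration:
item = HuPuItem()
item["title"] = "some title"   # the key must be a field declared above
print(dict(item))              # {'title': 'some title'}
# item["author"] = "x"         # would raise KeyError: no such declared field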
(2). Process the data in hupu.py
import scrapy
# -------------------------------
# Import the class defined in items.py
from ..items import HuPuItem
# -------------------------------

class HupuSpider(scrapy.Spider):
    name = 'hupu'
    # allowed_domains = ['hupu.com']
    start_urls = ["https://voice.hupu.com/"]

    def parse(self, response):
        url_list = response.xpath('//div[@class="news-list"]/ul/li')
        for url in url_list:
            url_addr = url.xpath('.//a/@href').get()
            yield scrapy.Request(url=url_addr, callback=self.get_content,
                                 meta={"href": url_addr})

    def get_content(self, response):
        item = HuPuItem()
        title = response.xpath('//h1/text()').get().strip()
        source = response.xpath('//span[@id="source_baidu"]/a/text()').get()
        time = response.xpath('//span[@id="pubtime_baidu"]/text()').get()
        content = response.xpath('//p[@data-type="normal"]/text()').getall()
        content = "".join(content)
        # -----------------------------------------------------------
        # Store the data in the item
        item["title"] = title
        item["source"] = source
        item["time"] = time
        item["content"] = content
        # Note: the keys here must match the field names defined in items.py
        # (so keep the field names in items.py, the keys, and the values consistent)
        yield item
        # -----------------------------------------------------------
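As a side note, newer Scrapy versions (1.7 and later) also offer cb_kwargs for passing data to a callback; the value arrives as a normal keyword argument rather than going through meta. An illustrative variant of the two methods above (spider name and logging line are made up):
import scrapy

class HupuCbKwargsSpider(scrapy.Spider):
    # illustrative variant of HupuSpider that uses cb_kwargs instead of meta
    name = 'hupu_cb_kwargs'
    start_urls = ["https://voice.hupu.com/"]

    def parse(self, response):
        for url in response.xpath('//div[@class="news-list"]/ul/li'):
            url_addr = url.xpath('.//a/@href').get()
            # each cb_kwargs entry becomes a keyword argument of the callback
            yield scrapy.Request(url=url_addr, callback=self.get_content,
                                 cb_kwargs={"href": url_addr})

    def get_content(self, response, href):
        self.logger.info("detail page: %s", href)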
(3). Save the data in pipelines.py
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
# class ReptilePipeline:
#     def process_item(self, item, spider):
#         return item

# -----------------------------------------------------------
# Save the data to an Excel file
from openpyxl import Workbook

# Custom pipeline class
class HuPuPipeline:
    # open_spider() runs once when the spider starts
    def open_spider(self, spider):
        # Create the workbook
        self.wb = Workbook()
        # Get the active worksheet
        self.ws = self.wb.active
        # Add the header row
        self.ws.append(["标题", "内容", "来源", "发布时间"])

    # Function that processes each item
    def process_item(self, item, spider):
        self.ws.append([item["title"], item["content"], item["source"], item["time"]])
        self.wb.save("hupu_news.xlsx")
        return item

    # close_spider() runs once when the spider finishes
    def close_spider(self, spider):
        self.wb.close()
# -----------------------------------------------------------
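Calling wb.save() inside process_item rewrites the whole file for every item, which is simple but slow on large crawls; a common variation is to save once when the spider finishes, e.g. by replacing the last two methods with:
    def process_item(self, item, spider):
        self.ws.append([item["title"], item["content"], item["source"], item["time"]])
        return item

    def close_spider(self, spider):
        # write the workbook once at the end instead of after every item
        self.wb.save("hupu_news.xlsx")
        self.wb.close()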
(4). Modify the settings.py configuration file to activate the pipeline
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # 'reptile.pipelines.ReptilePipeline': 300,
    # ----------------------------------------------------------
    # Register the pipeline class we defined
    'reptile.pipelines.HuPuPipeline': 300,
    # ----------------------------------------------------------
}
Run the spider
In the project directory (the one containing scrapy.cfg), execute the crawl command
scrapy crawl hupu
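As an aside, for quick dumps Scrapy's built-in feed exports can write the yielded items straight to a file from the command line without a custom pipeline (CSV and JSON are supported out of the box), for example:
scrapy crawl hupu -o hupu_news.csv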