Scrapy is an efficient, structured web-scraping framework written in pure Python.
Scrapy is an application framework written to crawl websites and extract structured data. It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy has a wide range of uses, including data mining, monitoring, and automated testing. It uses the Twisted asynchronous networking library to handle network communication.
Why use it:
1. It lets us concentrate on requests and parsing.
2. It meets enterprise-level requirements.
Scrapy supports Python 2.7 and Python 3.4 or later.
Python packages can be installed globally (also known as system-wide) or in user space.
I. Direct installation
II. Installation under Anaconda
Install conda
Older conda package lists: https://docs.anaconda.com/anaconda/packages/oldpkglists/
Installation guide: https://blog.csdn.net/ychgyyn/article/details/82119201
Install Scrapy: conda install scrapy
Scrapy is currently tested against the latest versions of lxml, Twisted, and pyOpenSSL, and is compatible with recent Ubuntu releases. It also supports older Ubuntu versions such as Ubuntu 14.04, although there may be TLS connection issues.
Notes for installing on Ubuntu
Do not use the python-scrapy package provided by Ubuntu: it is usually too old and too slow to keep up with the latest Scrapy.
To install Scrapy on an Ubuntu (or Ubuntu-based) system, you need to install these dependencies:
sudo apt-get install python-dev python-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev
If you want to install Scrapy for Python 3, you also need the Python 3 development headers:
sudo apt-get install python3-dev
Inside a virtualenv, you can then install Scrapy with pip:
pip install scrapy
(
spiders - the crawlers
items - the data items
engine - the engine
scheduler - the scheduler
downloader - the downloader
item pipelines - the item pipelines
middleware - the middlewares
)
The figure above shows the architecture of the Scrapy framework and its components, together with the data flow inside the system (indicated by the red arrows).
The data flow in Scrapy is controlled by the execution engine and proceeds as follows:
Scrapy Engine
The engine controls the data flow between all components of the system and triggers events when certain actions occur.
Scheduler
The scheduler receives requests from the engine and enqueues them so that the engine can ask for them later.
Downloader
The downloader fetches web pages and feeds them to the engine, which in turn passes them on to the spiders.
Spider
Spiders are custom classes written by users to parse responses and extract data from them, or to produce additional requests to follow.
Item Pipeline
The item pipeline post-processes the data once it has been extracted by the spiders. Typical tasks include cleaning, validation, and persistence (such as storing the data in a database).
Downloader middlewares
Downloader middlewares are specific hooks that sit between the engine and the downloader; they process requests passed from the engine to the downloader and responses passed from the downloader back to the engine.
Use a downloader middleware if you need to do one of the following:
process a request just before it is sent to the downloader (i.e. right before Scrapy sends the request to the website)
change a response before it is handed to a spider
send a new request instead of passing the received response on to a spider
pass a response to a spider without fetching the web page
silently drop some requests
Spider middlewares
Spider middlewares are specific hooks that sit between the engine and the spiders; they can process incoming responses as well as outgoing items and requests.
Use a spider middleware if you need to:
post-process the requests or items coming out of spider callbacks
process start_requests
handle spider exceptions
call an errback instead of the callback for some requests, based on the response content (a minimal sketch follows below)
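For example, a minimal spider-middleware sketch (the class name and the custom/item_count stats key are made up for illustration); registered under SPIDER_MIDDLEWARES in settings.py, it simply counts the items produced by spider callbacks:
from scrapy import Request

class ItemCountSpiderMiddleware:
    def process_spider_output(self, response, result, spider):
        # result is the iterable of requests/items returned by a spider callback
        for obj in result:
            if not isinstance(obj, Request):
                spider.crawler.stats.inc_value("custom/item_count")  # count anything that is not a Request
            yield obj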
1. Create a project:
scrapy startproject <project_name> [project_dir]
PS: "<>" marks a required argument, "[]" an optional one
scrapy startproject db
2. cd into the project directory
scrapy genspider [options] <name> <domain>
scrapy genspider example example.com
The spider is created under project/spiders; here example is the spider (file) name and example.com is the domain/start URL.
3. Run the project
scrapy crawl <spider_name>  # pay attention to the workflow
scrapy crawl douban -o douban.csv
4. In settings.py, configure ROBOTSTXT_OBEY and DEFAULT_REQUEST_HEADERS
scrapy shell <url> (e.g. a start_url) fetches the response used in our project,
so we can test XPath expressions against it.
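For example, a quick check inside the shell might look like this (a sketch assuming the Douban Top 250 page structure used later in this section):
# inside scrapy shell, response has already been fetched from the URL you passed
response.status
response.xpath('//div[@class="info"]//span[@class="title"]/text()').extract_first()   # first movie title
response.xpath('//div[@class="info"]').xpath('./div/a/span/text()').extract()[:5]     # a few raw title strings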
db is the outer project folder.
In the project we create a Db250Spider class. It must inherit from scrapy.Spider and needs to define the following three attributes:
name: the spider's name, required and unique
start_urls: the list of initial URLs
parse(self, response) method: called once each initial URL has been downloaded. This function has to do two things:
parse the response, wrap the data into an item object, and return it
extract new URLs that need to be downloaded, create new requests for them, and return those
The commands are as follows.
Syntax: scrapy genspider [-t template] <name> <domain>
Run: scrapy genspider db250 movie.douban.com
This generates db250.py under the spiders directory; after adjusting start_urls,
the file content looks like this:
import scrapy
class DuanziSpider(scrapy.Spider):
    name = 'db'  # the spider's name; must match the name used with scrapy crawl, usually the same as the file name
    allowed_domains = ['movie.douban.com']  # allowed domains; usually kept as generated
    start_urls = ['https://movie.douban.com/top250/']  # starting URL(s), usually the site's main page
Next, let's flesh out this spider. The idea is the following:
parse() is called when the request completes. response is the response returned for the request and can be parsed conveniently with the xpath or css methods. We analyse the page structure, extract the data we need, build a dict, convert it to a string with the json module, and write it to film.txt in the project root.
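A minimal sketch of that parse method (the XPath expressions match the Douban page structure assumed by the complete example further down):
def parse(self, response):
    with open("film.txt", "a", encoding="utf-8") as f:
        for node in response.xpath('//div[@class="info"]'):
            item = {
                "film_name": node.xpath("./div/a/span/text()").extract_first(),
                "score": node.xpath('./div/div/span[@property="v:average"]/text()').extract_first(),
            }
            f.write(json.dumps(item, ensure_ascii=False) + "\n")   # json must be imported at module level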
Running the spider
Now that the spider is written, how do we start it and run the crawl?
First cd into the project root, then run the command scrapy crawl db250 to start the spider.
Following links
The spider above only crawls a single page, which of course is not what we want; we need to crawl the next page, and the next, until all the information has been downloaded. We can extract the links from the page or build them from a pattern. Let's now change our spider so that it recursively follows the next-page links and extracts data from them.
We create a class variable page_num to record the page we are currently on. In the parse function we extract the movie information, then increment page_num on the spider object by 1, build the URL of the next page, create a scrapy.Request object for it, and return it. If no movie information can be extracted from the response, we conclude that we have reached the last page and parse simply returns.
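In outline, the pagination part of parse() looks like this sketch (the complete spider appears below):
# at the end of parse(), after the item loop
if not node_list:      # nothing extracted: assume we reached the last page
    return
self.page_num += 1
next_url = "https://movie.douban.com/top250?start={}&filter=".format(self.page_num * 25)
yield scrapy.Request(next_url, callback=self.parse)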
Defining an item pipeline
So far the spider we have written with Scrapy does not show much of an advantage, and it has one serious problem: the file handling. Every call to parse opens and closes the file, which wastes resources. After parse has extracted the information we need, it can package it into a dict or a scrapy.Item object (usually an Item object; more on that below) and return it. That object is then sent to the item pipeline, which processes it by running several components in sequence. Each item pipeline component is a Python class that implements a few simple methods. It receives an item, performs an action on it, and decides whether the item should continue through the pipeline or be dropped and processed no further.
Typical uses of item pipelines are:
cleaning HTML data
validating scraped data (checking that items contain certain fields)
checking for duplicates (and dropping them)
persisting the scraped items (for example to a database), as sketched below
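For instance, the duplicate check could be a small pipeline like this sketch (the seen set and the film_name key are only for illustration):
from scrapy.exceptions import DropItem

class DuplicatesPipeline:
    def open_spider(self, spider):
        self.seen = set()

    def process_item(self, item, spider):
        key = item.get("film_name")
        if key in self.seen:
            raise DropItem("duplicate item: %s" % key)   # dropped items do not reach later pipelines
        self.seen.add(key)
        return item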
First we modify the spider file as shown below.
Defining items
The main goal of scraping is to extract structured data from unstructured sources, typically web pages. Scrapy spiders can return the extracted data as Python dicts. While convenient and familiar, dicts lack structure: it is easy to mistype a field name or return inconsistent data, especially in larger projects with many spiders.
To define a common output data format, Scrapy provides the Item class. Item objects are simple containers used to collect the scraped data. They offer a dictionary-like API and a convenient syntax for declaring their available fields. A scrapy.Item object is used much like a Python dict. Edit the items.py file in the project directory.
Then we only need to import the Item class we defined into the spider and, after instantiating it, use it to structure the data.
import scrapy
class DbItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
film_name = scrapy.Field()
director_name = scrapy.Field()
score = scrapy.Field()
Douban Top 250: information about 250 movies
For each movie: title, director information (may include the cast), and score
Save the movie information directly to a local file
Save the movie information through the pipeline
# -*- coding: utf-8 -*-
import json
import scrapy
from ..items import DbItem  # DbItem behaves like a safe dict
class Db250Spider(scrapy.Spider):  # inherits from the base Spider class
    name = 'db250'  # the spider's name; required and must be unique
    # allowed_domains = ['movie.douban.com']  # allowed domains; optional, if absent any domain is accepted
    start_urls = ['https://movie.douban.com/top250']  # initial URLs; required
    page_num = 0

    def parse(self, response):  # parse callback: handles the response data
        node_list = response.xpath('//div[@class="info"]')
        with open("film.txt", "w", encoding="utf-8") as f:
            for node in node_list:
                # movie title
                film_name = node.xpath("./div/a/span/text()").extract()[0]
                # director info
                director_name = node.xpath("./div/p/text()").extract()[0].strip()
                # score
                score = node.xpath('./div/div/span[@property="v:average"]/text()').extract()[0]
                # storing without the pipeline
                item = {}
                item["item_pipe"] = film_name
                item["director_name"] = director_name
                item["score"] = score
                content = json.dumps(item, ensure_ascii=False)
                f.write(content + "\n")
                # storing through the pipeline
                item_pipe = DbItem()  # create a DbItem object and use it like a dict
                item_pipe['film_name'] = film_name
                item_pipe['director_name'] = director_name
                item_pipe['score'] = score
                yield item_pipe
        # send the request for the next page
        # build its URL
        self.page_num += 1
        if self.page_num == 3:
            return
        page_url = "https://movie.douban.com/top250?start={}&filter=".format(self.page_num * 25)
        yield scrapy.Request(page_url)
# pattern of the page URLs:
# "https://movie.douban.com/top250?start=25&filter="
# "https://movie.douban.com/top250?start=50&filter="
# "https://movie.douban.com/top250?start=75&filter="
import scrapy
class DbItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
film_name=scrapy.Field()
director_name=scrapy.Field()
score=scrapy.Field()
import json
class DbPipeline(object):
    def open_spider(self, spider):
        # runs once when the spider is opened
        self.f = open("film_pipe.txt", "w", encoding="utf-8")

    def process_item(self, item, spider):
        json_data = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.f.write(json_data)
        return item

    def close_spider(self, spider):
        # runs once when the spider is closed
        self.f.close()  # close the file
# -*- coding: utf-8 -*-
# Scrapy settings for db project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'db'
SPIDER_MODULES = ['db.spiders']
NEWSPIDER_MODULE = 'db.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'db (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'db.middlewares.DbSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'db.middlewares.DbDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'db.pipelines.DbPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
so we change it to ROBOTSTXT_OBEY = False
example.py
import scrapy
import json
from ..items import DbItem # 是一个安全的字典
class ExampleSpider(scrapy.Spider):
name = 'example'
allowed_domains = ['movie.douban.com'] # 限制域名站点URL
start_urls = ['https://movie.douban.com/top250/'] # 爬取URL
page_num = 0
def parse(self, response):
node_list = response.xpath('//div[@class="info"]')
with open("film.txt", "w", encoding="utf-8") as f:
for node in node_list:
# 电影名字
film_name = node.xpath("./div/a/span/text()").extract()[0]
# 导演信息
director_name = node.xpath("./div/p/text()").extract()[0].strip()
# 评分
score = node.xpath('./div/div/span[@property="v:average"]/text()').extract()[0]
# 非管道存储
item = {}
item["item_pipe"] = film_name
item["director_name"] = director_name
item["score"] = score
content = json.dumps(item, ensure_ascii=False)
f.write(content + "\n")
# 使用管道存储
item_pipe = DbItem() # 创建Dbitem对象 当成字典来使用
item_pipe['film_name'] = film_name
item_pipe['director_name'] = director_name
item_pipe['score'] = score
yield item_pipe
# 发送新一页的请求
# 构造url
self.page_num += 1
if self.page_num == 3:
return
page_url = "https://movie.douban.com/top250?start={}&filter=".format(self.page_num * 25)
yield scrapy.Request(page_url)
items.py
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy
class DbItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
film_name = scrapy.Field()
director_name = scrapy.Field()
score = scrapy.Field()
pipelines.py
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import json
class DbPipeline:
def open_spider(self, spider):
# 爬虫文件开启,此方法执行
self.f = open("film_pipe.txt", "w", encoding="utf-8")
def process_item(self, item, spider):
json_data = json.dumps(dict(item), ensure_ascii=False) + "\n"
self.f.write(json_data)
return item
def close_spider(self, spider):
# 爬虫文件关闭,此方法执行
self.f.close() # 关闭文件
settings.py
ROBOTSTXT_OBEY = False
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
}
ITEM_PIPELINES = {
'db.pipelines.DbPipeline': 300,
}
1. The main method for scraping detail pages (second-level pages) is get_detail:
def get_detail(self,response):
pass
2. The key to passing data between requests and stitching it together is the meta parameter.
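The idea in miniature (a sketch; the full spider below shows it in context):
# in parse(): attach the partly filled item to the request for the detail page
yield scrapy.Request(detail_url, callback=self.get_detail, meta={"info": item_pipe})

# in get_detail(): the dict travels along with the response
def get_detail(self, response):
    item = DbItem()
    item.update(response.meta["info"])    # recover the data passed from parse()
    item["description"] = response.xpath('//span[@property="v:summary"]/text()').extract_first()
    yield item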
# -*- coding: utf-8 -*-
import json
import scrapy
from ..items import DbItem #是一个安全的字典
class Db250Spider(scrapy.Spider):#继承基础类
name = 'db250' #爬虫文件名字 必须存在且唯一
# allowed_domains = ['movie.douban.com'] #允许的域名 可以不存在 不存在 任何域名都可以
start_urls = ['https://movie.douban.com/top250']#初始url 必须要存在
page_num=0
def parse(self, response):#解析函数 处理响应数据
node_list=response.xpath('//div[@class="info"]')
for node in node_list:
#电影名字
film_name=node.xpath("./div/a/span/text()").extract()[0]
#导演信息
director_name=node.xpath("./div/p/text()").extract()[0].strip()
#评分
score=node.xpath('./div/div/span[@property="v:average"]/text()').extract()[0]
#使用管道存储
item_pipe=DbItem() #创建Dbitem对象 当成字典来使用
item_pipe['film_name']=film_name
item_pipe['director_name']=director_name
item_pipe['score']=score
# yield item_pipe
# print("电影信息",dict(item_pipe))
# 电影简介
detail_url = node.xpath('./div/a/@href').extract()[0]
yield scrapy.Request(detail_url,callback=self.get_detail,meta={"info":item_pipe})
#发送新一页的请求
#构造url
self.page_num += 1
if self.page_num==4:
return
page_url="https://movie.douban.com/top250?start={}&filter=".format(self.page_num*25)
yield scrapy.Request(page_url)
    def get_detail(self, response):
        item = DbItem()
        # parse the detail-page response
        # 1. meta travels back together with the response  2. read it via response.meta  3. merge it into the new item with update()
        info = response.meta["info"]
        item.update(info)
        # synopsis text
        description = response.xpath('//div[@id="link-report"]//span[@property="v:summary"]/text()').extract()[0].strip()
        # print('description', description)
        item["description"] = description
        # save through the pipeline
        yield item

# target data: the movie info plus the synopsis; the synopsis is in the HTML source of the second-level page
# request flow: visit the first-level page, extract the movie info and the second-level URL, visit that URL, extract the synopsis from its data
# storage concern: responses arrive out of order, so the item is passed along via meta to keep each movie's data together
import scrapy
class DbItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
film_name=scrapy.Field()
director_name=scrapy.Field()
score=scrapy.Field()
description=scrapy.Field()
import json
class DbPipeline(object):
def open_spider(self,spider):
#爬虫文件开启,此方法执行
self.f=open("film_pipe.txt","w",encoding="utf-8")
def process_item(self, item, spider):
json_data=json.dumps(dict(item),ensure_ascii=False)+"\n"
self.f.write(json_data)
return item
def close_spider(self,spider):
# 爬虫文件关闭,此方法执行
self.f.close() #关闭文件
Most of the comments have been removed here.
# -*- coding: utf-8 -*-
# Scrapy settings for db project
BOT_NAME = 'db'
SPIDER_MODULES = ['db.spiders']
NEWSPIDER_MODULE = 'db.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'db (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'db.pipelines.DbPipeline': 300,
}
scrapy shell is used for debugging.
In the project directory, run scrapy shell https://movie.douban.com/top250 and you get the output below.
scrapy shell automatically loads the project settings (the robots.txt policy, the request headers, and so on), so the request it sends gets the correct response.
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)  # the scrapy module
[s]   crawler    <scrapy.crawler.Crawler object at 0x000002624C415F98>  # the crawler object
[s]   item       {}  # the item object
[s]   request    <GET https://movie.douban.com/top250>  # the request object
[s]   response   <200 https://movie.douban.com/top250>  # the response object
[s]   settings   <scrapy.settings.Settings object at 0x000002624C415EB8>  # the settings
[s]   spider     <DefaultSpider 'default' at 0x2624c8ed3c8>  # the spider
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)  # fetch a response from a URL
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects  # fetch a response from a Request object
[s]   shelp()           Shell help (print this help)  # list these commands
[s]   view(response)    View response in a browser  # open the response in the local browser
The Scrapy shell is essentially just an ordinary Python shell,
except that it provides some ready-made objects and shortcut methods that make debugging easier.
Starting the shell
The command syntax for starting the Scrapy shell is:
scrapy shell [option] [url|file]
url is the address you want to scrape
Note: when analysing a local file, make sure to include a path prefix; otherwise scrapy shell treats the argument as a URL.
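For example (hypothetical paths):
scrapy shell ./page.html                       # a relative path starting with ./ is treated as a local file
scrapy shell file:///absolute/path/page.html   # an absolute path needs the file:// scheme
scrapy shell page.html                         # without ./ the argument is treated as a URL and will fail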
Scrapy selectors are instances of the scrapy.Selector class, constructed by passing either text or a TextResponse object. The selector automatically chooses the best parsing rules (XML vs HTML) based on the input type. It is constructed as follows:
from scrapy.selector import Selector
from scrapy.http import HtmlResponse
# construct a selector from text
body = '<html><body><span>good</span></body></html>'  # the sample HTML from the original was lost in rendering; any small snippet works
select = Selector(text=body)
# construct a selector from a response
response = HtmlResponse(url='http://www.example.com', body=body, encoding='utf-8')
select1 = Selector(response=response)
# for convenience, a response exposes a selector on its .selector attribute;
# use this shortcut whenever possible
response.selector.xpath('//div')
print(isinstance(response.selector, Selector))
Scrapy provides a parsing mechanism based on the lxml library; these objects are called selectors,
because they "select" the parts of an HTML document specified by an XPath or CSS expression.
The Scrapy selector API is very small and very simple.
Selectors provide two methods for selecting tags:
xpath() - uses XPath syntax
css() - uses CSS selector syntax
Shortcuts:
response.xpath()
response.css()
Both return a list of selectors.
Extracting text:
selector.extract() - returns a list of strings
selector.extract_first() - returns the text of the first selector, or None if there is none
Nested selectors
Sometimes selecting a tag requires chaining several selection calls (.xpath() or .css()):
response.css('img').xpath('@src')
Selector also has a .re() method for extracting data with regular expressions.
It returns a list of strings.
It is generally used after xpath() or css() to filter the text data.
re_first() returns the first matching string.
html_str = """
导演: 弗兰克·德拉邦特 Frank Darabont   主演: 蒂姆·罗宾斯 Tim Robbins /...
1994 / 美国 / 犯罪 剧情
1980500人评价
希望让人自由。
"""  # only the text of the original sample survived; its HTML tags were stripped when the page was rendered
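A small sketch of .re()/.re_first() on a snippet like the one above (the HTML wrapper is assumed here, since the original tags were lost):
from scrapy.selector import Selector

sel = Selector(text='<p class="pl">1994 / 美国 / 犯罪 剧情</p><span>1980500人评价</span>')
sel.xpath('//span/text()').re(r'(\d+)人评价')     # ['1980500']
sel.xpath('//p/text()').re_first(r'(\d{4})')      # '1994'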
First, the spider generates the initial requests to crawl the first URLs and specifies the callback function to be called with the responses downloaded from those requests.
In the callback function you parse the response (the web page) and return dicts, Item objects, Request objects, or an iterable of these objects.
In the callback you usually parse the page content with selectors (though you can also use BeautifulSoup, lxml, or any mechanism you prefer) and build items from the parsed data.
Finally, the items returned from the spider are typically persisted to a database (in an item pipeline) or written to a file using feed exports.
The spider name: name
A string that defines the name of this spider. The spider name is how Scrapy locates (and instantiates) the spider, so it must be unique. It is the most important spider attribute and it is required.
Start URLs: start_urls
The list of URLs the spider will start crawling from. The first pages downloaded will be the ones listed here; subsequent requests are generated successively from the data contained in the start URLs.
Custom settings: custom_settings
Settings that override the project-wide settings when this spider is run. They must be defined as a class attribute, because the settings are updated before the spider is instantiated.
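For example (a sketch):
class MySpider(scrapy.Spider):
    name = "my_spider"
    custom_settings = {
        "DOWNLOAD_DELAY": 2,        # overrides the project-wide value, but only for this spider
        "ROBOTSTXT_OBEY": False,
    }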
class Spider(object_ref):
"""Base class for scrapy spiders. All spiders must inherit from this
class.
"""
name = None
custom_settings = None
def __init__(self, name=None, **kwargs):
if name is not None:
self.name = name
elif not getattr(self, 'name', None):
raise ValueError("%s must have a name" % type(self).__name__)
self.__dict__.update(kwargs)
if not hasattr(self, 'start_urls'):
self.start_urls = []
A Python logger created with the spider's name. You can use it to send log messages.
@property
def logger(self):
logger = logging.getLogger(self.name)
return logging.LoggerAdapter(logger, {'spider': self})
def log(self, message, level=logging.DEBUG, **kw):
"""Log the given message at the given log level
This helper wraps a log call to the logger within the spider, but you
can use it directly (e.g. Spider.logger.info('msg')) or use any other
Python logger too.
"""
self.logger.log(level, message, **kw)
from_crawler
This is the class method Scrapy uses to create your spiders. You normally do not need to override it.
@classmethod
def from_crawler(cls, crawler, *args, **kwargs):
spider = cls(*args, **kwargs)
spider._set_crawler(crawler)
return spider
def _set_crawler(self, crawler):
self.crawler = crawler
self.settings = crawler.settings
crawler.signals.connect(self.close, signals.spider_closed)
start_requests()
This method must return an iterable containing the first requests to crawl. It is called only once.
def start_requests(self):
cls = self.__class__
if not self.start_urls and hasattr(self, 'start_url'):
raise AttributeError(
"Crawling could not start: 'start_urls' not found "
"or empty (but found 'start_url' attribute instead, "
"did you miss an 's'?)")
if method_is_overridden(cls, Spider, 'make_requests_from_url'):
warnings.warn(
"Spider.make_requests_from_url method is deprecated; it "
"won't be called in future Scrapy releases. Please "
"override Spider.start_requests method instead (see %s.%s)." % (
cls.__module__, cls.__name__
),
)
for url in self.start_urls:
yield self.make_requests_from_url(url)
else:
for url in self.start_urls:
yield Request(url, dont_filter=True)
parse - the default callback
This is the default callback Scrapy uses to process downloaded responses when their requests do not specify a callback.
def parse(self, response):
raise NotImplementedError('{}.parse callback is not defined'.format(self.__class__.__name__))
@staticmethod
def close(spider, reason):
        closed = getattr(spider, 'closed', None)
if callable(closed):
return closed(reason)
def __str__(self):
return "<%s %r at 0x%0x>" %(type(self).__name__, self.name, id(self))
__repr__ = __str__
Called when the spider is closed.
Creating a CrawlSpider spider file
Command:
scrapy genspider -t crawl zongheng xxx.com
Rule
from scrapy.spiders import CrawlSpider, Rule
Purpose: Rule defines the crawl rules of a CrawlSpider.
Parameters (the Rule constructor signature):
(self, link_extractor=None, callback=None, cb_kwargs=None, follow=None,
process_links=None, process_request=None, errback=None):
link_extractor: a LinkExtractor object that defines how links are extracted from each crawled page.
callback: the callback function that handles the responses produced from the links extracted by link_extractor.
cb_kwargs: a dict of keyword arguments to pass to the callback function.
follow: whether links should also be followed from the responses extracted with this rule.
Two values: True or False. With follow=True the responses produced by link_extractor are fed back through the rules; with False they are not.
process_links: a callback used to filter the links extracted by link_extractor.
process_request: a callback used to filter the requests.
errback: a function called when an exception is raised while handling the request.
LinkExtractor
from scrapy.linkextractors import LinkExtractor
LinkExtractor is also a class defined by the Scrapy framework.
Its sole purpose is to extract, from web pages, the links that will eventually be followed.
We can also define our own link extractor; it only needs to provide a method named extract_links that receives a Response object
and returns a list of scrapy.link.Link objects.
def __init__(self, allow=(), deny=(), allow_domains=(), deny_domains=(), restrict_xpaths=(),tags=('a', 'area'), attrs=('href',), canonicalize=False,unique=True, process_value=None, deny_extensions=None, restrict_css=(), strip=True, restrict_text=None):
allow: a regular expression (or a list of them) that the URLs must match; if empty, all URLs match
deny: a regular expression (or list) of URLs to exclude; if empty, nothing is excluded
allow_domains: a string, or list of strings, of allowed domains
deny_domains: a string, or list of strings, of excluded domains
restrict_xpaths: an XPath expression (or list) restricting the page regions links are extracted from
restrict_css: a CSS expression restricting the page regions links are extracted from
restrict_text: a regular expression the link text must match
tags=('a', 'area'): the tags to consider
attrs=('href',): the attributes to consider
canonicalize: canonicalize each extracted URL
unique: filter out duplicate links
process_value: a function that receives each value extracted from the tags
deny_extensions: extensions to ignore when extracting links, e.g. .jpg and similar
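A quick sketch of using a LinkExtractor on its own (the URL pattern and XPath are illustrative, borrowed from the Zongheng example below), e.g. inside scrapy shell:
from scrapy.linkextractors import LinkExtractor

le = LinkExtractor(allow=r'http://book.zongheng.com/book/\d+.html',
                   restrict_xpaths='//div[@class="bookname"]')
for link in le.extract_links(response):     # returns a list of scrapy.link.Link objects
    print(link.url, link.text)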
Second-level pages: novel details
Third-level pages: chapter catalogue
Fourth-level pages: chapter content
category、book_name、author、status、book_nums、description、c_time、book_url、catalog_url、
title、content、ordernum、c_time、chapter_url、catalog_url、
Based on the target data (the data to be stored), define the Rule objects in rules, configure callback functions as needed, and parse the responses to obtain the data you want.
# -*- coding: utf-8 -*-
import datetime
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from ..items import NovelItem,ChapterItem,ContentItem
class ZhSpider(CrawlSpider):
name = 'zh'
allowed_domains = ['book.zongheng.com']
    start_urls = ['http://book.zongheng.com/store/c0/c0/b0/u1/p1/v0/s1/t0/u0/i1/ALL.html']  # starting URL
    # crawl rules: 1. extract URLs (LinkExtractor objects)  2. turn them into requests  3. define how the responses are handled
rules = (
Rule(LinkExtractor(allow=r'http://book.zongheng.com/book/\d+.html',restrict_xpaths='//div[@class="bookname"]'),
callback='parse_book', follow=True,process_links="process_booklink"),
Rule(LinkExtractor(allow=r'http://book.zongheng.com/showchapter/\d+.html'), callback='parse_catalog', follow=True,),
Rule(LinkExtractor(allow=r'http://book.zongheng.com/chapter/\d+/\d+.html',restrict_xpaths='//ul[@class="chapter-list clearfix"]'),
callback='get_content', follow=False,process_links="process_chpter"),
)
def process_booklink(self,links):
#处理 LinkExtractor 提取到的url
for index,link in enumerate(links):
if index==0:
print(index,link.url)
yield link
else:
return
def process_chpter(self,links):
for index,link in enumerate(links):
if index<=20:
yield link
else:
return
def parse_book(self, response):
category = response.xpath('//div[@class="book-label"]/a/text()').extract()[1]
book_name = response.xpath('//div[@class="book-name"]/text()').extract()[0].strip()
author = response.xpath('//div[@class="au-name"]/a/text()').extract()[0]
status = response.xpath('//div[@class="book-label"]/a/text()').extract()[0]
book_nums = response.xpath('//div[@class="nums"]/span/i/text()').extract()[0]
description = ''.join(response.xpath('//div[@class="book-dec Jbook-dec hide"]/p/text()').re("\S+"))
c_time = datetime.datetime.now()
book_url = response.url
catalog_url = response.css("a").re('http://book.zongheng.com/showchapter/\d+.html')[0]
item = NovelItem()
item["category"] = category
item["book_name"] = book_name
item["author"] = author
item["status"] = status
item["book_nums"] = book_nums
item["description"] = description
item["c_time"] = c_time
item["book_url"] = book_url
item["catalog_url"] = catalog_url
yield item
def parse_catalog(self,response):
a_tags = response.xpath('//ul[@class="chapter-list clearfix"]/li/a')
chapter_list = []
catalog_url = response.url
for a in a_tags:
print("解析catalog_url")
title = a.xpath("./text()").extract()[0]
chapter_url = a.xpath("./@href").extract()[0]
chapter_list.append((title, chapter_url, catalog_url))
item = ChapterItem()
item["chapter_list"] = chapter_list
yield item
def get_content(self,response):
chapter_url = response.url
content = ''.join(response.xpath('//div[@class="content"]/p/text()').extract())
c_time = datetime.datetime.now()
# 向管道传递数据
item = ContentItem()
item["chapter_url"] = chapter_url
item["content"] = content
yield item
The fields in the items file are defined according to what the target data requires.
import scrapy
class ZonghengItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
pass
class NovelItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
category = scrapy.Field()
book_name = scrapy.Field()
author = scrapy.Field()
status = scrapy.Field()
book_nums = scrapy.Field()
description = scrapy.Field()
c_time = scrapy.Field()
book_url = scrapy.Field()
catalog_url = scrapy.Field()
class ChapterItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
chapter_list = scrapy.Field()
catalog_url = scrapy.Field()
class ContentItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
content = scrapy.Field()
chapter_url = scrapy.Field()
Here the data is written to a database.
# -*- coding: utf-8 -*-
import pymysql
from zongheng.items import NovelItem,ChapterItem,ContentItem
import datetime
from scrapy.exceptions import DropItem
class ZonghengPipeline(object):
#连接数据库
def open_spider(self,spider):
data_config = spider.settings["DATABASE_CONFIG"]
print("数据库内容",data_config)
if data_config["type"] == "mysql":
self.conn = pymysql.connect(**data_config["config"])
self.cursor = self.conn.cursor()
spider.conn = self.conn
spider.cursor = self.cursor
#数据存储
def process_item(self, item, spider):
#1.小说信息存储
if isinstance(item,NovelItem):
sql="select id from novel where book_name=%s and author=%s"
self.cursor.execute(sql,(item["book_name"],item["author"]))
if not self.cursor.fetchone():
#写入小说数据
sql="insert into novel(category,book_name,author,status,book_nums,description,c_time,book_url,catalog_url)" \
"values (%s,%s,%s,%s,%s,%s,%s,%s,%s)"
self.cursor.execute(sql,(
item["category"],
item["book_name"],
item["author"],
item["status"],
item["book_nums"],
item["description"],
item["c_time"],
item["book_url"],
item["catalog_url"],
))
self.conn.commit()
return item
#2.章节信息存储
elif isinstance(item,ChapterItem):
#写入 目录信息
sql = "insert into chapter(title,ordernum,c_time,chapter_url,catalog_url) values(%s,%s,%s,%s,%s)"
data_list=[]
for index,chapter in enumerate(item["chapter_list"]):
c_time = datetime.datetime.now()
ordernum=index+1
title,chapter_url,catalog_url=chapter
data_list.append((title,ordernum,c_time,chapter_url,catalog_url))
self.cursor.executemany(sql,data_list)
self.conn.commit()
return item
#3.章节内容存储
elif isinstance(item, ContentItem):
sql="update chapter set content=%s where chapter_url=%s"
content=item["content"]
chapter_url=item["chapter_url"]
self.cursor.execute(sql,(content,chapter_url))
self.conn.commit()
return item
        else:
            raise DropItem("unexpected item type")  # DropItem must be raised, not returned
#关闭数据库
def close_spider(self,spider):
data_config=spider.settings["DATABASE_CONFIG"]#settings里设置数据库
if data_config["type"]=="mysql":
self.cursor.close()
self.conn.close()
# -*- coding: utf-8 -*-
BOT_NAME = 'zongheng'
SPIDER_MODULES = ['zongheng.spiders']
NEWSPIDER_MODULE = 'zongheng.spiders'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'zongheng.pipelines.ZonghengPipeline': 300,
}
DATABASE_CONFIG={
"type":"mysql",
"config":{
"host":"localhost",
"port":3306,
"user":"root",
"password":"123789",
"db":"zhnovel",
"charset":"utf8"
}
}
LOG_FORMAT='%(asctime)s [%(name)s] %(levelname)s: %(message)s'
LOG_DATEFORMAT='%Y'
Here the links are checked conditionally, and the set seen is used to de-duplicate them.
def _requests_to_follow(self, response):
if not isinstance(response, HtmlResponse):
return
seen = set()
for rule_index, rule in enumerate(self._rules):
links = [lnk for lnk in rule.link_extractor.extract_links(response)
if lnk not in seen]
for link in rule.process_links(links):
seen.add(link)
request = self._build_request(rule_index, link)
yield rule._process_request(request, response)
The scrapy.http.Request class is the base class for requests in the Scrapy framework. Its parameters are as follows:
url (string) - the URL of this request
callback (callable) - the callback function
method (string) - the HTTP method of this request; defaults to 'GET'
meta (dict) - the initial value of the Request.meta attribute
body (str or unicode) - the request body; defaults to an empty string if not given
headers (dict) - the request headers
cookies - the request cookies
encoding (string) - the encoding of this request (defaults to 'utf-8'); it is used to percent-encode the URL and to convert the body to str (if unicode is given)
priority (int) - the priority of this request (defaults to 0); the larger the number, the higher the priority
dont_filter (boolean) - indicates that the scheduler should not filter this request
errback (callable) - a function called if any exception is raised while processing the request
flags (list) - flags attached to the request, useful for logging or similar purposes
from scrapy.http import Request,FormRequest
"""
class Request(object_ref):
( url, callback=None, method='GET', headers=None, body=None,
cookies=None, meta=None, encoding='utf-8', priority=0, #值越大 ,优先级越高
dont_filter=False, errback=None, flags=None, cb_kwargs=None):
"""
req=Request("http://www.baidu.com",headers={"spider":666},meta={"name":"爬虫"})
#功能构造请求
#参数
#请求对象
print(req.url)
print(req.method)
print(req.headers)
print(req.meta)
rer=req.replace(url="https://www.baidu.com")
print(rer.url)
GET and POST are the most common kinds of request. The Scrapy framework has a built-in FormRequest class
that extends the base Request class and adds support for handling HTML forms.
FormRequest is commonly used when sending POST requests with Scrapy, because it makes them easier to build. For concrete usage, see the GitHub login example below.
class FormRequest(Request):
valid_form_methods = ['GET', 'POST']
def __init__(self, *args, **kwargs):
formdata = kwargs.pop('formdata', None)
if formdata and kwargs.get('method') is None:
kwargs['method'] = 'POST'
super(FormRequest, self).__init__(*args, **kwargs)
if formdata:
items = formdata.items() if isinstance(formdata, dict) else formdata
querystr = _urlencode(items, self.encoding)
if self.method == 'POST':
self.headers.setdefault(b'Content-Type', b'application/x-www-form-urlencoded')
self._set_body(querystr)
else:
self._set_url(self.url + ('&' if '?' in self.url else '?') + querystr)
The scrapy.http.Response class takes the following parameters:
url (string) - the URL of this response
status (integer) - the HTTP status of the response; defaults to 200
headers (dict) - the response headers; dict values can be strings (for single-valued headers) or lists (for multi-valued headers)
body (bytes) - the response body; to access the decoded text as str (unicode in Python 2), use response.text on an encoding-aware Response subclass such as TextResponse
flags (list) - a list containing the initial value of the Response.flags attribute; if given, the list is shallow-copied
request (Request object) - the initial value of the Response.request attribute; this is the Request that produced this response
Attributes and methods
url - a string containing the URL of this request; the attribute is read-only, use replace() to change the URL
method - a string with the HTTP method of the request
headers - a dictionary-like object containing the request headers
body - the request body as str; the attribute is read-only, use replace() to change it
meta - a dict containing arbitrary metadata for this request
copy() - returns a new request that is a copy of this one
replace([url, method, headers, body, cookies, meta, encoding, dont_filter, callback, errback]) - returns a request with the given fields updated
from scrapy.http.response import Response,text
"""
class Response()
功能:构造response 对象 参数 返回值:response 对象
url, status=200, headers=None, body=b'', flags=None, request=None, certificate=None)
"""
res=Response("http://www.baidu.com",request=req)
print("url",res.url)
print("状态码",res.status)
print("响应头")
print("响应内容",res.body)
print("请求",res.request)
print("meta",res.meta)
Scrapy provides a logger on every Spider instance that can be accessed and used directly.
import scrapy
class BdSpider(scrapy.Spider):
name = 'bd'
allowed_domains = ['www.baidu.com']
start_urls = ['http://www.baidu.com/']
def parse(self, response):
self.logger.warning("可能会有错误")
print(response)
Custom logger
import logging
import scrapy
logger = logging.getLogger('mycustomlogger')
class BdSpider(scrapy.Spider):
name = 'bd'
allowed_domains = ['www.baidu.com']
start_urls = ['http://www.baidu.com/']
def parse(self, response):
logger.info('Parse function called on -%s', response.url)
self.logger.warning("可能会有错误")
print(response)
Of course you can also log through Python's logging module directly, e.g. logging.warning('This is a warning!'). But to make later maintenance easier, we can create separate loggers to wrap our messages.
import logging
logger = logging.getLogger()
logger.warning('This is a warning')
# to use a different logger, just get it by name with logging.getLogger:
logger = logging.getLogger('mycustomlogger')
logger.warning('This is a warning')
# finally, you can use the __name__ variable (the current module path) so that every module gets its own logger:
logger = logging.getLogger(__name__)
logger.warning('This is a warning')
LOG_FILE - the log output file; if None, logs are printed to the console
LOG_ENABLED - whether logging is enabled; defaults to True
LOG_ENCODING - the log encoding; defaults to utf-8
LOG_LEVEL - the log level; defaults to DEBUG
LOG_FORMAT - the log format
LOG_DATEFORMAT - the log date format
LOG_STDOUT - defaults to False; if True, all standard output is also written to the log
LOG_SHORT_NAMES - short log names; defaults to False; if True, component names are not printed
Typical project settings:
LOG_FILE = 'logfile_name'
LOG_LEVEL = 'INFO'
LOG_FILE = 'log.log'  # log output file; if None, logs go to the console
LOG_ENABLED = True  # whether logging is enabled; default True
LOG_ENCODING = 'utf-8'  # log encoding; default utf-8
LOG_LEVEL = 'INFO'  # log level; default DEBUG
LOG_FORMAT = '%(asctime)s [%(name)s] %(levelname)s: %(message)s'  # log format
LOG_DATEFORMAT = '%Y-%m-%d %H:%M:%S'  # log date format
LOG_STDOUT = False  # default False; if True, all standard output is written to the log
LOG_SHORT_NAMES = False  # default False; if True, component names are not logged
# typical project settings:
LOG_FILE = 'logfile_name'
LOG_LEVEL = 'INFO'
Logging in requires submitting the username and password to https://github.com/session,
together with some other parameters:
form_data={
"commit": "Sign in",
"authenticity_token":authenticity_token ,
"ga_id": "285586600.1585549705",
"login": "[email protected]",#用户名
"password":P,#密码
"webauthn-support": "supported",
"webauthn-iuvpaa-support": "unsupported",
"return_to": "",
required_field: "",
"timestamp": timestamp,
"timestamp_secret": timestamp_secret,
}
Where do the other parameters come from? Some are present in the page visited just before;
others look as if they might be generated dynamically by JS.
After analysis and testing, the parameters turn out to come from the static page visited earlier (the login page).
import scrapy
from github.spiders.password.pw import P
class LoginSpider(scrapy.Spider):
name = 'login'
allowed_domains = ['github.com']
start_urls = ['http://github.com/login']#首先访问登录页面
def parse(self, response):
authenticity_token=response.xpath('//input[@name="authenticity_token"]/@value').extract()[0]
timestamp=response.xpath('//input[@name="timestamp"]/@value').extract()[0]
timestamp_secret=response.xpath('//input[@name="timestamp_secret"]/@value').extract()[0]
required_field=response.xpath('//input[@type="text"]/@name').extract()[1]
form_data={
"commit": "Sign in",
"authenticity_token":authenticity_token ,
"ga_id": "285586600.1585549705",
"login": "[email protected]",
"password":P,
"webauthn-support": "supported",
"webauthn-iuvpaa-support": "unsupported",
"return_to": "",
required_field: "",
"timestamp": timestamp,
"timestamp_secret": timestamp_secret,
}
yield scrapy.FormRequest(url="https://github.com/session",callback=self.verify_login,
formdata=form_data)
def verify_login(self,response):
if "youthsnow" in response.text:
print("登录成功")
else:
print("不成功")
There are no special settings here, so this part can be skimmed.
# -*- coding: utf-8 -*-
BOT_NAME = 'github'
SPIDER_MODULES = ['github.spiders']
NEWSPIDER_MODULE = 'github.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'github (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36",
}
LOG_FORMAT='%(asctime)s [%(name)s] %(levelname)s: %(message)s'
LOG_DATEFORMAT='%Y'
Scrapy's built-in middlewares are listed in the DOWNLOADER_MIDDLEWARES_BASE setting.
You can view them with the command scrapy settings --get=DOWNLOADER_MIDDLEWARES_BASE.
User-defined middlewares must be registered in the DOWNLOADER_MIDDLEWARES setting.
That setting is a dict: the keys are middleware class paths and the values are the middleware orders, positive integers from 0 to 1000; the smaller the number, the closer the middleware is to the engine.
For example, "scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware": 100 - the smaller the number, the higher the priority, i.e. the closer to the engine.
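For example, assuming a project named baidu whose custom classes (shown later in this section) live in baidu/middlewares.py, the registration in settings.py could look like this:
DOWNLOADER_MIDDLEWARES = {
    "baidu.middlewares.User_AgentDownloaderMiddleware": 543,
    "baidu.middlewares.MyproxyDownloaderMiddleware": 544,
    # set a built-in middleware to None to disable it
    "scrapy.downloadermiddlewares.useragent.UserAgentMiddleware": None,
}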
path–>scrapy/settings/default_settings.py
DOWNLOADER_MIDDLEWARES_BASE = {
    # Engine side
"scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware": 100, # 机器人协议
"scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware": 300, # http身份验证中间件
"scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware": 350, # 下载超时中间件
"scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware": 400, # 默认请求头中间件
"scrapy.downloadermiddlewares.useragent.UserAgentMiddleware": 500, # 用户代理中间件
"scrapy.downloadermiddlewares.retry.RetryMiddleware": 550, # 重新尝试中间件
"scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware": 560, # 基于元片段html标签找到“ AJAX可抓取”页面变体的中间件。
"scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware": 580, # 而MetaRefreshMiddleware 始终使用字符串作为原因
"scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware": 590, # 该中间件允许从网站发送/接收压缩(gzip,deflate)流量。
"scrapy.downloadermiddlewares.redirect.RedirectMiddleware": 600, # 该中间件根据响应状态处理请求的重定向。
"scrapy.downloadermiddlewares.cookies.CookiesMiddleware": 700, # cookie中间件
"scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware": 750, # IP代理中间件
"scrapy.downloadermiddlewares.stats.DownloaderStats": 850, # 用于存储通过它的所有请求,响应和异常的统计信息
"scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware": 900 # 缓存中间件
# Downloader side
}
#Commonly used built-in middlewares:
#CookiesMiddleware - cookie support, toggled with COOKIES_ENABLED
#HttpProxyMiddleware - HTTP proxy support, configured by setting request.meta['proxy']
#UserAgentMiddleware - user agent middleware
#For the other middlewares, see the official docs: https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
See the official documentation for details:
https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
Each middleware is a Python class that defines one or more of the following methods:
process_request(request, spider) - processes requests; called for every request that passes through the middleware
process_response(request, response, spider) - processes responses; called for every response that passes through the middleware
process_exception(request, exception, spider) - called when an exception is raised while handling a request
from_crawler(cls, crawler) - class method used to create the middleware instance from the crawler
One thing to keep in mind: pay close attention to each method's return value, because what is returned determines where the request or response goes next.
class BaiduDownloaderMiddleware(object):
# Not all methods need to be defined. If a method is not defined,
# scrapy acts as if the downloader middleware does not modify the
# passed objects.
@classmethod
def from_crawler(cls, crawler):
# This method is used by Scrapy to create your spiders.
s = cls()
crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
return s
def process_request(self, request, spider):
# Called for each request that goes through the downloader
#处理请求 参数 request spider对象
# middleware.
# Must either: 以下必选其一
# - return None: continue processing this request #返回None request 被继续交个下一个中间件处理
# - or return a Response object #返回response对象 不会交给下一个precess_request 而是交给下载器
# - or return a Request object #返回一个request对象 直接交给引擎处理
# - or raise IgnoreRequest: process_exception() methods of #抛出异常 process_exception处理
# installed downloader middleware will be called
return None
def process_response(self, request, response, spider):
# Called with the response returned from the downloader.
#处理响应 request, response, spider
# Must either;
# - return a Response object #继续交给下一中间件处理
# - return a Request object #返回一个request对象 直接交给引擎处理
# - or raise IgnoreRequest #抛出异常 process_exception处理
return response
def process_exception(self, request, exception, spider):
# Called when a download handler or a process_request()
# (from other downloader middleware) raises an exception.
#处理异常
# Must either:
# - return None: continue processing this exception #继续调用(下一个)其他中间件的process_exception
# - return a Response object: stops process_exception() chain #返回response 停止调用其他中间件的process_exception
# - return a Request object: stops process_exception() chain #返回request 直接交给引擎处理
pass
def spider_opened(self, spider):
spider.logger.info('Spider opened: %s' % spider.name)
#user_agent
user_agent_list = [
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
"(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
"Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
"(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
"(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
"(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
"(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
"(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
"(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
"(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
"(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
"(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]
import random

from .settings import user_agent_list
class User_AgentDownloaderMiddleware(object):
    def process_request(self, request, spider):
        request.headers["User-Agent"] = random.choice(user_agent_list)  # pick a random User-Agent
# Called for each request that goes through the downloader
#处理请求 参数 request spider对象
# middleware.
# Must either: 以下必选其一
# - return None: continue processing this request #返回None request 被继续交个下一个中间件处理
# - or return a Response object #返回response对象 不会交给下一个precess_request 而是交给下载器
# - or return a Request object #返回一个request对象 直接交给引擎处理
# - or raise IgnoreRequest: process_exception() methods of #抛出异常 process_exception处理
# installed downloader middleware will be called
return None
(This is only an example: the proxies above almost certainly no longer work, and we do not recommend hunting for free proxies anyway, as they are not safe.)
#IP proxy pool
IPPOOL=[
{"ipaddr":"61.129.70.131:8080"},
{"ipaddr":"61.152.81.193:9100"},
{"ipaddr":"120.204.85.29:3128"},
{"ipaddr":"219.228.126.86:8123"},
{"ipaddr":"61.152.81.193:9100"},
{"ipaddr":"218.82.33.225:53853"},
{"ipaddr":"223.167.190.17:42789"}
]
import random

from .settings import IPPOOL
class MyproxyDownloaderMiddleware(object):
    # goal: rotate among several proxies
    # set the proxy through request.meta
def process_request(self, request, spider):
proxyip=random.choice(IPPOOL)
request.meta["proxy"]="http://"+proxyip["ipaddr"]#http://61.129.70.131:8080
# Must either: 以下必选其一
# - return None: continue processing this request #返回None request 被继续交个下一个中间件处理
# - or return a Response object #返回response对象 不会交给下一个precess_request 而是交给下载器
# - or return a Request object #返回一个request对象 直接交给引擎处理
# - or raise IgnoreRequest: process_exception() methods of #抛出异常 process_exception处理
# installed downloader middleware will be called
return None
BOT_NAME = 'baidu'  # the Scrapy project name
SPIDER_MODULES = ['baidu.spiders']  # the spider modules
NEWSPIDER_MODULE = 'baidu.spiders'  # where spiders created with the genspider command are placed
The project name; the default USER_AGENT is built from it, and it is also used as the logger name
#BOT_NAME = 'baidu'
Path of the spider application
#SPIDER_MODULES = ['Amazon.spiders']
#NEWSPIDER_MODULE = 'Amazon.spiders'
The client User-Agent request header
USER_AGENT = ' '
Whether to obey the robots.txt protocol
#Obey robots.txt rules
ROBOTSTXT_OBEY = False
Whether cookies are supported (handled through cookiejar); enabled by default
#Disable cookies (enabled by default)
#COOKIES_ENABLED = False
Telnet is used to inspect and control the running crawler: connect with telnet ip port, then issue commands
#TELNETCONSOLE_ENABLED = False
#TELNETCONSOLE_HOST = '127.0.0.1'
#TELNETCONSOLE_PORT = [6023,]
The default request headers Scrapy uses when sending HTTP requests
#DEFAULT_REQUEST_HEADERS = {
#'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
}
Retrying failed requests (retry)
Scrapy ships with the scrapy.downloadermiddlewares.retry.RetryMiddleware middleware; to control how retries work, configure the following:
Parameters:
#RETRY_ENABLED: whether retrying is enabled
#RETRY_TIMES: the number of retries
#RETRY_HTTP_CODES: which HTTP status codes trigger a retry; the defaults are 500, 502, 503, 504 and 408; other problems such as connection timeouts are also retried automatically (an example follows below)
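For example, in settings.py (the values here are only an illustration):
RETRY_ENABLED = True
RETRY_TIMES = 5                                # retry each failed request up to 5 times
RETRY_HTTP_CODES = [500, 502, 503, 504, 408]   # status codes that trigger a retry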
The maximum total number of concurrent requests handled by the downloader; default 16
#CONCURRENT_REQUESTS = 32
The maximum number of concurrent requests per domain; default 8
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
The maximum number of concurrent requests per single IP; default 0, which means no limit. Two caveats:
#I. If non-zero, CONCURRENT_REQUESTS_PER_DOMAIN is ignored, i.e. concurrency is limited per IP rather than per domain
#II. This setting also affects DOWNLOAD_DELAY: if non-zero, the download delay is applied per IP rather than per domain  #CONCURRENT_REQUESTS_PER_IP = 16
If auto-throttling is not enabled, this is a fixed value: the number of seconds to wait between requests to the same site
#DOWNLOAD_DELAY = 3
Introduction
from scrapy.contrib.throttle import AutoThrottle
http://scrapy.readthedocs.io/en/latest/topics/autothrottle.html#topics-autothrottle
Design goals:
1. Be nicer to sites than the fixed default download delay
2. Automatically adjust Scrapy to the optimal crawl speed, so the user does not have to tune the download delay; the user only defines the maximum allowed concurrency and the extension does the rest
How is it implemented?
In Scrapy, the download latency is measured as the time between establishing the TCP connection and receiving the HTTP headers. Note that accurately measuring these latencies in a cooperative multitasking environment is quite hard, because Scrapy may be busy processing spider callbacks or unable to download; still, they are a reasonable measure of how busy Scrapy (and the server) is, and the extension is written on that premise.
Throttling algorithm
The auto-throttle algorithm adjusts the download delay according to the following rules:
spiders always start with a download delay of AUTOTHROTTLE_START_DELAY
when a response is received, the target download delay for the site = latency of that response / AUTOTHROTTLE_TARGET_CONCURRENCY
the download delay for the next request is set to the average of the target download delay and the previous download delay
responses that are not 200 are not allowed to decrease the delay
the download delay cannot become lower than DOWNLOAD_DELAY or higher than AUTOTHROTTLE_MAX_DELAY (a worked example follows)
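A worked example of these rules (the numbers are made up):
latency = 0.6                                 # seconds from TCP connect to receiving the headers
target_concurrency = 2.0                      # AUTOTHROTTLE_TARGET_CONCURRENCY
target_delay = latency / target_concurrency   # 0.3 s
next_delay = (target_delay + 0.5) / 2         # 0.4 s, averaged with the previous delay of 0.5 s
# the result is then clamped between DOWNLOAD_DELAY and AUTOTHROTTLE_MAX_DELAY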
Configuration
Enable with True; default is False
#AUTOTHROTTLE_ENABLED = True
Initial delay
#AUTOTHROTTLE_START_DELAY = 5
Minimum delay
#DOWNLOAD_DELAY = 3
Maximum delay
#AUTOTHROTTLE_MAX_DELAY = 10
The average number of requests sent in parallel per second; it should not be higher than CONCURRENT_REQUESTS_PER_DOMAIN or CONCURRENT_REQUESTS_PER_IP. Raising it increases throughput but hits the target site harder; lowering it is more "polite" to the target site.
At any particular moment the number of concurrent requests Scrapy makes may be higher or lower than this value; it is a target the crawler tries to reach, not a hard limit.
AUTOTHROTTLE_TARGET_CONCURRENCY = 16.0
Debugging
#AUTOTHROTTLE_DEBUG = True
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
default_settings.py
Maximum crawl depth allowed; the current depth can be read from meta; 0 means unlimited depth
#DEPTH_LIMIT = 3
When crawling, 0 means depth-first LIFO (the default); 1 means breadth-first FIFO
Last in, first out: depth-first
#DEPTH_PRIORITY = 0
#SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
#SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
First in, first out: breadth-first
#DEPTH_PRIORITY = 1
#SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
#SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'
Scheduler queue
#SCHEDULER = 'scrapy.core.scheduler.Scheduler'
#from scrapy.core.scheduler import Scheduler
URL de-duplication
#DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'
Enable or disable spider middlewares
#See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {'baidu.middlewares.AmazonSpiderMiddleware': 543,}
Enable or disable downloader middlewares
See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html DOWNLOADER_MIDDLEWARES = { # 'baidu.middlewares.DownMiddleware1': 543, }
Enable or disable extensions # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {#'scrapy.extensions.telnet.TelnetConsole': None,#}
Configure item pipelines
#See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {'baidu.pipelines.CustomPipeline': 200, }
"""
Enabling the cache
Purpose: cache requests that have already been sent, together with their responses, so they can be reused later
from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
from scrapy.extensions.httpcache import DummyPolicy
from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
Whether the cache is enabled
#HTTPCACHE_ENABLED = True
Cache policy: every request is cached; the next time the same request is made, the cached copy is returned directly
#HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
Cache policy:
caches according to HTTP response headers such as Cache-Control and Last-Modified
#HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"
Cache expiration time
#HTTPCACHE_EXPIRATION_SECS = 0
Cache storage directory
#HTTPCACHE_DIR = 'httpcache'
HTTP status codes to ignore when caching
#HTTPCACHE_IGNORE_HTTP_CODES = []
Cache storage backend
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
BOT_NAME - project name
CONCURRENT_ITEMS - maximum number of items processed concurrently; default 100
CONCURRENT_REQUESTS - maximum number of concurrent downloads
CONCURRENT_REQUESTS_PER_DOMAIN - maximum concurrency per domain
CONCURRENT_REQUESTS_PER_IP - maximum concurrency per IP
Downloading images is similar to downloading text data, except that what is downloaded and passed around is binary data.
import os
import re
import scrapy
from ..items import BaiduimageItem
class BdimgspiderSpider(scrapy.Spider):
name = 'bdimgspider'
# allowed_domains = ['xxx']
start_urls = [
'https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=-1&st=-1&fm=result&fr=&sf=1&fmq=1612326333694_R&pv=&ic=0&nc=1&z=&hd=&latest=©right=&se=1&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&sid=&word=ai']
page_url = "https://image.baidu.com/search/acjson?tn=resultjson_com&logid=8226068269596264088&ipn=rj&ct=201326592&is=&fp=result&queryWord=ai&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=©right=&word=ai&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&" \
"pn={}&rn=30&gsm=1e&1612329005808="
num=0
page_num=0
def parse(self, response):
#解析page页 获取 图片url
img_urls=re.findall('"thumbURL":"(.*?)"',response.text)
print(img_urls)
#发请求 获取图片数据
for index,img_url in enumerate(img_urls):
yield scrapy.Request(img_url,callback=self.get_img)
self.page_num+=1
if self.page_num==4:
return
page_url=self.page_url.format(self.page_num*30)
yield scrapy.Request(page_url,callback=self.parse)
def get_img(self,response):
print("图片数据",response.status)
img_data=response.body #图片二进制数据
#一直接存储
if not os.path.exists("dirspider"):
os.mkdir("dirspider")
filename="dirspider/%s.jpg"%str(self.num)# 0 1 2 3
self.num+=1
with open(filename,"wb") as f:
f.write(img_data)
#使用管道存储
item=BaiduimageItem()
item["img_data"]=img_data
yield item
class BaiduimageItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
img_data=scrapy.Field()
class BaiduimagePipeline(object):
num=0
def process_item(self, item, spider):
if isinstance(item,BaiduimageItem):
if not os.path.exists("dirspider_pipe"):
os.mkdir("dirspider_pipe")
filename="dirspider_pipe/%s.jpg"%str(self.num)# 0 1 2 3
self.num+=1
with open(filename,"wb") as f:
f.write(item["img_data"])
return item
Just remember to enable the pipeline in ITEM_PIPELINES.
# -*- coding: utf-8 -*-
BOT_NAME = 'baiduimage'
SPIDER_MODULES = ['baiduimage.spiders']
NEWSPIDER_MODULE = 'baiduimage.spiders'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'baiduimage.pipelines.BaiduimagePipeline': 300,
}
The key parts of downloading images with the media pipeline class.
import re
import scrapy
from ..items import BdImagePipeItem
class BdimgpipeSpider(scrapy.Spider):
name = 'bdimgpipe'
# allowed_domains = ['xxx']
start_urls = ['https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=-1&st=-1&sf=1&fmq=&pv=&ic=0&nc=1&z=&se=1&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&fm=index&pos=history&word=%E7%8C%AB%E5%92%AA']
page_url = "https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=%E7%8C%AB%E5%92%AA&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=©right=&word=%E7%8C%AB%E5%92%AA&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&" \
"pn={}&rn=30&gsm=5a&1590563737742="
page_num = 0
def parse(self, response):
# 解析page页 获取 图片url
img_urls = re.findall('"thumbURL":"(.*?)"', response.text)
print(img_urls)
item=BdImagePipeItem()
item["image_urls"]=img_urls
yield item
self.page_num += 1
if self.page_num == 3:
return
page_url = self.page_url.format(self.page_num * 30)
yield scrapy.Request(page_url, callback=self.parse)
class BdImagePipeItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
image_urls=scrapy.Field()
from scrapy.pipelines.images import ImagesPipeline
class BdImagePipeline(ImagesPipeline):
pass
# -*- coding: utf-8 -*-
BOT_NAME = 'baiduimage'
SPIDER_MODULES = ['baiduimage.spiders']
NEWSPIDER_MODULE = 'baiduimage.spiders'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'baiduimage.pipelines.BdImagePipeline': 100,
}
#media pipeline storage location; if the path does not exist it is created automatically
IMAGES_STORE ='D:/data_analysis/python_spider/tzSpider/10_scrapy6/baiduimage/ImagePipedir'
IMAGES_THUMBS = {'small': (50, 50), 'big':(270, 270)}  # thumbnail settings
The media pipelines implement the following features:
avoiding re-downloading media that was downloaded recently
specifying the storage location (a filesystem directory, an Amazon S3 bucket, a Google Cloud Storage bucket)
The images pipeline has some extra image-processing features; the relevant settings are:
ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}  # enable the pipeline
FILES_STORE = '/path/to/valid/dir'  # storage location for the files pipeline
IMAGES_STORE = '/path/to/valid/dir'  # storage location for the images pipeline
FILES_URLS_FIELD = 'field_name_for_your_files_urls'  # custom file URLs field
FILES_RESULT_FIELD = 'field_name_for_your_processed_files'  # custom result field
IMAGES_URLS_FIELD = 'field_name_for_your_images_urls'  # custom image URLs field
IMAGES_RESULT_FIELD = 'field_name_for_your_processed_images'  # result field
FILES_EXPIRES = 90  # file expiration; default 90 days
IMAGES_EXPIRES = 90  # image expiration; default 90 days
IMAGES_THUMBS = {'small': (50, 50), 'big': (270, 270)}  # thumbnail sizes
IMAGES_MIN_HEIGHT = 110  # filter out images shorter than this
IMAGES_MIN_WIDTH = 110  # filter out images narrower than this
MEDIA_ALLOW_REDIRECTS = True  # whether to follow redirects
Tip: the inline comments below mark the key points.
"""
Images Pipeline
See documentation in topics/media-pipeline.rst
"""
import functools
import hashlib
from io import BytesIO
from PIL import Image
from scrapy.utils.misc import md5sum
from scrapy.utils.python import to_bytes
from scrapy.http import Request
from scrapy.settings import Settings
from scrapy.exceptions import DropItem
#TODO: from scrapy.pipelines.media import MediaPipeline
from scrapy.pipelines.files import FileException, FilesPipeline
class NoimagesDrop(DropItem):
"""Product with no images exception"""
class ImageException(FileException):
"""General image error exception"""
class ImagesPipeline(FilesPipeline):
"""Abstract pipeline that implement the image thumbnail generation logic
"""
MEDIA_NAME = 'image'
# Uppercase attributes kept for backward compatibility with code that subclasses
# ImagesPipeline. They may be overridden by settings.
MIN_WIDTH = 0
MIN_HEIGHT = 0
EXPIRES = 90
THUMBS = {}
DEFAULT_IMAGES_URLS_FIELD = 'image_urls'
DEFAULT_IMAGES_RESULT_FIELD = 'images'
#解析settings里的配置字段
def __init__(self, store_uri, download_func=None, settings=None):
super(ImagesPipeline, self).__init__(store_uri, settings=settings,
download_func=download_func)
if isinstance(settings, dict) or settings is None:
settings = Settings(settings)
resolve = functools.partial(self._key_for_pipe,
base_class_name="ImagesPipeline",
settings=settings)
self.expires = settings.getint(
resolve("IMAGES_EXPIRES"), self.EXPIRES
)
if not hasattr(self, "IMAGES_RESULT_FIELD"):
self.IMAGES_RESULT_FIELD = self.DEFAULT_IMAGES_RESULT_FIELD
if not hasattr(self, "IMAGES_URLS_FIELD"):
self.IMAGES_URLS_FIELD = self.DEFAULT_IMAGES_URLS_FIELD
self.images_urls_field = settings.get(
resolve('IMAGES_URLS_FIELD'),
self.IMAGES_URLS_FIELD
)
self.images_result_field = settings.get(
resolve('IMAGES_RESULT_FIELD'),
self.IMAGES_RESULT_FIELD
)
self.min_width = settings.getint(
resolve('IMAGES_MIN_WIDTH'), self.MIN_WIDTH
)
self.min_height = settings.getint(
resolve('IMAGES_MIN_HEIGHT'), self.MIN_HEIGHT
)
self.thumbs = settings.get(
resolve('IMAGES_THUMBS'), self.THUMBS
)
@classmethod
def from_settings(cls, settings):
s3store = cls.STORE_SCHEMES['s3']
s3store.AWS_ACCESS_KEY_ID = settings['AWS_ACCESS_KEY_ID']
s3store.AWS_SECRET_ACCESS_KEY = settings['AWS_SECRET_ACCESS_KEY']
s3store.AWS_ENDPOINT_URL = settings['AWS_ENDPOINT_URL']
s3store.AWS_REGION_NAME = settings['AWS_REGION_NAME']
s3store.AWS_USE_SSL = settings['AWS_USE_SSL']
s3store.AWS_VERIFY = settings['AWS_VERIFY']
s3store.POLICY = settings['IMAGES_STORE_S3_ACL']
gcs_store = cls.STORE_SCHEMES['gs']
gcs_store.GCS_PROJECT_ID = settings['GCS_PROJECT_ID']
gcs_store.POLICY = settings['IMAGES_STORE_GCS_ACL'] or None
ftp_store = cls.STORE_SCHEMES['ftp']
ftp_store.FTP_USERNAME = settings['FTP_USER']
ftp_store.FTP_PASSWORD = settings['FTP_PASSWORD']
ftp_store.USE_ACTIVE_MODE = settings.getbool('FEED_STORAGE_FTP_ACTIVE')
store_uri = settings['IMAGES_STORE']
return cls(store_uri, settings=settings)
def file_downloaded(self, response, request, info):
return self.image_downloaded(response, request, info)
#图片下载
def image_downloaded(self, response, request, info):
checksum = None
for path, image, buf in self.get_images(response, request, info):
if checksum is None:
buf.seek(0)
checksum = md5sum(buf)
width, height = image.size
self.store.persist_file(
path, buf, info,
meta={'width': width, 'height': height},
headers={'Content-Type': 'image/jpeg'})
return checksum
#图片获取 图片大小的过滤 #缩略图的生成
def get_images(self, response, request, info):
path = self.file_path(request, response=response, info=info)
orig_image = Image.open(BytesIO(response.body))
width, height = orig_image.size
if width < self.min_width or height < self.min_height: # filter out images that are too small
raise ImageException("Image too small (%dx%d < %dx%d)" %
(width, height, self.min_width, self.min_height))
image, buf = self.convert_image(orig_image)
yield path, image, buf
for thumb_id, size in self.thumbs.items(): # generate thumbnails
thumb_path = self.thumb_path(request, thumb_id, response=response, info=info)
thumb_image, thumb_buf = self.convert_image(image, size)
yield thumb_path, thumb_image, thumb_buf
# Convert the image to a common format (RGB JPEG)
def convert_image(self, image, size=None):
if image.format == 'PNG' and image.mode == 'RGBA':
background = Image.new('RGBA', image.size, (255, 255, 255))
background.paste(image, image)
image = background.convert('RGB')
elif image.mode == 'P':
image = image.convert("RGBA")
background = Image.new('RGBA', image.size, (255, 255, 255))
background.paste(image, image)
image = background.convert('RGB')
elif image.mode != 'RGB':
image = image.convert('RGB')
if size:
image = image.copy()
image.thumbnail(size, Image.ANTIALIAS)
buf = BytesIO()
image.save(buf, 'JPEG')
return image, buf
def get_media_requests(self, item, info): # generate the media requests; can be overridden
# take the image urls from the item, turn them into Requests and hand them to the engine
return [Request(x) for x in item.get(self.images_urls_field, [])]
def item_completed(self, results, item, info): # collects the download results into the item; can be overridden
if isinstance(item, dict) or self.images_result_field in item.fields:
item[self.images_result_field] = [x for ok, x in results if ok]
return item
def file_path(self, request, response=None, info=None):
image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
# the url is unique, so its hash is also unique, so the image file name is unique
return 'full/%s.jpg' % (image_guid)
# Newer Scrapy versions implement the same two methods as below
# (these variants additionally require `from contextlib import suppress`
# and `from itemadapter import ItemAdapter`); both can be overridden.
def item_completed(self, results, item, info):
with suppress(KeyError):
ItemAdapter(item)[self.images_result_field] = [x for ok, x in results if ok]
return item
def file_path(self, request, response=None, info=None, *, item=None):
image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
# the url is a globally unique resource locator, so after hashing the image file name is unique
return f'full/{image_guid}.jpg'
def thumb_path(self, request, thumb_id, response=None, info=None):
# storage path of the thumbnails
thumb_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
return 'thumbs/%s/%s.jpg' % (thumb_id, thumb_guid)
The methods we can override in the media pipelines:
get_media_requests(item, info): generates requests from the item's file_urls/image_urls field
def get_media_requests(self, item, info):
for file_url in item['file_urls']:
yield scrapy.Request(file_url)
item_completed(results, item, info): called once all media requests for an item have completed (or failed)
from scrapy.exceptions import DropItem
def item_completed(self, results, item, info):
file_paths = [x['path'] for ok, x in results if ok]
if not file_paths:
raise DropItem("Item contains no files")
item['file_paths'] = file_paths
return item
import re
import scrapy
from ..items import BdImagePipeItem
class BdimgpipeSpider(scrapy.Spider):
name = 'bdimgpipe'
# allowed_domains = ['xxx']
start_urls = ['https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=-1&st=-1&sf=1&fmq=&pv=&ic=0&nc=1&z=&se=1&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&fm=index&pos=history&word=%E7%8C%AB%E5%92%AA']
page_url = "https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=%E7%8C%AB%E5%92%AA&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=&copyright=&word=%E7%8C%AB%E5%92%AA&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&" \
"pn={}&rn=30&gsm=5a&1590563737742="
page_num = 0
def parse(self, response):
# parse the page and extract the image urls
img_urls = re.findall('"thumbURL":"(.*?)"', response.text)
print(img_urls)
item=BdImagePipeItem()
item["image_urls"]=img_urls
yield item
self.page_num += 1
if self.page_num == 3:
return
page_url = self.page_url.format(self.page_num * 30)
yield scrapy.Request(page_url, callback=self.parse)
class BdImagePipeItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
image_urls=scrapy.Field()
Its workflow is as follows (a minimal usage sketch follows this list):
1. In the spider you return an item and put the desired urls into its file_urls field.
2. The item leaves the spider and enters the item pipeline.
3. When the item reaches the files pipeline, the urls in the file_urls field are scheduled for download using the standard Scrapy scheduler and downloader (which means the scheduler and downloader middlewares are reused), but with a higher priority, so they are processed before other pages are crawled. The item stays "locked" at this pipeline stage until the files have finished downloading (or failed for some reason).
4. Once the files are downloaded, another field (files) is populated with the results. This field contains a list of dicts with information about each downloaded file, such as the path it was saved to, the original url (taken from the file_urls field) and the file checksum. The entries in the files field keep the same order as the original file_urls field. If a file fails to download, an error is logged and that file will not appear in the files field.
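As a minimal sketch of this workflow (all names here are illustrative, not taken from the original project), the item only needs a file_urls input field and a files output field, and FilesPipeline plus a storage path are enabled in settings:
import scrapy

class FileDemoItem(scrapy.Item):
    file_urls = scrapy.Field()  # input: list of urls to download
    files = scrapy.Field()      # output: filled in by FilesPipeline with path/url/checksum dicts

# settings.py (sketch)
# ITEM_PIPELINES = {'scrapy.pipelines.files.FilesPipeline': 1}
# FILES_STORE = 'D:/data/files'  # storage directory; created automatically if it does not exist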
import os
import scrapy
from itemadapter import ItemAdapter
from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem
from .settings import IMAGES_STORE
class BdImagePipeline(ImagesPipeline):
image_num=0
def get_media_requests(self, item, info):
print('item["image_urls"]',item["image_urls"])
for x in item["image_urls"]:
yield scrapy.Request(x)
def item_completed(self, results, item, info):
print("媒体管道类结果")
print(results)
# results[0][1]["path"]
for ok, x in results:
if ok:
print(x["path"])
images_path = [x["path"] for ok, x in results if ok]
for image_path in images_path:
# os.rename(IMAGES_STORE + "/" + image_path, IMAGES_STORE + "/" + str(self.image_num) + ".jpg")
os.rename(os.path.join(IMAGES_STORE, image_path), os.path.join(IMAGES_STORE, str(self.image_num) + ".jpg")) # the pipeline saves JPEG data, so keep a .jpg extension
self.image_num += 1
return item # pass the item on to any later pipelines
# -*- coding: utf-8 -*-
BOT_NAME = 'baiduimage'
SPIDER_MODULES = ['baiduimage.spiders']
NEWSPIDER_MODULE = 'baiduimage.spiders'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'baiduimage.pipelines.BdImagePipeline': 100,
}
# Media pipeline storage path; created automatically if it does not exist
IMAGES_STORE ='D:/data_analysis/python_spider/tzSpider/10_scrapy6/baiduimage/ImagePipedir'
IMAGES_THUMBS = {'small': (50, 50), 'big': (270, 270)} # thumbnail sizes
Distributed: a task is split into several sub-tasks that are deployed on different servers; it is a way of spreading work across machines.
Purpose: improve reliability and efficiency.
A distributed crawler assigns the work to different machines: several servers split one crawl target into parts and complete the download together.
redis download:
https://github.com/MicrosoftArchive/redis/releases
Installation guide: https://www.cnblogs.com/ttlx/p/11611086.html
Configuration
bind 0.0.0.0
Start the server and the command-line client
redis-server redis-cli
Basic redis commands (a short Python example follows this list):
set key value    write a value
get key          read a value
lpush key value1 ...    push one or more values onto the head of a list
keys *           list all keys
FLUSHDB          empty the current database
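The same commands can be issued from Python with the redis package (a sketch, assuming pip install redis and a server listening on localhost:6379; key names are arbitrary):
import redis

r = redis.StrictRedis(host="localhost", port=6379, db=0)
r.set("name", "scrapy")                              # set key value
print(r.get("name"))                                 # get key  -> b'scrapy'
r.lpush("urls", "https://movie.douban.com/top250")   # lpush key value1 ...
print(r.keys("*"))                                   # keys *
# r.flushdb()                                        # FLUSHDB - empties the current database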
Redis Desktop Manager is a desktop tool that lets you operate a redis database through a graphical interface.
Download and run the Redis Desktop Manager installer (.exe); the installation itself is not covered here.
Connect to the server
scrapy_redis is a Redis-based Scrapy component used for distributed deployment and development of scrapy projects.
Features:
Distributed crawling
You can start multiple spider instances that share a single redis request queue. This is best suited for broad crawls across many domains.
Distributed data processing
The scraped items are pushed into redis, which means you can start as many item-processing programs as you need.
Plug-and-play for scrapy
The scheduler + duplicate filter, item pipeline and base spiders are simple to use.
Scrapy-redis is normally installed with pip:
pip install scrapy-redis
scrapy-redis requires:
Python 2.7, 3.4 or 3.5 and above
Redis >= 2.8
Scrapy >= 1.1
Using scrapy-redis is very simple: the original scrapy project code barely needs to change; only a small amount of configuration is required.
Enable the redis scheduler so requests are stored in redis
Required
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
Make sure all spiders share the same duplicate filter through redis
Required
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
Shared item pipeline
ITEM_PIPELINES = {
'scrapy_redis.pipelines.RedisPipeline': 300
}
Host and port to use when connecting to Redis
Required
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
Do not clear the redis queues, allowing crawls to be paused and resumed
Optional: allows pausing; the redis data is not lost
SCHEDULER_PERSIST = True
Official documentation: https://scrapy-redis.readthedocs.io/en/stable/
spidername:items
list type; stores the item data scraped by the spiders, each entry is a JSON string.
spidername:dupefilter
set type; used to deduplicate the urls the spiders visit, each entry is a 40-character hash string of the request url.
spidername:start_urls
list type; receives the first url(s) when a redis spider starts.
spidername:requests
zset type; holds the requests waiting to be scheduled, each entry is a serialized request object.
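To actually start a redis spider, push the first url onto its start_urls key (the redis_key configured in the spider). For the douban example below this is a single command with a local redis-cli:
lpush db:start_urls https://movie.douban.com/top250
The same can be done from Redis Desktop Manager or from Python with the redis package.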
The spider only needs small changes; the more important part is the configuration in settings.
The code changes are small:
import RedisSpider
make the class inherit from RedisSpider
comment out start_urls and set redis_key = "db:start_urls", the key that starts the spider
# -*- coding: utf-8 -*-
import json
import scrapy
from ..items import DbItem # a safe dictionary-like container
from scrapy_redis.spiders import RedisSpider # 1. import RedisSpider
# class Db250Spider(scrapy.Spider): # original base class
class Db250Spider(RedisSpider): # 2. inherit from RedisSpider
name = 'db250' # spider name, required and unique
# allowed_domains = ['movie.douban.com'] # allowed domains, optional; if absent, any domain is allowed
# start_urls = ['https://movie.douban.com/top250'] # initial urls, normally required
redis_key = "db:start_urls" # 3. the key that starts the spider
page_num=0
def parse(self, response): # parse callback, handles the response data
print("project 1 parsed response",response.url)
print("project 1 parsed response",response.url)
print("project 1 parsed response",response.url)
node_list=response.xpath('//div[@class="info"]')
for node in node_list:
# film name
film_name=node.xpath("./div/a/span/text()").extract()[0]
# director information
director_name=node.xpath("./div/p/text()").extract()[0].strip()
# score
score=node.xpath('./div/div/span[@property="v:average"]/text()').extract()[0]
# store via the pipeline
item_pipe=DbItem() # create a DbItem object and use it like a dict
item_pipe['film_name']=film_name
item_pipe['director_name']=director_name
item_pipe['score']=score
# yield item_pipe
# print("film info",dict(item_pipe))
# film synopsis comes from the detail page
print("film info",item_pipe["film_name"])
detail_url = node.xpath('./div/a/@href').extract()[0]
yield scrapy.Request(detail_url,callback=self.get_detail,meta={"info":item_pipe})
# send the request for the next page
# build the url
if response.meta.get("num"):
self.page_num = response.meta["num"]
self.page_num += 1
if self.page_num==4:
return
page_url="https://movie.douban.com/top250?start={}&filter=".format(self.page_num*25)
yield scrapy.Request(page_url,meta={"num":self.page_num})
def get_detail(self,response):
item=DbItem()
# parse the detail page response
# 1. meta comes back together with the response 2. read it via response.meta 3. merge it into the new item with update()
info = response.meta["info"]
item.update(info)
# synopsis text
description=response.xpath('//div[@id="link-report"]//span[@property="v:summary"]/text()').extract()[0].strip()
# print('description',description)
item["description"]=description
print(item["film_name"])
# save via the pipeline
yield item
# Target data: film info plus the film synopsis; the synopsis lives in the source of the second-level page
# Request flow: visit the list page, extract the film info and the detail url, visit the detail url, extract the synopsis from its data
# Storage problem: results arrive out of order, so meta is used to pass data along and keep the fields of one film together
import scrapy
class DbItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
film_name=scrapy.Field()
director_name=scrapy.Field()
score=scrapy.Field()
description=scrapy.Field()
import json
class DbPipeline(object):
def open_spider(self,spider):
# called when the spider is opened
self.f=open("film_pipe1.txt","w",encoding="utf-8")
def process_item(self, item, spider):
json_data=json.dumps(dict(item),ensure_ascii=False)+"\n"
self.f.write(json_data)
return item
def close_spider(self,spider):
# called when the spider is closed
self.f.close() # close the file
The settings file needs to configure:
the shared scheduler SCHEDULER
the shared duplicate filter DUPEFILTER_CLASS
the shared storage area: redis
# -*- coding: utf-8 -*-
BOT_NAME = 'db'
SPIDER_MODULES = ['db.spiders']
NEWSPIDER_MODULE = 'db.spiders'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
DOWNLOAD_DELAY = 1
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
}
# Enable the redis scheduler so requests are stored in redis
# Required
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Make sure all spiders share the same duplicate filter through redis
# Required
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Shared pipeline
ITEM_PIPELINES = {
'scrapy_redis.pipelines.RedisPipeline':300,
'db.pipelines.DbPipeline': 200,
}
# Host and port to use when connecting to Redis
# Required
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
# Do not clear the redis queues, allowing crawls to be paused and resumed
# Optional: allows pausing; the redis data is not lost
SCHEDULER_PERSIST = True
# Log file configuration
# LOG_FILE="db_redis.log"
LOG_ENABLED=True
LOG_FORMAT='%(asctime)s [%(name)s] %(levelname)s: %(message)s'
LOG_DATEFORMAT='%Y'
LOG_ENCODING="utf-8"
LOG_LEVEL="INFO"
When a crawl finishes normally, the shutdown code in scrapy.Spider is executed:
def _set_crawler(self, crawler):
self.crawler = crawler
self.settings = crawler.settings
crawler.signals.connect(self.close, signals.spider_closed)
Scrapy's internal signal system fires the spider_idle signal when the spider has exhausted the requests in its internal queue.
The corresponding scrapy_redis code:
def spider_idle(self):
"""Schedules a request if available, otherwise waits."""
# XXX: Handle a sentinel to close the spider.
self.schedule_next_requests()
raise DontCloseSpider
Because the DontCloseSpider exception is raised, the crawler does not stop when spider_idle is triggered.
Create an extensions.py file inside the project package, in the same directory as settings.py.
Below, an extension is used to override the spider_idle behaviour:
import logging
import time
from scrapy import signals
from scrapy.exceptions import NotConfigured
logger = logging.getLogger(__name__)
class RedisSpiderSmartIdleClosedExceptions(object):
def __init__(self, idle_number, crawler):
self.crawler = crawler
self.idle_number = idle_number
self.idle_list = []
self.idle_count = 0
@classmethod
def from_crawler(cls, crawler):
# first check in the settings whether the extension should be enabled
# otherwise do not configure it
if not crawler.settings.getbool('MYEXT_ENABLED'):
raise NotConfigured
# only RedisSpider subclasses are supported (this check is optional)
if not 'redis_key' in crawler.spidercls.__dict__.keys():
raise NotConfigured('Only supports RedisSpider')
# read the number of idle intervals from the settings; the default is 360, i.e. 30 minutes
# idle_number = crawler.settings.getint('IDLE_NUMBER', 360)
idle_number = crawler.settings.getint('IDLE_NUMBER', 10)
# instantiate the extension object
ext = cls(idle_number, crawler)
# connect the extension to the signals, linking signals.spider_idle to the spider_idle() method
crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
crawler.signals.connect(ext.spider_idle, signal=signals.spider_idle)
# return the extension object
return ext
def spider_opened(self, spider):
logger.info("opened spider %s redis spider Idle, Continuous idle limit: %d", spider.name, self.idle_number)
def spider_closed(self, spider):
logger.info("closed spider %s, idle count %d , Continuous idle count %d",
spider.name, self.idle_count, len(self.idle_list))
def spider_idle(self, spider):
# two possible checks
self.idle_count += 1 # idle counter
self.idle_list.append(time.time()) # record the timestamp every time spider_idle fires
idle_list_len = len(self.idle_list) # number of consecutive idle triggers so far
# check whether the redis key still exists; once it is used up, the key is gone
if idle_list_len > 2 and spider.server.exists(spider.redis_key):
self.idle_list = [self.idle_list[-1]]
elif idle_list_len > self.idle_number:
# after the configured number of consecutive idle triggers, close the spider
logger.info('\n continued idle number exceed {} Times'
'\n meet the idle shutdown conditions, will close the reptile operation'
'\n idle start time: {}, close spider time: {}'.format(
self.idle_number, self.idle_list[0], self.idle_list[-1]))
# # Alternative check: if the gap between this trigger and the previous one is more than a few seconds, redis still has keys
# if idle_list_len > 2 and self.idle_list[-1] - self.idle_list[-2] > 6:
# self.idle_list = [self.idle_list[-1]]
#
# elif idle_list_len > self.idle_number:
# # after the configured number of consecutive idle triggers, close the spider
# logger.info('\n continued idle number exceed {} Times'
# '\n meet the idle shutdown conditions, will close the reptile operation'
# '\n idle start time: {}, close spider time: {}'.format(self.idle_number,
# self.idle_list[0], self.idle_list[0]))
# actually close the spider
self.crawler.engine.close_spider(spider, 'closespider_pagecount')
Add the following to settings.py; replace "scrapy" with your own project package name.
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
'scrapy.extensions.RedisSpiderSmartIdleClosedExceptions': 500,
}
MYEXT_ENABLED=True # enable the extension
IDLE_NUMBER=360 # number of idle intervals to wait; one interval is 5s, so 360 means 30 minutes
The overall logic is similar to the distributed douban crawl; the target site is the Zongheng novel site and the data is persisted in a MySQL database.
Project 1 stores into the novel and chapter tables.
Project 2 stores into the novel_copy and chapter_copy tables.
novel_from_redis and chapter_from_redis hold the data from the shared redis area, i.e. the combined data of project 1 and project 2.
The settings configuration together with the pipeline storage setup is what makes this distributed storage possible.
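The pipelines below assume that the novel and chapter tables already exist in the zhnovel database. The exact schema is not given in the original write-up; the following sketch infers the columns from the INSERT/UPDATE statements used later (the column types are assumptions):
import pymysql

conn = pymysql.connect(host="localhost", port=3306, user="root",
                       password="123789", db="zhnovel", charset="utf8")
cursor = conn.cursor()
cursor.execute("""
    CREATE TABLE IF NOT EXISTS novel(
        id INT PRIMARY KEY AUTO_INCREMENT,
        category VARCHAR(50), book_name VARCHAR(255), author VARCHAR(100),
        status VARCHAR(20), book_nums VARCHAR(50), description TEXT,
        c_time DATETIME, book_url VARCHAR(255), catalog_url VARCHAR(255))
""")
cursor.execute("""
    CREATE TABLE IF NOT EXISTS chapter(
        id INT PRIMARY KEY AUTO_INCREMENT,
        title VARCHAR(255), ordernum INT, c_time DATETIME,
        chapter_url VARCHAR(255), catalog_url VARCHAR(255), content LONGTEXT)
""")
conn.commit()
cursor.close()
conn.close()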
# -*- coding: utf-8 -*-
import datetime
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from ..items import NovelItem,ChapterItem,ContentItem
from scrapy_redis.spiders import RedisCrawlSpider # 1. import RedisCrawlSpider
# class ZhSpider(CrawlSpider):
class ZhSpider(RedisCrawlSpider): # 2. inherit from RedisCrawlSpider
name = 'zh'
allowed_domains = ['book.zongheng.com']
# start_urls = ['http://book.zongheng.com/store/c0/c0/b0/u1/p1/v0/s1/t0/u0/i1/ALL.html'] # initial url
redis_key = "zh:start_urls" # 3. set the redis_key
# crawl rules: 1. extract urls (LinkExtractor) 2. build requests 3. define how the responses are handled
rules = (
Rule(LinkExtractor(allow=r'http://book.zongheng.com/book/\d+.html',restrict_xpaths='//div[@class="bookname"]'),
callback='parse_book', follow=True,process_links="process_booklink"),
Rule(LinkExtractor(allow=r'http://book.zongheng.com/showchapter/\d+.html'), callback='parse_catalog', follow=True,),
Rule(LinkExtractor(allow=r'http://book.zongheng.com/chapter/\d+/\d+.html',restrict_xpaths='//ul[@class="chapter-list clearfix"]'),
callback='get_content', follow=False,process_links="process_chpter"),
)
def process_booklink(self,links):
# filter the urls extracted by the LinkExtractor
for index,link in enumerate(links):
if index==0:
print(index,link.url)
yield link
else:
return
def process_chpter(self,links):
for index,link in enumerate(links):
if index<=20:
yield link
else:
return
def parse_book(self, response):
category = response.xpath('//div[@class="book-label"]/a/text()').extract()[1]
book_name = response.xpath('//div[@class="book-name"]/text()').extract()[0].strip()
author = response.xpath('//div[@class="au-name"]/a/text()').extract()[0]
status = response.xpath('//div[@class="book-label"]/a/text()').extract()[0]
book_nums = response.xpath('//div[@class="nums"]/span/i/text()').extract()[0]
description = ''.join(response.xpath('//div[@class="book-dec Jbook-dec hide"]/p/text()').re("\S+"))
c_time = datetime.datetime.now()
book_url = response.url
catalog_url = response.css("a").re('http://book.zongheng.com/showchapter/\d+.html')[0]
item = NovelItem()
item["category"] = category
item["book_name"] = book_name
item["author"] = author
item["status"] = status
item["book_nums"] = book_nums
item["description"] = description
item["c_time"] = c_time
item["book_url"] = book_url
item["catalog_url"] = catalog_url
yield item
def parse_catalog(self,response):
a_tags = response.xpath('//ul[@class="chapter-list clearfix"]/li/a')
chapter_list = []
catalog_url = response.url
print("项目1解析catalog_url")
for a in a_tags:
title = a.xpath("./text()").extract()[0]
chapter_url = a.xpath("./@href").extract()[0]
chapter_list.append((title, chapter_url, catalog_url))
item = ChapterItem()
item["chapter_list"] = chapter_list
yield item
def get_content(self,response):
chapter_url = response.url
content = ''.join(response.xpath('//div[@class="content"]/p/text()').extract())
c_time = datetime.datetime.now()
# pass the data to the pipeline
item = ContentItem()
item["chapter_url"] = chapter_url
item["content"] = content
yield item
import scrapy
class ZonghengItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
pass
class NovelItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
category = scrapy.Field()
book_name = scrapy.Field()
author = scrapy.Field()
status = scrapy.Field()
book_nums = scrapy.Field()
description = scrapy.Field()
c_time = scrapy.Field()
book_url = scrapy.Field()
catalog_url = scrapy.Field()
class ChapterItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
chapter_list = scrapy.Field()
catalog_url = scrapy.Field()
class ContentItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
content = scrapy.Field()
chapter_url = scrapy.Field()
Project 1 stores into novel and chapter.
Project 2 stores into novel_copy and chapter_copy.
The only difference between the two projects' pipelines is the tables they write to (a parameterized sketch follows below).
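One way to avoid maintaining two nearly identical pipeline files is to read a table suffix from settings, so project 1 configures TABLE_SUFFIX = "" and project 2 configures TABLE_SUFFIX = "_copy". This is only a sketch of the idea (TABLE_SUFFIX and the simplified pipeline below are hypothetical, not part of the original projects; only a minimal NovelItem branch is shown):
import pymysql

class SuffixNovelPipeline(object):
    def open_spider(self, spider):
        self.suffix = spider.settings.get("TABLE_SUFFIX", "")  # "" or "_copy"
        cfg = spider.settings["DATABASE_CONFIG"]["config"]
        self.conn = pymysql.connect(**cfg)
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        if "book_name" in item:  # NovelItem only, kept short for the sketch
            sql = "insert into novel{}(book_name, author) values (%s, %s)".format(self.suffix)
            self.cursor.execute(sql, (item["book_name"], item["author"]))
            self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()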
import pymysql
from zongheng.items import NovelItem,ChapterItem,ContentItem
import datetime
from scrapy.exceptions import DropItem
class ZonghengPipeline(object):
# connect to the database
def open_spider(self,spider):
data_config = spider.settings["DATABASE_CONFIG"]
print("database config", data_config)
if data_config["type"] == "mysql":
self.conn = pymysql.connect(**data_config["config"])
self.cursor = self.conn.cursor()
spider.conn = self.conn
spider.cursor = self.cursor
# store the data
def process_item(self, item, spider):
# 1. novel information
if isinstance(item,NovelItem):
sql="select id from novel where book_name=%s and author=%s"
self.cursor.execute(sql,(item["book_name"],item["author"]))
if not self.cursor.fetchone():
# write the novel data
sql="insert into novel(category,book_name,author,status,book_nums,description,c_time,book_url,catalog_url)" \
"values (%s,%s,%s,%s,%s,%s,%s,%s,%s)"
self.cursor.execute(sql,(
item["category"],
item["book_name"],
item["author"],
item["status"],
item["book_nums"],
item["description"],
item["c_time"],
item["book_url"],
item["catalog_url"],
))
self.conn.commit()
return item
# 2. chapter information
elif isinstance(item,ChapterItem):
# write the table-of-contents data
sql = "insert into chapter(title,ordernum,c_time,chapter_url,catalog_url) values(%s,%s,%s,%s,%s)"
data_list=[] # [(%s,%s,%s,%s,%s),(%s,%s,%s,%s,%s),(%s,%s,%s,%s,%s)]
for index,chapter in enumerate(item["chapter_list"]):
c_time = datetime.datetime.now()
ordernum=index+1
title,chapter_url,catalog_url=chapter
data_list.append((title,ordernum,c_time,chapter_url,catalog_url))
self.cursor.executemany(sql,data_list)
self.conn.commit()
return item
# 3. chapter content
elif isinstance(item, ContentItem):
sql="update chapter set content=%s where chapter_url=%s"
content=item["content"]
chapter_url=item["chapter_url"]
print("项目1章节url",chapter_url)
self.cursor.execute(sql,(content,chapter_url))
self.conn.commit()
return item
else:
raise DropItem("unknown item type")
# close the database connection
def close_spider(self,spider):
data_config=spider.settings["DATABASE_CONFIG"] # database settings come from the settings file
if data_config["type"]=="mysql":
self.cursor.close()
self.conn.close()
# -*- coding: utf-8 -*-
BOT_NAME = 'zongheng'
SPIDER_MODULES = ['zongheng.spiders']
NEWSPIDER_MODULE = 'zongheng.spiders'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
DOWNLOAD_DELAY = 1
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
}
# Local database configuration
DATABASE_CONFIG={
"type":"mysql",
"config":{
"host":"localhost",
"port":3306,
"user":"root",
"password":"123789",
"db":"zhnovel",
"charset":"utf8"
}
}
# Enable the redis scheduler so requests are stored in redis
# Required
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Make sure all spiders share the same duplicate filter through redis
# Required
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Shared pipeline and local pipeline
ITEM_PIPELINES = {
'scrapy_redis.pipelines.RedisPipeline':300,
'zongheng.pipelines.ZonghengPipeline': 200,
}
# Host and port to use when connecting to Redis
# Required
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
# Do not clear the redis queues, allowing crawls to be paused and resumed
# Optional: allows pausing; the redis data is not lost
SCHEDULER_PERSIST = True
# Log file configuration
# LOG_FILE="zh_redis.log"
LOG_ENABLED=True
LOG_FORMAT='%(asctime)s [%(name)s] %(levelname)s: %(message)s'
LOG_DATEFORMAT='%Y'
LOG_ENCODING="utf-8"
LOG_LEVEL="INFO"
To empty the MySQL tables:
TRUNCATE TABLE chapter;
TRUNCATE TABLE chapter_copy1;
TRUNCATE TABLE chapter_copy2;
TRUNCATE TABLE novel;
TRUNCATE TABLE novel_copy1;
TRUNCATE TABLE novel_copy2;
import datetime
import redis
import pymysql
import json
# redis connection details
rediscli=redis.StrictRedis(host="localhost",port=6379,db=0)
# mysql connection details
mysqlconn=pymysql.connect(host="localhost",port=3306,user="root",password="123789",db="zhnovel",charset="utf8")
while True:
# pop one item from redis (blocks until data is available)
source, data = rediscli.blpop(["zh:items"])
print(source, data)
item = json.loads(data)
print(item)
# write it into mysql
cursor=mysqlconn.cursor()
if b"book_name" in data:
sql = "select id from novel_from_redis where book_name=%s and author=%s"
cursor.execute(sql, (item["book_name"], item["author"]))
if not cursor.fetchone():
# write the novel data
sql="insert into novel_from_redis(category,book_name,author,status,book_nums,description,c_time,book_url,catalog_url)" \
"values (%s,%s,%s,%s,%s,%s,%s,%s,%s)"
cursor.execute(sql,(
item["category"],
item["book_name"],
item["author"],
item["status"],
item["book_nums"],
item["description"],
item["c_time"],
item["book_url"],
item["catalog_url"],
))
mysqlconn.commit()
cursor.close()
elif b"chapter_list" in data:
sql = "insert into chapter_from_redis(title,ordernum,c_time,chapter_url,catalog_url) values(%s,%s,%s,%s,%s)"
data_list = [] # [(%s,%s,%s,%s,%s),(%s,%s,%s,%s,%s),(%s,%s,%s,%s,%s)]
for index, chapter in enumerate(item["chapter_list"]):
c_time = datetime.datetime.now()
ordernum = index + 1
title, chapter_url, catalog_url = chapter
data_list.append((title, ordernum, c_time, chapter_url, catalog_url))
cursor.executemany(sql, data_list)
mysqlconn.commit()
cursor.close()
elif b"content" in data:
sql = "update chapter_from_redis set content=%s where chapter_url=%s"
content = item["content"]
chapter_url = item["chapter_url"]
cursor.execute(sql, (content, chapter_url))
mysqlconn.commit()
cursor.close()
API
The scrapyd web interface is fairly basic and is mainly used for monitoring; all scheduling is done through the JSON API (a Python example follows the links below).
Official documentation:
https://scrapyd.readthedocs.io/en/latest/
https://scrapyd.readthedocs.io/en/latest/api.html
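Since everything goes through the JSON API, scrapyd can also be driven from Python instead of curl. A minimal sketch using the requests library (the project and spider names follow the zongheng example used later; install requests first):
import requests

SCRAPYD = "http://localhost:6800"

# schedule a crawl of spider "zh" in project "zongheng"
resp = requests.post(SCRAPYD + "/schedule.json",
                     data={"project": "zongheng", "spider": "zh"})
print(resp.json())   # e.g. {"status": "ok", "jobid": "..."}

# list pending/running/finished jobs of the project
print(requests.get(SCRAPYD + "/listjobs.json",
                   params={"project": "zongheng"}).json())

# cancel a running job, using the jobid returned by schedule.json
# requests.post(SCRAPYD + "/cancel.json",
#               data={"project": "zongheng", "job": "<jobid>"})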
scrapyd configuration
Scrapyd looks for configuration files in the following locations and parses them in order; files later in the list take higher priority:
/etc/scrapyd/scrapyd.conf (Unix)
c:\scrapyd\scrapyd.conf (Windows)
/etc/scrapyd/conf.d/* (in alphabetical order, Unix)
scrapyd.conf
~/.scrapyd.conf (users home directory)
Command 1:
pip install scrapyd
Verify:
Run scrapyd; if you can open http://localhost:6800 in a browser, the installation succeeded.
Command 2:
pip install scrapyd-client
Verify:
In a scrapy project directory, run scrapyd-deploy; if it prints "Unknown target: default", the client is installed.
Because anaconda is used here to manage the Python environment, the Python libraries live under the anaconda directory.
Find the scrapyd paths:
find / -name scrapyd
Two results are found:
one is the scrapyd executable,
the other is the scrapyd library directory.
Change the scrapyd configuration to allow remote access:
cd /root/anaconda3/lib/python3.6/site-packages/scrapyd
In default_scrapyd.conf under that directory, change bind_address to 0.0.0.0 to allow remote access.
[deploy:spider] # deployment name; may be omitted, defaults to empty
url = http://localhost:6800/ # required; may also point to a remote server
project = zongheng # project name; do not remove
username=xxx # username and password for the server, if required (optional)
password=xxx
Run scrapyd-deploy -l in a terminal
to list the configured deployment names and their urls.
http://localhost:6800/listprojects.json
Location: run the following in the directory that contains scrapy.cfg.
Run scrapyd-deploy spider (deployment name) -p zongheng (project name):
scrapyd-deploy spider -p zongheng
The terminal output looks like this:
Deploying to project "zongheng" in http://localhost:6800/addversion.json
Server response (200):
{"node_name": "YNRBYA8RP4AT92A", "status": "ok", "project": "zongheng", "version": "1595508145", "spiders": 1}
Open http://127.0.0.1:6800 and the web page shows:
the project name and the spider name
curl http://localhost:6800/schedule.json -d project=zongheng -d spider=zh
The response looks like this:
{"node_name": "YNRBYA8RP4AT92A", "status": "ok", "jobid": "98785578cce211eab46598fa9b72ce54"}
The running spider can then be viewed in the web interface.
To cancel a running job, pass the project name and the jobid:
curl http://localhost:6800/cancel.json -d project=zongheng -d job=4fc26e4209da11e9b344000c292b8398
Problem: on Windows, scrapyd-deploy
is not recognized as an internal or external command.
Solution:
Run where scrapyd-deploy
to locate the two files below, then create the two .bat wrappers for them.
The two .bat files look like this:
scrapyd.bat content:
@echo off
"C:\ProgramData\Anaconda3\envs\spider\python.exe" "C:\ProgramData\Anaconda3\envs\spider\Scripts\scrapyd" %*
scrapyd-deploy.bat content:
@echo off
"C:\ProgramData\Anaconda3\envs\spider\python.exe" "C:\ProgramData\Anaconda3\envs\spider\Scripts\scrapyd-deploy" %*
If scrapyd-deploy prints a ScrapyDeprecationWarning such as:
/Users/lijundong/opt/anaconda3/envs/reptile/bin/scrapyd-deploy:23: ScrapyDeprecationWarning: Module scrapy.utils.http is deprecated, Please import from w3lib.http instead.
edit the scrapyd-deploy script and replace
from scrapy.utils.http import basic_auth_header
with
from w3lib.http import basic_auth_header