Scrapy Framework Explained, Part 4: Item Pipelines and the settings File

Item Pipeline overview:

The main responsibility of an Item Pipeline is to process the Items extracted from web pages by the spiders; its main tasks are cleaning, validating, and storing the data.

After a page has been parsed by a spider, the resulting Items are sent to the Item Pipeline, where several components process the data in a specific order.

Each Item Pipeline component is a Python class that implements a few simple methods.

Each component receives an Item, runs its method on it, and decides whether the Item should continue to the next stage of the pipeline or be dropped and processed no further.

Typical processing steps:

1. Clean the HTML data
2. Validate the parsed data (check that the Item contains the required fields)
3. Check for duplicate data (and drop duplicates)
4. Store the parsed data in a database

process_item(item, spider)

This method is called on every Item Pipeline component; it must either return an Item object (instance) or raise a DropItem exception.

A dropped Item will not be processed by any further pipeline components.

In addition, the class may implement the following methods:

open_spider(spider)

Called when the spider is opened (starts running).

close_spider(spider)

Called when the spider is closed.
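As a minimal sketch of such a pipeline (the title field and the items.jl output file are illustrative assumptions, not part of any particular project):

import json

from scrapy.exceptions import DropItem

class JsonWriterPipeline(object):

    def open_spider(self, spider):
        # Called once when the spider starts: open the output file
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        # Called once when the spider finishes: release the file handle
        self.file.close()

    def process_item(self, item, spider):
        # Drop items that are missing a required field; DropItem stops further pipeline processing
        if not item.get('title'):
            raise DropItem('Missing title in %s' % item)
        self.file.write(json.dumps(dict(item)) + '\n')
        # Returning the item lets later pipeline components keep processing it
        return item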

To activate an Item Pipeline component, add its class path to the ITEM_PIPELINES setting in settings.py.

For example:

ITEM_PIPELINES = {
    'myproject.pipeline.PricePipeline': 300,
    'myproject.pipeline.JsonWriterPipeline': 800,
}

The integer value assigned to each class in this setting determines the order in which the pipelines run: items pass through them from the lowest number to the highest.

These integers are conventionally chosen in the 0-1000 range.


The settings.py file in detail:


# 1. Bot (crawler) name
BOT_NAME = 'step8_king'

# 2. Spider module paths
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent request header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. robots.txt compliance (set to False to ignore the site's robots.txt)
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs

# 6. Download delay in seconds
# DOWNLOAD_DELAY = 2

# The download delay setting will honor only one of:
# 7. Concurrent requests per domain; the download delay is also applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# Concurrent requests per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored
# and the download delay is applied per IP instead
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. Whether cookies are enabled; cookies can be handled per session via cookiejar
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True
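# For example, a spider can keep several independent cookie sessions by tagging
# requests with the cookiejar meta key (a minimal sketch; the URLs, session id
# and callback names are illustrative):
#
#     yield scrapy.Request(url, meta={'cookiejar': 1}, callback=self.parse_page)
#
#     def parse_page(self, response):
#         # reuse the same cookie session for follow-up requests
#         yield scrapy.Request(next_url, meta={'cookiejar': response.meta['cookiejar']},
#                              callback=self.parse_detail)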

# Disable Telnet Console (enabled by default)
# 9. The Telnet console lets you inspect and control the running crawler:
#    connect with `telnet ip port`, then issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]

# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html

# 11. Item pipelines that process the scraped items
# ITEM_PIPELINES = {
#    'step8_king.pipelines.JsonPipeline': 700,
#    'step8_king.pipelines.FilePipeline': 500,
# }

# 12. Custom extensions, hooked in via signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }
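# A minimal sketch of such a signal-based extension (the spider_opened hook and
# the log message are illustrative assumptions); it would live in
# step8_king/extensions.py:
#
#     from scrapy import signals
#
#     class MyExtension(object):
#         @classmethod
#         def from_crawler(cls, crawler):
#             ext = cls()
#             # run ext.spider_opened whenever the spider_opened signal fires
#             crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
#             return ext
#
#         def spider_opened(self, spider):
#             spider.logger.info('MyExtension: spider %s opened' % spider.name)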

# 13. Maximum crawl depth allowed; the current depth can be read from the response meta; 0 means no depth limit
# DEPTH_LIMIT = 3
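# For example, the current depth can be checked inside a spider callback
# (a minimal sketch):
#
#     def parse(self, response):
#         self.logger.info('current depth = %s', response.meta.get('depth', 0))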

# 14. Crawl order: DEPTH_PRIORITY = 0 means depth-first (LIFO, the default);
#     DEPTH_PRIORITY = 1 means breadth-first (FIFO)
# Last in, first out: depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# First in, first out: breadth-first
# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

# 16. URL deduplication when crawling
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'
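# A minimal sketch of what this custom dupefilter could look like (the in-memory
# set is an illustrative assumption); it would live in step8_king/duplication.py:
#
#     class RepeatUrl(object):
#         def __init__(self):
#             self.visited_urls = set()
#
#         @classmethod
#         def from_settings(cls, settings):
#             # build the filter from the crawler settings
#             return cls()
#
#         def request_seen(self, request):
#             # return True if the URL was already seen; Scrapy then skips the request
#             if request.url in self.visited_urls:
#                 return True
#             self.visited_urls.add(request.url)
#             return False
#
#         def open(self):
#             # called when the spider starts
#             pass
#
#         def close(self, reason):
#             # called when the spider closes
#             pass
#
#         def log(self, request, spider):
#             # log a filtered (duplicate) request
#             pass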

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
"""
17. The AutoThrottle algorithm

    from scrapy.contrib.throttle import AutoThrottle

    AutoThrottle works roughly as follows:
    1. take the minimum delay from DOWNLOAD_DELAY
    2. take the maximum delay from AUTOTHROTTLE_MAX_DELAY
    3. set the initial download delay from AUTOTHROTTLE_START_DELAY
    4. when a request finishes downloading, take its "latency", i.e. the time
       between establishing the connection and receiving the response headers
    5. use AUTOTHROTTLE_TARGET_CONCURRENCY in the calculation:

    target_delay = latency / self.target_concurrency
    new_delay = (slot.delay + target_delay) / 2.0  # slot.delay is the previous delay
    new_delay = max(target_delay, new_delay)
    new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
    slot.delay = new_delay
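    A worked example, assuming AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0, a measured
    latency of 0.5 s and a previous slot.delay of 2.0 s:
        target_delay = 0.5 / 1.0 = 0.5
        new_delay = (2.0 + 0.5) / 2.0 = 1.25
        new_delay = max(0.5, 1.25) = 1.25
        new_delay is then clamped between DOWNLOAD_DELAY and AUTOTHROTTLE_MAX_DELAY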

"""# 开始自动限速# AUTOTHROTTLE_ENABLED = True

# The initial download delay

# Initial download delay
# AUTOTHROTTLE_START_DELAY = 5

# The maximum download delay to be set in case of high latencies

# Maximum download delay
# AUTOTHROTTLE_MAX_DELAY = 10

# The average number of requests Scrapy should be sending in parallel to each remote server

# Average number of requests to send in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
"""

18. Enable HTTP caching
    Caches requests/responses that have already been sent so they can be reused later

    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""

# Whether to enable HTTP caching
# HTTPCACHE_ENABLED = True

# Cache policy: cache every request; subsequent identical requests are served
# straight from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"

# Cache policy: cache according to HTTP response headers such as Cache-Control and Last-Modified
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# Cache expiration time in seconds
# HTTPCACHE_EXPIRATION_SECS = 0

# Directory where the cache is stored
# HTTPCACHE_DIR = 'httpcache'

# HTTP status codes that should not be cached
# HTTPCACHE_IGNORE_HTTP_CODES = []

# Storage backend used by the cache
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
"""

19. Proxies, configured via environment variables

    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware

    Option 1: use the default HttpProxyMiddleware, which reads the proxy settings
    from the environment:

        os.environ
        {
            http_proxy: http://root:[email protected]:9999/
            https_proxy: http://192.168.11.11:9999/
        }

    Option 2: use a custom downloader middleware

        import base64
        import random

        import six

        def to_bytes(text, encoding=None, errors='strict'):
            # Return `text` as bytes, encoding str/unicode values with the given encoding
            if isinstance(text, bytes):
                return text
            if not isinstance(text, six.string_types):
                raise TypeError('to_bytes must receive a unicode, str or bytes '
                                'object, got %s' % type(text).__name__)
            if encoding is None:
                encoding = 'utf-8'
            return text.encode(encoding, errors)

        class ProxyMiddleware(object):
            def process_request(self, request, spider):
                # Candidate proxies; user_pass holds "user:password", or '' when no auth is needed
                PROXIES = [
                    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                ]
                proxy = random.choice(PROXIES)
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                if proxy['user_pass']:
                    # Add HTTP Basic credentials for proxies that require authentication
                    encoded_user_pass = base64.b64encode(to_bytes(proxy['user_pass']))
                    request.headers['Proxy-Authorization'] = b'Basic ' + encoded_user_pass
                    print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                else:
                    print("**************ProxyMiddleware no pass************" + proxy['ip_port'])

        # Register the middleware in settings.py
        DOWNLOADER_MIDDLEWARES = {
            'step8_king.middlewares.ProxyMiddleware': 500,
        }
"""

"""

20. HTTPS access

    There are two cases when accessing HTTPS sites:

    1. The target site uses a trusted certificate (supported by default)
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"

    2. The target site uses a custom (e.g. self-signed) certificate
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                # Load the client's private key and certificate from disk
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,  # pKey object
                    certificate=v2,  # X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )

    Other:
        Related classes
            scrapy.core.downloader.handlers.http.HttpDownloadHandler
            scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
            scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
        Related settings
            DOWNLOADER_HTTPCLIENTFACTORY
            DOWNLOADER_CLIENTCONTEXTFACTORY
"""
