Scrapy Notes

Getting Started

http://scrapy-chs.readthedocs.org/zh_CN/1.0/intro/tutorial.html

Command-Line Tool

http://scrapy-chs.readthedocs.org/zh_CN/1.0/topics/commands.html#id1
genspider creates a new spider

scrapy parse is used much like scrapy shell: both are for testing what the spider actually scrapes.
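A quick sketch of typical usage (the URL and XPath are just placeholders):

scrapy shell "http://example.com"
# inside the shell, inspect what was downloaded:
response.status
response.xpath('//title/text()').extract()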

Spider

URL extraction rules

http://scrapy-chs.readthedocs.org/zh_CN/1.0/topics/spiders.html#crawling-rules
With CrawlSpider, rules can extract links and send them to a designated parse callback.
In general, don't override the default parse() method; write a new method to extract items
instead, so list pages are handled by the default parse() automatically, responses come back,
and rules keep extracting links from them.
Latest update: LinkExtractor and LxmlLinkExtractor work much the same; the latter supports regex-based extraction, and the other link extractors are deprecated.
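A minimal CrawlSpider sketch of this pattern (the domain, URL patterns, and item field are made up for illustration):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ExampleSpider(CrawlSpider):
    name = 'example'
    start_urls = ['http://example.com/list']

    # rules keep following list pages; detail pages are sent to parse_item
    rules = (
        Rule(LinkExtractor(allow=r'/list/\d+')),
        Rule(LinkExtractor(allow=r'/item/\d+'), callback='parse_item'),
    )

    def parse_item(self, response):
        # parse() is not overridden; CrawlSpider uses it internally for the rules
        return {'title': response.xpath('//title/text()').extract_first()}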

Relative links

from scrapy.utils.response import get_base_url
from urlparse import urljoin  # Python 2; on Python 3 use urllib.parse

base_url           = get_base_url(response)
relative_urls      = response.xpath('//*[@id="showImage"]/@src').extract()
item['image_urls'] = [urljoin(base_url, ru) for ru in relative_urls]

Item Pipeline

http://scrapy-chs.readthedocs.org/zh_CN/1.0/topics/item-pipeline.html#id1

  1. For the encoding-conversion pipeline module, see Data Storage below

Settings

http://scrapy-chs.readthedocs.org/zh_CN/1.0/topics/settings.html#spider-middlewares

Requests and Responses

http://scrapy-chs.readthedocs.org/zh_CN/1.0/topics/request-response.html

Request meta

  1. Add a proxy
    request.meta['proxy'] = "http://xxx.xx.xxx.xx:xxxx"

Response

url = response.url
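Other commonly used Response attributes:

status  = response.status     # HTTP status code
headers = response.headers    # response headers
body    = response.body       # raw body bytes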

Downloader Middleware

HTTP cache

HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0          # 0 = cached pages never expire
HTTPCACHE_DIR = 'dbhttpcache'
HTTPCACHE_IGNORE_HTTP_CODES = [301,302,403,404,500,502,503]   # don't cache these responses
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Proxy

https://husless.github.io/2015/07/01/using-scrapy-with-proxies/
HttpProxyMiddleware and RetryMiddleware
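A minimal sketch of a custom downloader middleware that attaches the proxy to every request (the middleware name and proxy address are placeholders; the meta['proxy'] value is then honored when the request is downloaded):

class ProxyMiddleware(object):
    # hypothetical middleware: set a proxy for every outgoing request
    def process_request(self, request, spider):
        request.meta['proxy'] = "http://xxx.xx.xxx.xx:xxxx"

Enable it in settings.py:

DOWNLOADER_MIDDLEWARES = {
    'projectname.middlewares.ProxyMiddleware': 100,
}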

Data Storage

Feed exports

json & jsonlines

A custom pipeline can also write directly to a JSON file.

scrapy crawl spidername -o items.json -t jsonlines

FEED_URI = 'douban.json'
FEED_FORMAT = 'jsonlines'

Storing JSON data directly as a string

Don't loads it into a dict/list first and dumps it again; after storing that result in the database, loads may raise an exception when the data is read back. Ran into this while scraping Baidu Video.
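A rough sketch of the point above (the item field and data source are hypothetical):

# keep the JSON text exactly as received
item['video_info'] = response.body_as_unicode()

# avoid the round trip below; the re-serialized string may fail to loads()
# after being written to and read back from the database:
# item['video_info'] = json.dumps(json.loads(response.body_as_unicode()))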

Encoding conversion

Writing JSON output with readable Chinese characters.
Main references:

http://git.oschina.net/ldshuang/imax-spider/commit/1d05d7bafdf7758f7b422cc1133abf493bf55086
http://caiknife.github.io/blog/2013/08/02/scrapy-json-solution/

  1. Add the pipeline
import codecs
import json

class JsonWriterPipeline(object):

    def __init__(self):
        self.file = codecs.open('douban.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        # decode('unicode_escape') turns the \uXXXX escapes into readable Chinese
        self.file.write(line.decode('unicode_escape'))
        # Returning the item is optional here; if the built-in feed export (-o) is
        # also enabled, its copy of the output will be Unicode-escaped again.
        return item
  2. Enable ITEM_PIPELINES in settings.py
ITEM_PIPELINES = {
    'projectname.pipelines.JsonWriterPipeline':800,
}
  3. response.body_as_unicode()
    In some cases you may need to use response.body_as_unicode()

Logging

The logging API from older versions is deprecated; updated docs: http://doc.scrapy.org/en/latest/topics/logging.html

import logging
logging.warning("This is a warning")
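Inside a spider, the per-spider logger works as well (spider name is a placeholder):

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def parse(self, response):
        self.logger.info('Parsed %s', response.url)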

Debug

http://scrapy-chs.readthedocs.org/zh_CN/1.0/topics/debug.html
scrapy parse can be used to check a spider's output
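For example (spider and callback names are placeholders); --callback picks which method handles the fetched page:

scrapy parse --spider=myspider --callback=parse_item "http://example.com/item/1"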

To follow up

DjangoItem

scrapyd

Running on a server

Don't keep the log when running under nohup (it grows too large); redirect it to /dev/null:
nohup scrapy crawl mbaidu_spider -s JOBDIR=job/mbaidu-18-1 >/dev/null 2>&1 &
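Running the same command again with the same JOBDIR resumes the interrupted crawl from the persisted job state.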
