Introduction to Scrapy and How to Use It

The Scrapy Framework
Scrapy is an application framework written in pure Python for crawling websites and extracting structured data, and it is useful for a wide range of tasks.
Because the framework does the heavy lifting, you only need to customize a few modules to build a working crawler that scrapes page content and images.
Scrapy uses the Twisted asynchronous networking framework to handle network communication, which speeds up downloads without requiring you to implement asynchronous I/O yourself, and it exposes various middleware hooks so it can be flexibly adapted to all kinds of requirements.
Scrapy Architecture Diagram


(architecture diagram: the Scrapy Engine, Scheduler, Downloader, Spiders, and Item Pipeline, connected through the downloader and spider middlewares)

Installing on Windows (Python 3)
Upgrade pip first:
pip3 install --upgrade pip
Install the Scrapy framework with pip:
pip3 install Scrapy

Installing on Ubuntu
Install the Scrapy framework with pip3:
sudo pip3 install scrapy
If the installation fails, try adding these dependency libraries first.
Install the system (non-Python) dependencies:
sudo apt-get install python3-dev python3-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev
After installation, type scrapy in a terminal; if you see output similar to the following, Scrapy was installed successfully.


(screenshot of the scrapy command's help output in the terminal)

Creation Commands

Create a crawler project:
scrapy startproject jobboleproject
Generate a spider file (a skeleton of the generated file is shown after these commands):
scrapy genspider jobbole jobbole.com
Run the spider:
scrapy crawl jobbole
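
After running scrapy genspider jobbole jobbole.com, the generated spider file jobbole.py looks roughly like the sketch below (the exact template text varies slightly between Scrapy versions); you then fill in the parse method with your extraction logic:

import scrapy


class JobboleSpider(scrapy.Spider):
    name = 'jobbole'
    allowed_domains = ['jobbole.com']
    start_urls = ['http://jobbole.com/']

    def parse(self, response):
        # extract data from the response and yield items or follow-up requests here
        pass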

The items.py file

import scrapy


class XiachufangspiderItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    img = scrapy.Field()
    yongliao = scrapy.Field()
    zuofa = scrapy.Field()
    path = scrapy.Field()
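
As a quick illustration (the values here are made up), an Item defined this way behaves much like a dict whose keys are the declared fields:

item = XiachufangspiderItem()
item['name'] = 'braised pork'                 # hypothetical value
item['img'] = 'http://example.com/cover.jpg'  # hypothetical value
print(dict(item))  # {'name': 'braised pork', 'img': 'http://example.com/cover.jpg'}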

The pipelines.py file

import json
import scrapy
from scrapy.pipelines.images import ImagesPipeline
from scrapy.utils.project import get_project_settings
import os

class XiachufangspiderPipeline(object):
    def __init__(self):
        self.f = open('xiachufang.json', 'a')

    def process_item(self, item, spider):
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.f.close()
# Fetch the configured image storage path from the project settings
IMAGES_STORE = get_project_settings().get('IMAGES_STORE')
# Pipeline that downloads the images
class XiachufangImgspiderPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # Issue the image download request; the result is passed to item_completed
        return scrapy.Request(url=item['img'])

    def item_completed(self, results, item, info):
        # results is a list of (success, info) tuples; collect the relative
        # paths of the images that were downloaded successfully
        imgs = [x['path'] for ok, x in results if ok]
        if imgs:
            # rename the downloaded file to <name>.jpg and record its full path
            os.rename(IMAGES_STORE + imgs[0],
                      IMAGES_STORE + item['name'] + '.jpg')
            item['path'] = os.getcwd() + '/' + IMAGES_STORE + item['name'] + '.jpg'
        else:
            item['path'] = ""
        return item
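
For these pipelines to run, they must be enabled in the project's settings.py. The sketch below uses assumed values: the priority numbers and the images/ directory are assumptions, and IMAGES_STORE needs a trailing slash because the pipeline above concatenates it directly with file names:

# settings.py (sketch, assumed values)
ITEM_PIPELINES = {
    'XiachufangSpider.pipelines.XiachufangImgspiderPipeline': 1,   # download the image first
    'XiachufangSpider.pipelines.XiachufangspiderPipeline': 300,    # then write the JSON line
}
IMAGES_STORE = 'images/'   # where ImagesPipeline saves files; the trailing slash matters here
ROBOTSTXT_OBEY = False     # assumption: ignore robots.txt so the category pages are fetched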

The xiachufang.py spider file

# -*- coding: utf-8 -*-
import scrapy
from XiachufangSpider.items import XiachufangspiderItem

class XiachufangSpider(scrapy.Spider):
    name = 'xiachufang'
    allowed_domains = ['xiachufang.com']
    start_urls = ['http://www.xiachufang.com/category/40076/']

    def parse(self, response):
        # each li under the category list is one recipe card
        div_list = response.xpath('//div[@class="pure-u-3-4 category-recipe-list"]//div[@class="normal-recipe-list"]//li')
        for div in div_list:
            url = div.xpath('.//p[@class="name"]/a/@href').extract_first('')
            print(url)

            # follow the relative link to the recipe detail page
            yield scrapy.Request(url='http://www.xiachufang.com' + url, callback=self.parseDetail)

    def parseDetail(self, response):
        # extract the recipe name, cover image, ingredients (yongliao) and steps (zuofa)
        item = XiachufangspiderItem()
        name = response.xpath('//h1/text()').extract_first('').replace('\n', '').strip()
        img = response.xpath('//div[@class="cover image expandable block-negative-margin"]/img/@src').extract_first('').replace('\n', '').strip()
        yongliao = ''.join(response.xpath('//tr/td//text()').extract()).replace('\n', '').replace(' ', '')
        zuofa = ''.join(response.xpath('//div[@class="steps"]//p/text()').extract())

        item['name'] = name
        item['img'] = img
        item['yongliao'] = yongliao
        item['zuofa'] = zuofa
        print(item)
        yield item
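
The simplest way to start the crawl is to run scrapy crawl xiachufang from the project root. If you would rather launch it from a script (for example, to debug it in an IDE), a minimal sketch using Scrapy's CrawlerProcess looks like this; the file name run.py is an assumption:

# run.py (sketch): start the xiachufang spider programmatically
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())  # load the project's settings.py
process.crawl('xiachufang')                       # the spider's name attribute
process.start()                                   # blocks until the crawl finishes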
