Full-Site Incremental Data Crawling

Create the spider project folder and its scaffolding

Detailed steps (the generated layout is sketched below):

  1. cd into the newly created moviezls folder
  2. scrapy startproject movies (project name)
  3. cd movies
  4. scrapy genspider -t crawl av www.baidu.com
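
These commands produce the standard Scrapy layout, roughly as follows (exact names depend on the project name passed to startproject):

movies/
    scrapy.cfg
    movies/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            av.py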

Once the scaffolding is created, configure the environment in PyCharm's Project Interpreter settings.

Open av.py and write the spider itself:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from redis import Redis
from zls.items import ZlsItem


class AvSpider(CrawlSpider):
    name = 'av'
    # allowed_domains = ['www.baidu.com']
    start_urls = ['https://www.4567tv.co/list/index1.html']
    # Redis connection used to record which detail URLs have already been crawled
    conn = Redis('127.0.0.1', 6379)
    # Extract the pagination links of the listing pages
    link = LinkExtractor(allow=r'/list/index1-\d+\.html')

    rules = (
        Rule(link, callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        print(response)  # debug: confirm which listing page is being parsed
        li_list = response.xpath('//div[contains(@class,"index-area")]/ul/li')
        for li in li_list:
            mv_link = 'https://www.4567tv.co' + li.xpath('./a/@href').extract_first()
            # sadd returns 1 if the value was newly added to the Redis set,
            # and 0 if it was already there
            ret = self.conn.sadd('mv_link', mv_link)
            if ret:
                print('New data found, crawling the detail page......')
                yield scrapy.Request(url=mv_link, callback=self.parse_detail)
            else:
                print('No update, nothing to crawl!')

    def parse_detail(self, response):
        title = response.xpath('//h1[@class="title"]/text()').extract_first()
        item = ZlsItem()
        item['title'] = title
        print(item)
        yield item
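
The incremental behaviour hinges entirely on the return value of Redis SADD: the set acts as a persistent record of which detail URLs have already been seen, so re-running the spider only follows links that are new. A quick standalone check of that behaviour (assuming a local Redis on the default port; the URL and key name here are throwaway demo values):

from redis import Redis

conn = Redis('127.0.0.1', 6379)
url = 'https://www.4567tv.co/frim/index1.html'  # hypothetical URL, used only for this demo
print(conn.sadd('mv_link_demo', url))  # 1: value was new, so it would be crawled
print(conn.sadd('mv_link_demo', url))  # 0: value already in the set, so it would be skipped
conn.delete('mv_link_demo')            # clean up the demo key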

Configure settings.py

1. Set USER_AGENT, for example:

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36'

2. Disable robots.txt compliance:

ROBOTSTXT_OBEY = False

3. Enable the item pipeline:

ITEM_PIPELINES = {
    'zls.pipelines.ZlsPipeline': 300,
}

items.py

import scrapy


class ZlsItem(scrapy.Item):
    # define the fields for your item here, like:
    title = scrapy.Field()
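
settings.py enables zls.pipelines.ZlsPipeline, but the pipeline body is not shown above. A minimal sketch of pipelines.py that persists each scraped title into the same local Redis instance (the key name 'mv_title' is an assumption, not from the original project):

from redis import Redis


class ZlsPipeline:
    def open_spider(self, spider):
        # reuse one connection for the whole crawl
        self.conn = Redis('127.0.0.1', 6379)

    def process_item(self, item, spider):
        # push the movie title onto a Redis list; swap in a database write if preferred
        if item.get('title'):
            self.conn.lpush('mv_title', item['title'])
        return item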
