The Spider module generates Request objects, parses Response objects, and outputs Item objects.
The Scheduler module schedules Request objects.
The Downloader module sends Requests and receives Responses.
The Item Pipeline module processes the scraped data.
The Scrapy Engine handles communication between all of these modules.
Between each module and the Scrapy Engine you can add one or more layers of middleware, which process the UR2IM objects passing in and out of that module.
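For example, a downloader middleware sits between the Engine and the Downloader and gets to rewrite every outgoing Request. Below is a minimal sketch; the class name and the User-Agent values are illustrative, not part of this project:

import random

class RandomUserAgentMiddleware(object):
    # Illustrative User-Agent strings to rotate through.
    user_agents = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6)',
    ]

    def process_request(self, request, spider):
        # Called for every Request on its way from the Engine to the
        # Downloader; here we swap in a random User-Agent header.
        request.headers['User-Agent'] = random.choice(self.user_agents)
        return None  # None means: continue downloading as normal

To activate such a middleware you would register it under DOWNLOADER_MIDDLEWARES in settings.py.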
Before writing any code, three things have to be pinned down:
Decide what data to scrape (the item).
Find the URLs of the pages where that data lives.
Work out how the pages link to each other, i.e. how to follow them.
So, let's take it step by step. First, create the project:
scrapy startproject DFVideo
Next, we create a spider:
scrapy genspider -t crawl DfVideoSpider eastday.com
At this point we find that a directory named DFVideo has been generated automatically in the current directory.
Under its spiders folder, a file named DfVideoSpider.py has been created automatically.
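For reference, the generated project follows the standard Scrapy layout, roughly like this (exact files may vary slightly with the Scrapy version):

DFVideo/
    scrapy.cfg
    DFVideo/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            DfVideoSpider.py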
With the project created, let's pin down the data we want to scrape. Edit items.py:
import scrapy


class DfvideoItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    video_url = scrapy.Field()         # source URL of the video
    video_title = scrapy.Field()       # video title
    video_local_path = scrapy.Field()  # local path the video is saved to
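Scrapy items behave like dictionaries, which is exactly how the spider below will fill them in. A quick illustration in a Python shell:

>>> item = DfvideoItem()
>>> item['video_title'] = 'demo'
>>> item['video_title']
'demo'
>>> dict(item)
{'video_title': 'demo'}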
Next, we need to find the URL of the video source, and this is the crucial step. Many video playback pages nowadays hide the actual video link, so you cannot simply right-click and "Save as", which keeps the videos from being downloaded casually. But as long as the video plays in the page, the browser must exchange data with the video source, so a little packet capture reveals the trick. Here we use Fiddler to capture and analyze the traffic.

The video playback pages turn out to have URLs like:

video.eastday.com/a/180926221513827264568.html?index3lbt

and the video source data links look like:

mvpc.eastday.com/vyule/20180415/20180415213714776507147_1_06400360.mp4

With these two links in hand, most of the work is already done.
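Before writing any spider code, the extraction can be sanity-checked in a Scrapy shell. The sample URL is the playback page above, and the XPath expressions are the ones the spider will rely on; the site's markup may of course have changed since this was written:

scrapy shell "http://video.eastday.com/a/180926221513827264568.html"
>>> response.xpath('//input[@id="mp4Source"]/@value').extract_first()
>>> response.xpath('//meta[@name="description"]/@content').extract_first()

If both expressions return non-empty values, the extraction logic is sound. Now edit DfVideoSpider.py: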
# -*- coding: utf-8 -*-
import os
from os import path

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from DFVideo.items import DfvideoItem


class DfvideospiderSpider(CrawlSpider):
    name = 'DfVideoSpider'
    allowed_domains = ['eastday.com']
    start_urls = ['http://video.eastday.com/']

    # Follow every link that looks like a video playback page
    # and hand it to parse_item.
    rules = (
        Rule(LinkExtractor(allow=r'video\.eastday\.com/a/\d+\.html'),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = DfvideoItem()
        try:
            # The real mp4 address hides in a hidden <input> field;
            # the title lives in the description meta tag.
            item['video_url'] = response.xpath(
                '//input[@id="mp4Source"]/@value').extract()[0]
            item['video_title'] = response.xpath(
                '//meta[@name="description"]/@content').extract()[0]
        except IndexError:
            # Not every matched page carries a video source; skip those.
            return
        item['video_url'] = 'http:' + item['video_url']
        # Request the video itself, passing the item along in meta.
        yield scrapy.Request(url=item['video_url'],
                             meta={'item': item},
                             callback=self.parse_video)

    def parse_video(self, response):
        item = response.meta['item']
        file_name = item['video_title'] + '.mp4'
        base_dir = path.join(path.curdir, 'VideoDownload')
        video_local_path = path.join(base_dir, file_name.replace('?', ''))
        item['video_local_path'] = video_local_path
        if not os.path.exists(base_dir):
            os.mkdir(base_dir)
        # response.body is the raw mp4 payload; write it to disk.
        with open(video_local_path, 'wb') as f:
            f.write(response.body)
        yield item
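You can now run the spider from the project root:

scrapy crawl DfVideoSpider

Matched playback pages will be parsed, and the downloaded videos will land in the VideoDownload directory next to where you run the command.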
At this point, a simple but capable spider is complete. If you would also like to store the videos' metadata in a database, you can do that in pipelines.py, for example by saving to MongoDB:
import pymongo
from pymongo.errors import DuplicateKeyError


class DfvideoPipeline(object):
    def __init__(self):
        self.mongodb = pymongo.MongoClient(host='127.0.0.1', port=27017)
        self.db = self.mongodb["DongFang"]
        self.feed_set = self.db["video"]
        # The title doubles as the unique key, so a re-crawled video
        # updates its existing document instead of inserting a duplicate.
        self.feed_set.create_index("video_title", unique=True)

    def process_item(self, item, spider):
        try:
            self.feed_set.update_one({"video_title": item["video_title"]},
                                     {"$set": dict(item)}, upsert=True)
        except DuplicateKeyError:
            spider.logger.info("dup key: %s", item["video_title"])
        return item

    def close_spider(self, spider):
        self.mongodb.close()
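This assumes a MongoDB instance is listening on 127.0.0.1:27017 and that the pymongo driver is installed:

pip install pymongo

After a crawl you can inspect the stored metadata in the mongo shell with db.video.findOne() against the DongFang database.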
Of course, you need to enable the pipeline in settings.py:
ITEM_PIPELINES = {
    'DFVideo.pipelines.DfvideoPipeline': 300,
}

The number (between 0 and 1000) is the pipeline's priority; lower-numbered pipelines run first.