Every time I have something new to crawl I forget the steps and end up searching Baidu for them all over again, so I'm writing them down here for future reference.
First, wrap whatever you want to crawl in an Item, defined in items.py. Doing this makes the item easy to work with later in the pipeline.
import scrapy


class MaterialInfo(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    areaCode = scrapy.Field()
    deptName = scrapy.Field()
    qlName = scrapy.Field()
    qlInnerCode = scrapy.Field()
    materialName = scrapy.Field()
    materialForm = scrapy.Field()
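A side benefit of declaring every field explicitly (this is standard Scrapy behaviour, not anything specific to this project): assigning a key that was not declared raises a KeyError immediately, so typos in field names surface in the spider instead of showing up later as missing columns.

item = MaterialInfo()
item['materialName'] = 'demo'    # fine: materialName is declared above
# item['otherField'] = 'x'       # would raise KeyError: field not declared on MaterialInfo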
The spider mainly does two things: it provides the entry point that issues the requests, and it wraps the crawled data into items that get passed on to the pipeline.
One small trap when sending the requests: do not put Content-Type in the headers yourself, because Scrapy needs to work that value out on its own; if you set it manually the POST comes back with a 400.
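For illustration only (the real header values are not shown in this post), the headers passed along with the request would look something like this:

# hypothetical headers -- note there is deliberately no 'Content-Type' key;
# per the note above, Scrapy computes it itself and a manual value causes a 400
headers = {
    'Accept': 'application/json',
    'User-Agent': 'Mozilla/5.0',
}
# passed as scrapy.Request(url, method='POST', body=json.dumps(data), headers=headers, ...)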
start_requests is the spider's entry point; this is where Request objects are created and issued. parse receives the responses; in it you can enqueue new links or package items, and Scrapy tells the two apart by what you yield. Here I only need to package items, so no new links go into the queue.
import scrapy
import traceback
from QlsxCrawl.items import MaterialInfo
import json
from scrapy.http import FormRequest


class QlsxSpider(scrapy.Spider):
    name = 'QlsxSpider'

    def start_requests(self):
        url = ''
        # the 'code' file holds the qlInnerCode values to query, one per line
        with open('code', 'r') as fp:
            seq = fp.readlines()
        for innerCode in seq:
            innerCode = innerCode.strip()
            data = {'qlInnerCode': innerCode}
            # POST the JSON body and carry the innerCode along in meta for later use
            yield scrapy.Request(url, method='POST', body=json.dumps(data),
                                 callback=self.parse, meta={'qlInnerCode': innerCode})
            # yield FormRequest(url=url, formdata=data, callback=self.parse, headers=self.headers)

    def getBaseQlsx(self, js, meta):
        # fill in the fields shared by every material of this record
        baseItem = MaterialInfo()
        baseItem['deptName'] = js['basicInfoDTO']['entityName']
        baseItem['qlInnerCode'] = meta['qlInnerCode']
        baseItem['areaCode'] = js['basicInfoDTO']['adCode']
        baseItem['qlName'] = js['basicInfoDTO']['matName']
        return baseItem

    def parse(self, response):
        try:
            js = json.loads(response.text)['data']
            materials = js['materialDTOs']
            # yield one item per material in the response
            for mat in materials:
                print(mat)
                baseItem = self.getBaseQlsx(js, response.meta)
                baseItem['materialName'] = mat['materialName']
                if 'materialForm' in mat:
                    baseItem['materialForm'] = mat['materialForm']
                yield baseItem
        except Exception as e:
            traceback.print_exc()
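For reference, the code file read in start_requests is just a plain text file with one qlInnerCode per line, and the spider is started in the usual way with scrapy crawl QlsxSpider from the project directory.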
The pipeline is where items get processed; in my case that means writing each item into the database. The actual work happens in process_item.
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import scrapy
import pymysql


class QlsxcrawlPipeline(object):
    cursor = None
    connect = None

    def __init__(self):
        # open one MySQL connection for the lifetime of the pipeline
        self.connect = pymysql.connect(
            host='localhost',
            port=3306,
            db='qlsx_crawl',
            user='root',
            password='',
            charset='utf8mb4',
            use_unicode=True)
        self.cursor = self.connect.cursor()

    def process_item(self, item: scrapy.Item, spider):
        # materials without a form code get -1 so the column is never empty
        item.setdefault('materialForm', -1)
        insertWords = 'insert into `material_infos`(`areaCode`, `deptName`, `qlname`, `qlInnerCode`, `materialName`, `materialForm`) values({}, {}, {}, {}, {}, {})'.format(
            item['areaCode'], repr(item['deptName']), repr(item['qlName']),
            repr(item['qlInnerCode']), repr(item['materialName']), item['materialForm'])
        print(insertWords)
        self.cursor.execute(insertWords)
        self.connect.commit()
        return item
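The string-built INSERT above works as long as the values are well behaved, but interpolating them with repr() is fragile and open to SQL injection. A safer drop-in replacement for process_item (my sketch, not what the original pipeline does) lets pymysql do the quoting with a parameterised query:

    def process_item(self, item: scrapy.Item, spider):
        item.setdefault('materialForm', -1)
        sql = ('insert into `material_infos`'
               '(`areaCode`, `deptName`, `qlname`, `qlInnerCode`, `materialName`, `materialForm`) '
               'values (%s, %s, %s, %s, %s, %s)')
        # pymysql escapes each parameter itself, so quotes in the data cannot break the statement
        self.cursor.execute(sql, (item['areaCode'], item['deptName'], item['qlName'],
                                  item['qlInnerCode'], item['materialName'], item['materialForm']))
        self.connect.commit()
        return item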
Lastly, there are mainly two places that need configuring; I won't paste that code here.
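The post doesn't say which two places it means, but for this project to work the pipeline has to be registered in ITEM_PIPELINES (as the boilerplate comment in the pipeline file reminds you), and crawls against a JSON API typically also need robots.txt checking turned off. A minimal settings.py sketch under those assumptions, not the author's actual config:

# settings.py -- a guess at the two relevant settings
ITEM_PIPELINES = {
    'QlsxCrawl.pipelines.QlsxcrawlPipeline': 300,
}
# possibly the second tweak: don't let robots.txt block the API requests
ROBOTSTXT_OBEY = False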