After a period of study, I have slowly learned to do simple data scraping with Scrapy.
This project grew out of a need for pollution data.
At first I found a website and tried to scrape it, but it loads its data dynamically, and so far I have only learned how to scrape static pages, so I gave up; I will come back and try again after further study. The URL is:
https://www.aqistudy.cn/historydata/
The data source this project actually uses is:
http://www.tianqihoubao.com/aqi/
First, create the project:
scrapy startproject Aqi
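This generates the standard Scrapy skeleton; the files edited below all live under the inner Aqi/ package (layout per the default scrapy startproject template):

Aqi/
    scrapy.cfg
    Aqi/
        items.py
        pipelines.py
        settings.py
        spiders/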
Inside the generated Scrapy project, create a spider named WRS; the code is as follows:
import scrapy
from Aqi.items import AqiItem


class AQISpider(scrapy.Spider):
    name = 'WRS'
    start_urls = ['http://www.tianqihoubao.com/aqi/']
    # This attribute must be named start_urls; in two projects now I have
    # lost a long debugging session hunting for the missing trailing 's'.
    # allowed_domains = ['tianqihoubao.com']

    def parse(self, response):
        # Skip the first nine links and follow each remaining city page.
        for url in response.xpath('//*[@id="content"]/div/dl/dd/a/@href').extract()[9:]:
            full_url = response.urljoin(url)
            yield scrapy.Request(full_url, callback=self.parse_city)

    def parse_city(self, response):
        # Follow every monthly archive linked from the city page.
        for url in response.xpath('//*[@id="bd"]/div[1]/div[3]/ul/li/a/@href').extract():
            full_url = response.urljoin(url)
            yield scrapy.Request(full_url, callback=self.parse_month)

    def parse_month(self, response):
        # Average the scraped daily values; this took three attempts, and my
        # fundamentals are still not very solid.
        def avera(ur):
            # Simple arithmetic mean; assumes a non-empty list of numeric strings.
            av = 0
            for i in ur:
                av = av + float(i)
            return av / len(ur)

        item = AqiItem()
        item['city'] = response.xpath('//*[@id="bd"]/div[2]/div[4]/h2/text()').extract()
        # Chrome's inspector inserts tbody automatically, but the page source
        # contains none, so simply delete tbody from the copied XPath.
        # The [1:] slice drops the table's header row.
        item['AQI'] = avera(response.xpath('//*[@id="content"]/div[3]/table/tr/td[3]/text()').extract()[1:])
        item['PM25'] = avera(response.xpath('//*[@id="content"]/div[3]/table/tr/td[5]/text()').extract()[1:])
        item['PM10'] = avera(response.xpath('//*[@id="content"]/div[3]/table/tr/td[6]/text()').extract()[1:])
        item['So2'] = avera(response.xpath('//*[@id="content"]/div[3]/table/tr/td[7]/text()').extract()[1:])
        item['No2'] = avera(response.xpath('//*[@id="content"]/div[3]/table/tr/td[8]/text()').extract()[1:])
        item['Co'] = avera(response.xpath('//*[@id="content"]/div[3]/table/tr/td[9]/text()').extract()[1:])
        item['O3'] = avera(response.xpath('//*[@id="content"]/div[3]/table/tr/td[10]/text()').extract()[1:])
        yield item
        # Earlier attempt 1 (abandoned): yield one item per day.
        # for i in range(1, 1 + len(response.xpath('//*[@id="content"]/div[3]/table/tr/td[1]/text()').extract())):
        #     item = AqiItem()
        #     item['city'] = response.xpath('//*[@id="content"]/div[3]/h4/text()').extract()
        #     item['time'] = response.xpath('//*[@id="content"]/div[3]/table/tr/td[1]/text()').extract()[i]
        #     item['AQI'] = float(response.xpath('//*[@id="content"]/div[3]/table/tr/td[3]/text()').extract()[i])
        #     item['PM25'] = float(response.xpath('//*[@id="content"]/div[3]/table/tr/td[5]/text()').extract()[i])
        #     item['PM10'] = float(response.xpath('//*[@id="content"]/div[3]/table/tr/td[6]/text()').extract()[i])
        #     item['So2'] = float(response.xpath('//*[@id="content"]/div[3]/table/tr/td[7]/text()').extract()[i])
        #     item['No2'] = float(response.xpath('//*[@id="content"]/div[3]/table/tr/td[8]/text()').extract()[i])
        #     item['Co'] = float(response.xpath('//*[@id="content"]/div[3]/table/tr/td[9]/text()').extract()[i])
        #     item['O3'] = float(response.xpath('//*[@id="content"]/div[3]/table/tr/td[10]/text()').extract()[i])
        #     yield item

        # Earlier attempt 2 (abandoned): yield the raw column lists.
        # item = AqiItem()
        # item['city'] = response.xpath('//*[@id="content"]/h1/text()').extract()
        # item['time'] = response.xpath('//*[@id="content"]/div[3]/table/tr/td[1]/text()').extract()[1:]
        # item['AQI'] = response.xpath('//*[@id="content"]/div[3]/table/tr/td[3]/text()').extract()[1:]
        # item['PM25'] = response.xpath('//*[@id="content"]/div[3]/table/tr/td[5]/text()').extract()[1:]
        # item['PM10'] = response.xpath('//*[@id="content"]/div[3]/table/tr/td[6]/text()').extract()[1:]
        # item['So2'] = response.xpath('//*[@id="content"]/div[3]/table/tr/td[7]/text()').extract()[1:]
        # item['No2'] = response.xpath('//*[@id="content"]/div[3]/table/tr/td[8]/text()').extract()[1:]
        # item['Co'] = response.xpath('//*[@id="content"]/div[3]/table/tr/td[9]/text()').extract()[1:]
        # item['O3'] = response.xpath('//*[@id="content"]/div[3]/table/tr/td[10]/text()').extract()[1:]
        # yield item
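With the spider in place, it can be run from the project root; scrapy crawl can also dump the collected items straight to a file with -o, which is handy for sanity-checking the fields before the pipeline below is wired up:

scrapy crawl WRS
scrapy crawl WRS -o aqi.csv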
In the spider, there were really just two problems: the attribute has to be named start_urls (the missing 's' cost me hours, twice), and the XPath expressions copied from Chrome contain tbody elements that are absent from the actual page source and must be deleted.
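Both problems are easiest to catch in scrapy shell, which fetches a page and lets you evaluate XPath expressions interactively against the live response, for example:

scrapy shell http://www.tianqihoubao.com/aqi/
>>> response.xpath('//*[@id="content"]/div/dl/dd/a/@href').extract()[:5]

Next, items.py defines the fields the spider fills in: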
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class AqiItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    city = scrapy.Field()
    # time = scrapy.Field()
    AQI = scrapy.Field()
    PM25 = scrapy.Field()
    PM10 = scrapy.Field()
    So2 = scrapy.Field()
    No2 = scrapy.Field()
    Co = scrapy.Field()
    O3 = scrapy.Field()
Matching the Item, create a pipeline to process the returned data:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import time


class AqiFilePipeLine(object):
    def __init__(self):
        self.file = open('d:/Aqi/wuqueshi.txt', 'wb')

    def process_item(self, item, spider):
        line = "%s,%s,%s,%s,%s,%s,%s,%s\r\n" % (item['city'], item['AQI'], item['PM25'],
                                                item['PM10'], item['So2'], item['No2'],
                                                item['Co'], item['O3'])
        self.file.write(line.encode("utf-8"))
        # Earlier attempt (abandoned): write raw tab-separated values.
        # fp.write(item['city'] + '\t')
        # fp.write(item['time'] + '\t')
        # fp.write(item['AQI'] + '\t')
        # fp.write(item['PM25'] + '\t')
        # fp.write(item['PM10'] + '\t')
        # fp.write(item['So2'] + '\t')
        # fp.write(item['No2'] + '\t')
        # fp.write(item['Co'] + '\t')
        # fp.write(item['O3'] + '\n\n')
        # Crude rate limiting; see the note below.
        time.sleep(0.2)
        return item

    def close_spider(self, spider):
        # Close the output file when the crawl finishes.
        self.file.close()
I made several attempts at exporting the data, and was never quite happy with the resulting format. Initially I did not compute averages but emitted the required daily values directly, and every raw value carries a trailing \r\n, so the output was full of stray whitespace. My workarounds were to convert the numeric data to float, take the city name from a different element, and drop the date field, but this never properly solved the problem. I hope to fix it later.
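One likely fix, which I have not tried in this project: strip each cell before converting it. A minimal sketch (the helper name clean_floats is mine, not part of the project):

def clean_floats(values):
    # Strip the surrounding '\r\n' and spaces from each scraped cell,
    # drop cells that are empty after stripping, and convert the rest.
    return [float(v.strip()) for v in values if v.strip()]

parse_month could then use item['AQI'] = avera(clean_floats(response.xpath(...).extract()[1:])), and the daily values would already be clean if they were ever emitted directly again.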
If time.sleep is not set here, the amount of data retrieved may drop, because the site is being hit too frequently.
To go with the pipeline, adjust settings.py accordingly; in particular, AUTOTHROTTLE_ENABLED should be uncommented, which yields more complete data:
AUTOTHROTTLE_ENABLED = True
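Note that the pipeline itself must also be registered in ITEM_PIPELINES, as the template comment in pipelines.py says, or it will never run. A sketch of the relevant settings.py section (the priority 300 and the DOWNLOAD_DELAY value are my choices, not something this project set; DOWNLOAD_DELAY would be the idiomatic replacement for the time.sleep hack):

ITEM_PIPELINES = {
    'Aqi.pipelines.AqiFilePipeLine': 300,
}
AUTOTHROTTLE_ENABLED = True
# DOWNLOAD_DELAY = 0.25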
The shortcomings of this project are: