Goal: crawl the names and trading information of all stocks listed on the Shanghai and Shenzhen stock exchanges.
Output: save the results to a file.
Technical approach: the Scrapy crawler framework
Language: Python 3.5
The architecture of the Scrapy framework is shown in the figure below:
We mainly need to do two things:
(1) Write a spider inside the framework to handle link crawling and page parsing;
(2) Write a pipeline to process the parsed stock data and save it to a file.
Steps:
(1) Create a project and generate a Spider template
Open a command prompt (cmd), change to the directory where you want the project to live, and run: scrapy startproject BaiduStocks
This creates a new project named BaiduStocks in that directory. Then run: cd BaiduStocks
to enter the project directory, followed by: scrapy genspider stocks baidu.com
to generate a spider. You should now see a stocks.py file under the spiders/ directory. The full command sequence is summarized below.
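For reference, the complete sequence of commands for this step, run from the directory where the project should live:

scrapy startproject BaiduStocks
cd BaiduStocks
scrapy genspider stocks baidu.com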
(2) Write the Spider: edit stocks.py to change how returned pages are handled and how crawl requests for newly discovered URLs are generated
Open stocks.py; the generated template looks like this:
# -*- coding: utf-8 -*-
import scrapy


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass
Modify the code above as follows:
# -*- coding: utf-8 -*-
import scrapy
import re

class StocksSpider(scrapy.Spider):
    name = "stocks"
    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        # Walk every link on the stock-list page, keep those whose href
        # contains a code like sh600000 / sz000001, and request the
        # corresponding Baidu Gupiao detail page.
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r"[s][hz]\d{6}", href)[0]
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except:
                continue

    def parse_stock(self, response):
        # Collect the key/value pairs from the .stock-bets block into a dict.
        infoDict = {}
        stockInfo = response.css('.stock-bets')
        name = stockInfo.css('.bets-name').extract()[0]
        keyList = stockInfo.css('dt').extract()
        valueList = stockInfo.css('dd').extract()
        for i in range(len(keyList)):
            # Strip the surrounding <dt>/<dd> tags with a regex plus slicing.
            key = re.findall(r'>.*', keyList[i])[0][1:-5]
            try:
                val = re.findall(r'\d+\.?.*', valueList[i])[0][0:-5]
            except:
                val = '--'
            infoDict[key] = val
        # Piece the stock name together from the .bets-name markup.
        infoDict.update(
            {'股票名称': re.findall(r'\s.*\(', name)[0].split()[0] +
                         re.findall(r'\>.*\<', name)[0][1:-1]})
        yield infoDict
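To see why the regexes and slicing offsets in parse_stock work, here is a small standalone sketch run against hypothetical fragments modeled on what the list page and detail page return (the sample strings are illustrative, not captured from the live site):

import re

# Hypothetical href from the list page: the regex keeps the exchange prefix plus 6 digits.
href = "http://quote.eastmoney.com/sh600000.html"
print(re.findall(r"[s][hz]\d{6}", href)[0])    # -> sh600000

# Hypothetical <dt>/<dd> fragments from the detail page.
dt = '<dt>今开</dt>'
dd = '<dd>11.01</dd>'
# '>今开</dt>' -> drop the leading '>' and the 5-character '</dt>' suffix.
print(re.findall(r'>.*', dt)[0][1:-5])         # -> 今开
# '11.01</dd>' -> drop the 5-character '</dd>' suffix.
print(re.findall(r'\d+\.?.*', dd)[0][0:-5])    # -> 11.01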
(3) Edit pipelines.py to define a class that processes the scraped items
Open pipelines.py; the generated template looks like this:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaidustocksPipeline(object):
    def process_item(self, item, spider):
        return item
Modify the code above as follows:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaidustocksPipeline(object):
    def process_item(self, item, spider):
        return item


# Each pipeline class can define the following three methods.
class BaidustocksInfoPipeline(object):
    # Called when the spider is opened: open the output file.
    def open_spider(self, spider):
        self.f = open('BaiduStockInfo.txt', 'w')

    # Called when the spider is closed or finishes: close the output file.
    def close_spider(self, spider):
        self.f.close()

    # Called for every scraped item; this is the core method of a pipeline.
    def process_item(self, item, spider):
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except:
            pass
        return item
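If a machine-readable output format is preferred, a pipeline along the following lines could write one JSON object per line instead of str(dict(item)). This is a minimal sketch, not part of the tutorial's code: the class name BaidustocksJsonPipeline and the file name BaiduStockInfo.jsonl are illustrative, and whichever pipeline you use must be registered in ITEM_PIPELINES (next step).

# Sketch only: an alternative pipeline that writes JSON lines.
import json


class BaidustocksJsonPipeline(object):
    def open_spider(self, spider):
        # Illustrative file name; UTF-8 keeps the Chinese field names readable.
        self.f = open('BaiduStockInfo.jsonl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        # ensure_ascii=False writes the Chinese field names as-is.
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item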
(4) Edit settings.py so the framework can find the pipeline class we wrote in pipelines.py
Add the following to settings.py (the number 300 is the pipeline's priority; pipelines with lower values run earlier):
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'BaiduStocks.pipelines.BaidustocksInfoPipeline': 300,
}
At this point, the program is complete.
(5) Run the program
In the command line, run: scrapy crawl stocks
The scraped data will be written to BaiduStockInfo.txt in the project directory.
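To check the result, note that each line of BaiduStockInfo.txt is a Python dict literal (that is what str(dict(item)) produces), so it can be read back with ast.literal_eval. A minimal sketch, assuming the file was written by the BaidustocksInfoPipeline above:

import ast

# Read the records back; the pipeline wrote the file with the platform's
# default encoding, so it is opened the same way here.
with open('BaiduStockInfo.txt') as f:
    for line in f:
        record = ast.literal_eval(line.strip())
        print(record.get('股票名称'), len(record))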