Scraping stock information with Scrapy in Python

Please point out any mistakes.
Locating the stock data to be scraped
(Figure 1: screenshot)
Steps
(Figure 2: screenshot)
Configure the concurrent connection options
(Figure 3: screenshot)
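The concurrency options shown in the screenshot live in the project's `settings.py`. A minimal sketch with illustrative values (tune them to the target site; these are standard Scrapy settings, not values taken from the screenshot):

```python
# settings.py (fragment) -- values are illustrative, not prescriptive

# Maximum number of concurrent requests Scrapy performs globally (default: 16)
CONCURRENT_REQUESTS = 32

# Maximum concurrent requests sent to any single domain (default: 8)
CONCURRENT_REQUESTS_PER_DOMAIN = 16

# Delay between requests to the same website, in seconds, to be polite
DOWNLOAD_DELAY = 0.25
```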
Step 1:
(Figure 4: screenshot)
Step 2: write the spider file
Open the corresponding file

# -*- coding: utf-8 -*-
import scrapy
import re


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        # Extract every <a> tag's href from the listing page
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r"[s][hz]\d{6}", href)[0]
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except IndexError:
                # href contains no stock code; skip it
                continue

    # Extract the information from a single Baidu Gupiao stock page
    def parse_stock(self, response):
        infoDict = {}
        stockInfo = response.css('.stock-bets')
        name = stockInfo.css('.bets-name').extract()[0]
        keyList = stockInfo.css('dt').extract()
        valueList = stockInfo.css('dd').extract()
        for i in range(len(keyList)):
            # Strip the surrounding tags: ">最高</dt>"[1:-5] -> "最高"
            key = re.findall(r'>.*</dt>', keyList[i])[0][1:-5]
            try:
                val = re.findall(r'\d+\.?.*</dd>', valueList[i])[0][0:-5]
            except IndexError:
                val = '--'
            infoDict[key] = val
        infoDict.update(
            {'股票名称': re.findall(r'\s.*\(', name)[0].split()[0] +
                        re.findall(r'\>.*\<', name)[0][1:-1]})
        yield infoDict
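To see what the spider's regular expressions actually do, here is a standalone sketch that exercises them on made-up sample strings (the hrefs and HTML fragments below are illustrative, not real page data):

```python
import re

# Hypothetical hrefs as they might appear on the listing page (sample data)
hrefs = [
    "http://quote.example.com/sh600000.html",  # contains a stock code
    "http://example.com/about",                # no stock code -> skipped
]

codes = []
for href in hrefs:
    try:
        # Same pattern as the spider: "s", then "h" or "z", then six digits
        codes.append(re.findall(r"[s][hz]\d{6}", href)[0])
    except IndexError:
        continue
print(codes)  # ['sh600000']

# Sample <dt>/<dd> fragments mimicking the detail-page markup (assumed structure)
key_html = "<dt>最高</dt>"
val_html = "<dd>12.34</dd>"

# ">最高</dt>"[1:-5] drops the leading ">" and the trailing "</dt>"
key = re.findall(r">.*</dt>", key_html)[0][1:-5]
# "12.34</dd>"[0:-5] drops the trailing "</dd>"
val = re.findall(r"\d+\.?.*</dd>", val_html)[0][0:-5]
print(key, val)  # 最高 12.34
```

Note that the slice offsets are tied to the regex including the closing tag in the match; if the pattern stopped at the bare `<`, the `[1:-5]` slice would cut into the value itself.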

Step 3: write the pipelines
(Figure 5: screenshot)

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaidustocksPipeline(object):
    def process_item(self, item, spider):
        return item


class BaidustocksInfoPipeline(object):
    # Called once when the spider is opened
    def open_spider(self, spider):
        self.f = open('BaiduStockInfo.txt', 'w')

    # Called once when the spider is closed
    def close_spider(self, spider):
        self.f.close()

    # Called for every item the spider yields
    def process_item(self, item, spider):
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except Exception:
            pass
        return item
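As the generated comment in the file points out, the pipeline only runs after it is registered under `ITEM_PIPELINES` in `settings.py`. A minimal fragment, assuming the project was named `BaiduStocks` (adjust the dotted path to your project name):

```python
# settings.py (fragment) -- enable the custom pipeline
ITEM_PIPELINES = {
    # lower numbers run first; 300 is the conventional default priority
    'BaiduStocks.pipelines.BaidustocksInfoPipeline': 300,
}
```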

(Figure 6: screenshot)
Finally, run the spider from the command line with the following command:

scrapy crawl stocks
