Stock Information Crawler, Scrapy Version

A rough implementation with Scrapy

 Only in later study did I realize that this example is rather loose and shallow, but as a first hands-on Scrapy exercise it is still very useful; in particular it gave me a rough picture of how the Scrapy framework fits together.
 The URLs used in this example are as follows:

url = "http://quote.eastmoney.com/stocklist.html"
url = "https://gupiao.baidu.com/stock/"

Steps:

  1. Step 1: create the project and the Spider template
  2. Step 2: write the Spider
  3. Step 3: write the Item Pipelines
Step 1

 This step is nothing more than mechanically scaffolding the project:

scrapy startproject BaiduStocks
cd BaiduStocks
scrapy genspider stocks baidu.com
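After these commands the project directory looks roughly like the following (the exact files depend on the Scrapy version; stocks.py is the Spider created by genspider):

BaiduStocks/
    scrapy.cfg
    BaiduStocks/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            stocks.py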
Step 2
  • Configure the stocks.py file
  • Modify how the returned pages are handled
  • Modify how newly discovered URLs are queued for crawling
# -*- coding: utf-8 -*-
import scrapy
import re


class StocksSpider(scrapy.Spider):
    name = "stocks"
    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        # Pull stock codes (sh/sz followed by 6 digits) out of every link on
        # the list page and queue the matching Baidu Gupiao quote page.
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r"[s][hz]\d{6}", href)[0]
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except IndexError:
                # This link contains no stock code; skip it.
                continue

    def parse_stock(self, response):
        # Scrape the dt/dd key-value pairs from the quote box, then add the stock name.
        infoDict = {}
        stockInfo = response.css('.stock-bets')
        name = stockInfo.css('.bets-name').extract()[0]
        keyList = stockInfo.css('dt').extract()
        valueList = stockInfo.css('dd').extract()
        for i in range(len(keyList)):
            key = re.findall(r'>.*', keyList[i])[0][1:-5]
            try:
                val = re.findall(r'\d+\.?.*', valueList[i])[0][0:-5]
            except IndexError:
                val = '--'
            infoDict[key] = val

        infoDict.update(
            {'股票名称': re.findall(r'\s.*\(', name)[0].split()[0] +
                     re.findall(r'\>.*\<', name)[0][1:-1]})
        yield infoDict
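The spider above yields plain dicts, which Scrapy accepts as items directly. If you prefer declared fields, a minimal items.py could look like the sketch below; StockInfoItem and its field names are my own choices for illustration, not part of the original example.

# items.py -- optional sketch with hypothetical field names
import scrapy


class StockInfoItem(scrapy.Item):
    name = scrapy.Field()    # stock name assembled in parse_stock
    fields = scrapy.Field()  # dict of dt/dd key-value pairs from the quote box

With such a class, parse_stock would yield StockInfoItem(name=..., fields=infoDict) and the pipeline would receive that item instead of a raw dict.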
Step 3: Write the Pipelines
  • Configure the pipelines.py file
  • Define the class that processes scraped items
# pipelines.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaidustocksPipeline(object):
    # Default pipeline generated by the project template; not used here.
    def process_item(self, item, spider):
        return item


class BaidustocksInfoPipeline(object):
    def open_spider(self, spider):
        # Called when the spider starts: open the output file.
        self.f = open('BaiduStockInfo.txt', 'w')

    def close_spider(self, spider):
        # Called when the spider finishes: close the output file.
        self.f.close()

    def process_item(self, item, spider):
        # Write each scraped item as one dict per line of text.
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except Exception:
            pass
        return item
# settings.py
ITEM_PIPELINES = {
    'BaiduStocks.pipelines.BaidustocksInfoPipeline': 300,
}
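The number 300 is the pipeline's priority: when several pipelines are enabled, Scrapy runs them in ascending order of this value, which is conventionally chosen in the 0-1000 range.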
Finally, start the crawler:
scrapy crawl stocks
Configuring concurrent connection options

In the settings.py file:
CONCURRENT_REQUESTS: maximum number of concurrent requests the Downloader performs; default 16
CONCURRENT_ITEMS: maximum number of items the Item Pipeline processes concurrently per response; default 100
CONCURRENT_REQUESTS_PER_DOMAIN: maximum number of concurrent requests per target domain; default 8
CONCURRENT_REQUESTS_PER_IP: maximum number of concurrent requests per target IP; default 0, takes effect only when non-zero
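For example, to throttle this crawler you could put something like the following in settings.py; the values are only illustrations, not tuned recommendations from the original.

# settings.py -- concurrency tuning sketch (illustrative values)
CONCURRENT_REQUESTS = 16              # overall cap on concurrent downloads
CONCURRENT_REQUESTS_PER_DOMAIN = 4    # cap per target domain
CONCURRENT_REQUESTS_PER_IP = 0        # 0 leaves the per-IP cap disabled
CONCURRENT_ITEMS = 100                # max items processed concurrently by the pipelines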
