Python 3 [Hands-on Scraping for Beginners]: crawling the Zhimengzhe (织梦者) site with Scrapy and storing the data in MongoDB

This project scrapes 36,638 records from the "Other Programming" (其他编程) sub-category of the site's Programming section.

I wrote it step by step, checking my notes, searching the web, and debugging whenever I got stuck.


One regret: I only crawled a single category rather than several. If you like digging into things, try extending the spider to more categories yourself; it shouldn't be too hard. (One possible starting point is sketched below.)
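A sketch of that idea: build the spider's start URL list from a list of category slugs. Note this is hypothetical; qitabiancheng is the only slug verified in this post, and the page count will differ per category.

# Hypothetical sketch: add other category slugs once you've checked they exist
categories = ['qitabiancheng']
start_urls = ['http://www.zhimengzhe.com/bianchengjiaocheng/{}/index_{}.html'.format(c, n)
              for c in categories
              for n in range(0, 1466)]  # adjust the page range per category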

Here's what the result looks like (screenshot omitted).

Fields scraped: title, title link, description, publish time, publish type, and tags.

Approach: everything we need sits inside the div with class pull-left ltxt w658. That div is a bit fiddly (for me, at least) and took several rounds of debugging. All of the fields listed above live inside it.
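Reconstructing from the XPath selectors in the spider below (so this is an inferred sketch, not a verbatim copy of the site's HTML), each list entry looks roughly like:

<ul class="list-unstyled list-article">
    <li>
        <div class="pull-left ltxt w658">
            <h3><a href="...">article title</a></h3>
            <p>article description</p>
            <div class="tagtime">
                <span>publish time</span>
                <span><a>publish type</a></span>
                <span><a>tag</a></span>
            </div>
        </div>
    </li>
    ...
</ul>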

First, the code in items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class MakedreamItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    # Article title
    articleTitle = scrapy.Field()
    # Article title URL
    articleUrl = scrapy.Field()
    # Article description
    articleDesc = scrapy.Field()
    # Article publish time
    articlePublic = scrapy.Field()
    # Article type
    articleType = scrapy.Field()
    # Article tag
    articleTag = scrapy.Field()
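A quick aside on how these items behave: a scrapy.Item works like a dict whose keys are restricted to the declared fields, which is why the spider below can assign to item['articleTitle'] and the pipelines can call dict(item). A minimal illustration, with a made-up value:

from makedream.items import MakedreamItem

item = MakedreamItem()
item['articleTitle'] = ['some title']  # extract() returns a list, so fields hold lists here
print(dict(item))                      # -> {'articleTitle': ['some title']}
# item['foo'] = 'x' would raise KeyError, since foo is not a declared field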

So far so good. Next up is the spider code; look closely.

# encoding=utf8
import scrapy
from makedream.items import MakedreamItem


class DramingNet(scrapy.Spider):
    # Name used to launch the spider
    name = 'draming'
    # Domains the spider is allowed to crawl
    allowed_domains = ['zhimengzhe.com']
    # Start URLs: one per page of the paginated category listing
    start_urls = ['http://www.zhimengzhe.com/bianchengjiaocheng/qitabiancheng/index_{}.html'.format(n) for n in
                  range(0, 1466)]

    # Parse each listing page
    def parse(self, response):
        base_url = 'http://www.zhimengzhe.com'
        # print(response.body)
        node_list = response.xpath("//ul[@class='list-unstyled list-article']/li")
        for node in node_list:
            item = MakedreamItem()
            nextNode = node.xpath("./div[@class='pull-left ltxt w658']")
            print('*' * 30)
            title = nextNode.xpath('./h3/a/text()').extract()
            link = nextNode.xpath('./h3/a/@href').extract()
            desc = nextNode.xpath('./p/text()').extract()

            # Publish time, type, and tags
            publicTime = nextNode.xpath("./div[@class='tagtime']/span[1]/text()").extract()
            publicType = nextNode.xpath("./div[@class='tagtime']/span[2]/a/text()").extract()
            publicTag = nextNode.xpath("./div[@class='tagtime']/span[3]/a/text()").extract()
            # The href extracted above is site-relative, so prepend the base URL
            titleLink = base_url + ''.join(link)
            item['articleTitle'] = title
            # Article title URL
            item['articleUrl'] = titleLink
            # Article description
            item['articleDesc'] = desc
            # Article publish time
            item['articlePublic'] = publicTime
            # Article type
            item['articleType'] = publicType
            # Article tag
            item['articleTag'] = publicTag
            yield item

These selectors work today, but that's no guarantee they'll keep working: if the site's markup changes, you'll need to adapt them.
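The easiest way to adapt them is to test the XPath interactively in scrapy shell before editing the spider. For example, against one of the listing pages from start_urls:

scrapy shell 'http://www.zhimengzhe.com/bianchengjiaocheng/qitabiancheng/index_1.html'

# then, inside the shell:
>>> nodes = response.xpath("//ul[@class='list-unstyled list-article']/li")
>>> len(nodes)   # how many articles this page holds
>>> nodes[0].xpath("./div[@class='pull-left ltxt w658']/h3/a/text()").extract()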

For the pipelines I wrote two versions: one stores items in MongoDB and the other writes them to a JSON file. The MongoDB configuration isn't hard at all.

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json

import pymongo
from scrapy.utils.project import get_project_settings

# scrapy.conf has been removed from recent Scrapy releases;
# get_project_settings() is the supported way to read settings here
settings = get_project_settings()

# Default pass-through pipeline generated by Scrapy (disabled in ITEM_PIPELINES)
class MakedreamPipeline(object):
    def process_item(self, item, spider):
        return item


# Pipeline that writes each item into MongoDB
class DreamMongo(object):
    def __init__(self):
        self.client = pymongo.MongoClient(host=settings['MONGO_HOST'], port=settings['MONGO_PORT'])
        self.db = self.client[settings['MONGO_DB']]
        self.post = self.db[settings['MONGO_COLL']]

    def process_item(self, item, spider):
        postItem = dict(item)
        # insert_one() replaces the insert() method deprecated in pymongo 3
        self.post.insert_one(postItem)
        return item


# Pipeline that writes each item to a JSON-lines file
class JsonWritePipeline(object):
    def __init__(self):
        self.file = open('织梦网其他编程.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    # Scrapy calls close_spider on pipelines when the spider finishes
    # (spider_closed is a signal name and would never be invoked here)
    def close_spider(self, spider):
        self.file.close()

Note how the settings are imported: scrapy.conf is gone from recent Scrapy releases, so the pipelines read them through get_project_settings() instead.
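A more idiomatic alternative, if you'd rather not read settings at import time, is to let Scrapy hand them to the pipeline through from_crawler. This is just a sketch of the same MongoDB pipeline written that way (the class name DreamMongoFromCrawler is mine, not part of the project):

import pymongo


class DreamMongoFromCrawler(object):
    def __init__(self, host, port, db, coll):
        self.client = pymongo.MongoClient(host=host, port=port)
        self.post = self.client[db][coll]

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this with the running crawler, whose settings we can read
        s = crawler.settings
        return cls(s['MONGO_HOST'], s['MONGO_PORT'], s['MONGO_DB'], s['MONGO_COLL'])

    def process_item(self, item, spider):
        self.post.insert_one(dict(item))
        return item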

Next, let's look at settings.py. The MongoDB options are the main thing added:

# -*- coding: utf-8 -*-

# Scrapy settings for makedream project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'makedream'

SPIDER_MODULES = ['makedream.spiders']
NEWSPIDER_MODULE = 'makedream.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'makedream (+http://www.yourdomain.com)'
# MongoDB configuration
MONGO_HOST = "127.0.0.1"  # host IP
MONGO_PORT = 27017  # port
MONGO_DB = "DreamDB"  # database name
MONGO_COLL = "Dream_info"  # collection name
# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'makedream.middlewares.MakedreamSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'makedream.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # 'makedream.pipelines.MakedreamPipeline': 300,
    'makedream.pipelines.JsonWritePipeline': 300,
    'makedream.pipelines.DreamMongo': 301,  # give each pipeline a distinct priority; lower runs first
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

And now we're almost at the finale:

From the project directory, run scrapy crawl draming (draming being the spider name defined above). Then go check the database, or watch the JSON file being written.

Here's a look at the database (screenshot omitted).

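Using the database and collection names from settings.py, you can check the results from the Mongo shell:

$ mongo
> use DreamDB
> db.Dream_info.count()
> db.Dream_info.findOne()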

Up next: working through other tutorials to learn full-site crawling with Scrapy, which should be a real step up in difficulty (including cookies, proxy IPs, and user-agent handling).
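As a small first step in that direction, the USER_AGENT setting that the generated settings.py leaves commented out can be pointed at a browser-like string (the value below is just an example):

# in settings.py
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'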
