Scrapy is a web-crawling framework. First, install it: open cmd and run pip install scrapy.
In cmd, run scrapy startproject <project name> to create a crawler project; here the project is named myzhifang.
Then, following the prompt, change into the myzhifang folder and generate the spider:
cd myzhifang
scrapy genspider zhifang zhifang.com
This creates the spider named zhifang.
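Running these commands produces the standard Scrapy scaffold:

myzhifang/
├── scrapy.cfg
└── myzhifang/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── zhifang.py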
The key files:
scrapy.cfg: the project's configuration file
spiders folder: holds the spider files
zhifang.py: the spider file itself
items.py: defines the item fields to extract, such as name and price
pipelines.py: saves the scraped data
settings.py: the project settings file
The goal is to scrape the first 10 pages for each listing's title, price, location, image URL, and detail-page URL.
Page 1: https://m.zhifang.com/project/03ah100000000000000.html
Page 2: https://m.zhifang.com/project/03ah100000000000000_2.html
Page 5: https://m.zhifang.com/project/03ah100000000000000_5.html
So only the number before .html changes from page to page, and page 1 has no numeric suffix.
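A quick plain-Python sketch confirms the pattern (note that page 1 carries no suffix in the examples above):

base = 'https://m.zhifang.com/project/03ah100000000000000'
urls = [base + '.html'] + ['%s_%d.html' % (base, n) for n in range(2, 11)]
for u in urls:
    print(u)   # pages 1 through 10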
Inspect the page to find where the title, price, location, image URL, and detail-page URL live. For extraction, Scrapy supports both CSS and XPath selectors, and the syntax is much the same as when scraping with requests.
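Before writing the spider, it helps to test selectors in the Scrapy shell. The queries below use the class names the spider relies on later; treat them as a sketch that may need adjusting if the site's markup changes:

scrapy shell "https://m.zhifang.com/project/03ah100000000000000.html"
>>> response.xpath('//article[@class="article-wrap"]/a/h2/text()').getall()   # all titles
>>> response.css('article.article-wrap a h2::text').getall()                  # same query in CSS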
With the project scaffolded, first adjust settings.py:
# Scrapy settings for myzhifang project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
import random
BOT_NAME = 'myzhifang'
SPIDER_MODULES = ['myzhifang.spiders']
NEWSPIDER_MODULE = 'myzhifang.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'myzhifang (+http://www.yourdomain.com)'
# Obey robots.txt rules
# Disable robots.txt compliance
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# DOWNLOAD_DELAY = 3
DOWNLOAD_DELAY = random.randint(3, 8)  # random base delay of 3-8 seconds between requests
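# Note: randint is evaluated once at startup, so the base delay is fixed for
# the whole run; Scrapy's RANDOMIZE_DOWNLOAD_DELAY (True by default) then
# varies each actual wait between 0.5x and 1.5x of DOWNLOAD_DELAY.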
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
# COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False
# Override the default request headers:
# Set the default request headers
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'
}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
# SPIDER_MIDDLEWARES = {
# 'myzhifang.middlewares.MyzhifangSpiderMiddleware': 543,
# }
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
# 'myzhifang.middlewares.MyzhifangDownloaderMiddleware': 543,
# }
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
# }
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# Configure the item pipeline that saves the scraped data
ITEM_PIPELINES = {
    'myzhifang.pipelines.MyzhifangPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Next, define the fields to extract, in items.py:
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy
class MyzhifangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    price = scrapy.Field()
    address = scrapy.Field()
    pic_url = scrapy.Field()
    k_url = scrapy.Field()
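A scrapy.Item behaves like a dict restricted to the declared fields, which is why the pipeline below can call dict(item). A quick illustration with made-up values:

item = MyzhifangItem()
item['title'] = 'Some listing'   # hypothetical value
print(dict(item))                # {'title': 'Some listing'}
item['foo'] = 'x'                # raises KeyError (undeclared field)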
Then write the pipeline that saves each item to a CSV file, in pipelines.py:
# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import csv
class MyzhifangPipeline:
    def __init__(self):
        # Open the output file; newline='' stops csv from inserting blank rows
        self.file = open('zhifang_hefei.csv', 'w', encoding='utf-8', newline='')
        # Note: the column names must match the item field names, or writerow will fail
        self.columns = ['title', 'price', 'address', 'pic_url', 'k_url']
        self.write = csv.DictWriter(self.file, fieldnames=self.columns)
        # Write the header row
        self.write.writeheader()

    def process_item(self, item, spider):
        self.write.writerow(dict(item))
        return item

    def close_spider(self, spider):
        # close_spider (not close) is the hook Scrapy calls when the spider finishes
        self.file.close()
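As an aside, for a flat CSV like this you could skip the custom pipeline entirely and let Scrapy's built-in feed exports write the file; the DictWriter approach above just gives you explicit control over column order and encoding:

scrapy crawl zhifang -o zhifang_hefei.csv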
Finally, the spider itself, in zhifang.py:
import scrapy
from myzhifang.items import MyzhifangItem
class ZhifangSpider(scrapy.Spider):
    name = 'zhifang'
    allowed_domains = ['m.zhifang.com']
    # The Hefei listing pages shown earlier (03ah...); %d is the page number
    start_urls = ['https://m.zhifang.com/project/03ah100000000000000_1.html']
    url = 'https://m.zhifang.com/project/03ah100000000000000_%d.html'
    page = 1
    def parse(self, response):
        lists = response.xpath('//article[@class="article-wrap"]')
        for lis in lists:
            # Create a fresh item per listing so earlier yields aren't overwritten
            item = MyzhifangItem()
            # Property name
            item['title'] = lis.xpath('./a/h2/text()').get()
            # Unit price
            item['price'] = lis.xpath('./a/div[@class="content"]/p[@class="price"]/strong/text()').get()
            # Location
            item['address'] = lis.xpath('./a/div[@class="content"]/p[@class="area"]/text()').get()
            # Image URL (the src is protocol-relative, hence the prefix)
            item['pic_url'] = 'http:' + lis.xpath('./a/div[@class="content"]/img/@src').get()
            # Detail-page URL
            item['k_url'] = 'https://m.zhifang.com/' + lis.xpath('./a/@href').get()
            yield item
        # Follow the next page until the first ten pages have been crawled
        if self.page < 10:
            self.page += 1
            new_url = self.url % self.page
            yield scrapy.Request(url=new_url, callback=self.parse)
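An equivalent way to cover the ten pages is to queue them all up front by overriding start_requests instead of chaining requests from parse; a minimal sketch, assuming the same URL pattern (drop the page-increment block at the end of parse if you use it):

    def start_requests(self):
        # Schedule pages 1-10 immediately; Scrapy throttles them per the settings
        for n in range(1, 11):
            yield scrapy.Request(url=self.url % n, callback=self.parse)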
To crawl the first ten pages, run the spider from the terminal:
scrapy crawl zhifang
Here zhifang is the spider name you created. The program runs and the pipeline writes each listing to zhifang_hefei.csv.
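Once the run finishes, a quick standard-library check confirms the rows landed in the file the pipeline wrote:

import csv

with open('zhifang_hefei.csv', encoding='utf-8') as f:
    rows = list(csv.DictReader(f))
print(len(rows), 'rows scraped')
print(rows[0])  # the first listing as a dict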