Install Scrapy
Enter the following in the terminal/cmd:
pip install scrapy
Create the project
**PyCharm is the recommended IDE**
Enter the following in cmd/terminal (zhaopin is the project name):
scrapy startproject zhaopin
Then change into the zhaopin project directory:
cd zhaopin
scrapy genspider -t crawl zhaopin www.zhaopin.com
-t crawl tells Scrapy to use the crawl template. Scrapy ships with four spider templates; since we want to crawl the whole site, the crawl template is the best fit. zhaopin is the spider name, and the last argument is the domain of the site we want to crawl (make sure this is correct, otherwise pages may be filtered out by the rules).
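If you want to see which templates your Scrapy installation provides (the built-in ones are basic, crawl, csvfeed and xmlfeed), you can list them with:
scrapy genspider -l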
Our spider code lives in the spiders directory. Don't worry about the other files for now; I'll explain them later.
zhaopin
│   scrapy.cfg
│
└─── zhaopin
    │   items.py
    │   middlewares.py
    │   pipelines.py
    │   settings.py
    │
    └─── spiders
        │   zhaopin.py
Confirm the requirements
Based on our crawling goals, we mainly want to collect the following information:
Job page: job URL, job title, salary, location, education requirement, number of openings
Company page: company URL, company name, company size, industry, number of open positions, number of interview invitations
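To keep these fields organized we can define them in items.py. This is a minimal sketch; the class and field names below are my own choices for illustration, not something Scrapy generates:

```python
# items.py -- a minimal sketch; class and field names are illustrative only
import scrapy


class JobItem(scrapy.Item):
    url = scrapy.Field()        # job posting URL
    title = scrapy.Field()      # job title
    salary = scrapy.Field()     # salary
    area = scrapy.Field()       # location
    education = scrapy.Field()  # education requirement
    headcount = scrapy.Field()  # number of openings


class CompanyItem(scrapy.Item):
    url = scrapy.Field()         # company page URL
    name = scrapy.Field()        # company name
    size = scrapy.Field()        # company size
    industry = scrapy.Field()    # industry
    open_jobs = scrapy.Field()   # number of positions currently open
    interviews = scrapy.Field()  # number of interview invitations
```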
Take a look at the zhaopin.py file that Scrapy generated for us:
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class ZhaopinSpider(CrawlSpider):
    name = 'zhaopin'  # this is our spider's name
    allowed_domains = ['www.zhaopin.com']
    start_urls = ['http://www.zhaopin.com/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        #item['domain_id'] = response.xpath('//input[@id="sid"]/@value').get()
        #item['name'] = response.xpath('//div[@id="name"]').get()
        #item['description'] = response.xpath('//div[@id="description"]').get()
        return item
Add an s to the http in start_urls, changing it to https.
rules defines the CrawlSpider's link extraction and filtering rules; we can configure them to suit our own needs.
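For reference, once we know the real URL patterns of the job and company pages, the rules might look something like the sketch below. The allow= patterns and the parse_job / parse_company callbacks are assumptions for illustration only; the actual regular expressions have to be checked against www.zhaopin.com itself:

```python
# A sketch only: the allow= patterns and callback names are assumed, not verified against the site.
rules = (
    # follow listing/search pages without parsing them
    Rule(LinkExtractor(allow=r'/sou/'), follow=True),
    # job detail pages -> handled by a (hypothetical) parse_job callback
    Rule(LinkExtractor(allow=r'/jobs/'), callback='parse_job', follow=True),
    # company pages -> handled by a (hypothetical) parse_company callback
    Rule(LinkExtractor(allow=r'/company/'), callback='parse_company', follow=True),
)
```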
Add headers
Since Zhilian validates the User-Agent, we need to add request headers.
Create a start_requests method in the ZhaopinSpider class. Scrapy reads start_requests first by default to build the initial requests, so this is where we can add the header logic.
    def start_requests(self):
        headers = {
            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'}
        for url in self.start_urls:
            yield scrapy.Request(url=url, headers=headers)
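Alternatively, instead of overriding start_requests, the same User-Agent can be set once for the whole project in settings.py via Scrapy's USER_AGENT setting, which is then applied to every request automatically:

```python
# settings.py
USER_AGENT = ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 '
              '(KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36')
```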
Run Scrapy
cd into the spiders directory, then run in the terminal:
scrapy crawl zhaopin (zhaopin is the spider name)
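While testing, you can also let Scrapy's built-in feed export save the scraped items to a file, for example:
scrapy crawl zhaopin -o items.json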
When you see output like the following, the Scrapy project is up and running:
2019-04-08 21:52:56 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: zhaopin)
2019-04-08 21:52:56 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 18.7.0, Python 3.7.2 (default, Dec 29 2018, 00:00:04) - [Clang 4.0.1 (tags/RELEASE_401/final)], pyOpenSSL 18.0.0 (OpenSSL 1.1.1b 26 Feb 2019), cryptography 2.6.1, Platform Darwin-18.0.0-x86_64-i386-64bit
2019-04-08 21:52:56 [scrapy.crawler] INFO: Overridden settings: {'AUTOTHROTTLE_START_DELAY': 10, 'BOT_NAME': 'zhaopin', 'NEWSPIDER_MODULE': 'zhaopin.spiders', 'SPIDER_MODULES': ['zhaopin.spiders']}
2019-04-08 21:52:56 [scrapy.extensions.telnet] INFO: Telnet Password: f0697536fbe12974
2019-04-08 21:52:56 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2019-04-08 21:52:56 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-04-08 21:52:56 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-04-08 21:52:56 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-04-08 21:52:56 [scrapy.core.engine] INFO: Spider opened
2019-04-08 21:52:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-04-08 21:52:56 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023