Scraping job listings from 51job (前程无忧) with Python

Welcome to follow my WeChat official account, AI进阶者, where I share technical content every day.

Many of you are probably facing the job hunt. I'm currently a graduate student in Shanghai, and next year I'll be looking for a spring internship and a full-time position in the autumn recruitment season. My major is mechanical engineering, and given this year's market (car makers are hiring computer science graduates rather than mechanical ones), it is, in a word, tough.

Today we'll use Python to scrape job listings from 51job to help prepare for the job hunt.

  • Step 1: open the site we want to analyze

  • Step 2: do some simple network capture on the page with Chrome DevTools

Anyone who has played with crawlers knows these are the basics: (1) right-click the page and choose Inspect, (2) switch to the Network tab, (3) click the captured request for the page in the list on the left.
From the capture we can see that the normal URL for the listing page is https://search.51job.com/list/020000,000000,0000,00,9,99,%2B,2,1.html
Next, type the job you want to search for into the box and page through the results to see where that information shows up in the URL.
For example, searching for 大数据 (big data) and then paging forward, the URL changes like this:
https://search.51job.com/list/020000,000000,0000,00,9,99,%25E5%25A4%25A7%25E6%2595%25B0%25E6%258D%25AE,2,1.html

https://search.51job.com/list/020000,000000,0000,00,9,99,%25E5%25A4%25A7%25E6%2595%25B0%25E6%258D%25AE,2,2.html

https://search.51job.com/list/020000,000000,0000,00,9,99,%25E5%25A4%25A7%25E6%2595%25B0%25E6%258D%25AE,2,3.html

From this we can infer that the %25E5%25A4%25A7%25E6%2595%25B0%25E6%258D%25AE after https://search.51job.com/list/020000,000000,0000,00,9,99, encodes the keyword we searched for, while the number between ,2, and .html is the page number.
So we can build the URL to crawl:

 url = 'http://search.51job.com/list/000000,000000,0000,00,9,99,' + key + ',2,' + str(i) + '.html'
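Where does the %25E5%25A4%25A7... come from? It is the keyword URL-encoded twice: the % signs produced by the first pass get encoded again. That is exactly what the complete code at the end does with urllib.parse.quote; a quick check:

from urllib import parse

# encode the keyword twice, as 51job's search URL expects
key = parse.quote(parse.quote('大数据'))
print(key)  # %25E5%25A4%25A7%25E6%2595%25B0%25E6%258D%25AE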

We also build the request headers based on what the browser sent during the capture:

headers = {'Host': 'search.51job.com',
           'Upgrade-Insecure-Requests': '1',
           'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'}
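As a quick sanity check that requests go through with these headers, here is a throwaway snippet (the listing-page URL is the one captured above; note that headers must be passed by keyword):

import requests

test_url = 'https://search.51job.com/list/020000,000000,0000,00,9,99,%2B,2,1.html'
resp = requests.get(test_url, headers=headers, timeout=10)  # pass headers as a keyword argument
resp.encoding = 'gbk'  # 51job pages are GBK-encoded
print(resp.status_code, len(resp.text))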

Since we want the details of every posting, we first extract each posting's link from the page source and then follow it.

Click the arrow in the top-left corner of DevTools and select a posting to find where it sits in the source; from there we pull out its URL so we can scrape each posting's details.

url = 'http://search.51job.com/list/000000,000000,0000,00,9,99,' + key + ',2,' + str(i) + '.html'
r = requests.get(url, headers=headers, timeout=10)  # headers must be passed as a keyword argument
r.encoding = 'gbk'  # 51job pages are GBK-encoded
# extract each posting's detail-page URL from the result list
reg = re.compile(r'class="t1 ">.*?<a target="_blank" title=".*?" href="(.*?)".*?<span class="t2">', re.S)
links = re.findall(reg, r.text)
return links

It's not hard to see that the regex for extracting each posting's link is

class="t1 ">.*?<a target="_blank" title=".*?" href="(.*?)".*?<span class="t2">

Then we open a single posting's page and, by inspecting its source, extract the information we want.

Next we parse the page with lxml:

r1 = requests.get(link, headers=headers, timeout=10)
r1.encoding = 'gbk'
t1 = html.fromstring(r1.text)  # parse the detail page with lxml
job = t1.xpath('//div[@class="tHeader tHjob"]//h1/text()')[0].strip()
companyname = t1.xpath('//p[@class="cname"]/a/text()')[0].strip()
print('工作:', job)
print('公司:', companyname)
area = t1.xpath('//div[@class="tHeader tHjob"]//p[@class="msg ltype"]/text()')[0].strip()
print('地区', area)
workyear = t1.xpath('//div[@class="tHeader tHjob"]//p[@class="msg ltype"]/text()')[1].strip()
print('工作经验', workyear)
education = t1.xpath('//div[@class="tHeader tHjob"]//p[@class="msg ltype"]/text()')[2].strip()
print('学历:', education)
require_people = t1.xpath('//div[@class="tHeader tHjob"]//p[@class="msg ltype"]/text()')[3].strip()
print('人数', require_people)
date = t1.xpath('//div[@class="tHeader tHjob"]//p[@class="msg ltype"]/text()')[4].strip()
print('发布日期', date)
# The HTML-tag string literals in the original chain of .replace() calls were lost
# when the article was formatted. 'bmsg job_msg inbox' / 'tmsg inbox' are assumed to be
# the description containers on 51job detail pages, and a single regex strips the tags
# and whitespace that the replace() chain used to remove.
describes = re.findall(re.compile(r'<div class="bmsg job_msg inbox">(.*?)<div class="mt10"', re.S), r1.text)
job_describe = re.sub(r'<[^>]+>|&nbsp;|\s+', '', describes[0])
print('职位信息', job_describe)
describes1 = re.findall(re.compile(r'<div class="tmsg inbox">(.*?)</div>', re.S), r1.text)
company_describe = re.sub(r'<[^>]+>|&nbsp;|\s+', '', describes1[0])
print('公司信息', company_describe)
companytypes = t1.xpath('//div[@class="com_tag"]/p/text()')[0]
print('公司类型', companytypes)
company_people = t1.xpath('//div[@class="com_tag"]/p/text()')[1]
print('公司人数', company_people)
salary = t1.xpath('//div[@class="cn"]/h1/strong/text()')
salary = re.findall(re.compile(r'<div class="cn">.*?<strong>(.*?)</strong>', re.S), r1.text)[0]
print('薪水', salary)
labels = t1.xpath('//div[@class="jtag"]/div[@class="t1"]/span/text()')
label = ''
for i in labels:
    label = label + ' ' + i
print('待遇', label)

The next step is to tie these pieces together in a main function and save the results to a local file.
Let's see how it runs.

The crawl is reasonably fast. The main function loops over pages 1 to 19; it could be sped up further with a bit of multithreading, as sketched after the code block below.

if __name__ == '__main__':
    datasets = pd.DataFrame()  # accumulate every page here so all pages end up in the CSV
    for a in range(1, 20):
        print('正在爬取第{}页信息'.format(a))
        # time.sleep(random.random() + random.randint(1, 5))
        links = get_links(a)
        print(links)
        for link in links:
            # time.sleep(random.random() + random.randint(0, 1))
            print(link)
            state, series = get_content(link)
            if state == 1:
                # note: DataFrame.append was removed in pandas 2.x; use pd.concat there
                datasets = datasets.append(series, ignore_index=True)
                print('datasets---------', datasets)

        print('第{}页信息爬取完成\n'.format(a))
    print(datasets)
    datasets.to_csv('51job_test.csv', index=False, index_label=False,
                    encoding='utf_8_sig', mode='a+')
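Here is a minimal multithreading sketch, reusing get_links and get_content from this article as an alternative main block. The worker count is an arbitrary example value, and pd.concat stands in for the deprecated DataFrame.append:

from concurrent.futures import ThreadPoolExecutor

import pandas as pd

def crawl_page(page):
    """Fetch one result page and return a list of Series, one per posting."""
    rows = []
    for link in get_links(page):
        state, series = get_content(link)
        if state == 1:
            rows.append(series)
    return rows

if __name__ == '__main__':
    datasets = pd.DataFrame()
    # 5 workers is just an example; too many parallel requests may get you blocked
    with ThreadPoolExecutor(max_workers=5) as pool:
        for rows in pool.map(crawl_page, range(1, 20)):
            for series in rows:
                datasets = pd.concat([datasets, series.to_frame().T], ignore_index=True)
    datasets.to_csv('51job_test.csv', index=False, encoding='utf_8_sig')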

Let's take a look at the file saved locally.
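A quick way to inspect it, assuming the script has already written 51job_test.csv:

import pandas as pd

# read the scraped data back; utf_8_sig matches the encoding used when saving
df = pd.read_csv('51job_test.csv', encoding='utf_8_sig')
print(df.shape)
print(df[['公司名称', '工作', 'salary', '地区']].head())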

With that, we've scraped the listings for big-data positions. To scrape and save other positions, just change the value of key.
The complete code is below so you can get hands-on with the crawler:

import re
import time, random
import requests
from lxml import html
from urllib import parse
import xlwt
import pandas as pd

key = '大数据'
key = parse.quote(parse.quote(key))
headers = {'Host': 'search.51job.com',
           'Upgrade-Insecure-Requests': '1',
           'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'}

def get_links(i):
    url = 'http://search.51job.com/list/000000,000000,0000,00,9,99,' + key + ',2,' + str(i) + '.html'
    r = requests.get(url, headers=headers, timeout=10)  # headers must be passed as a keyword argument
    r.encoding = 'gbk'  # 51job pages are GBK-encoded
    # extract each posting's detail-page URL from the result list
    reg = re.compile(r'class="t1 ">.*?<a target="_blank" title=".*?" href="(.*?)".*?<span class="t2">', re.S)
    links = re.findall(reg, r.text)
    return links


# Parse one posting's detail page (the main block below handles multiple pages and saves to a file)
def get_content(link):
    r1 = requests.get(link, headers=headers, timeout=10)  # headers as a keyword argument
    r1.encoding = 'gbk'
    t1 = html.fromstring(r1.text)
    try:
        job = t1.xpath('//div[@class="tHeader tHjob"]//h1/text()')[0].strip()
        companyname = t1.xpath('//p[@class="cname"]/a/text()')[0].strip()
        print('工作:', job)
        print('公司:', companyname)
        area = t1.xpath('//div[@class="tHeader tHjob"]//p[@class="msg ltype"]/text()')[0].strip()
        print('地区', area)
        workyear = t1.xpath('//div[@class="tHeader tHjob"]//p[@class="msg ltype"]/text()')[1].strip()
        print('工作经验', workyear)
        education = t1.xpath('//div[@class="tHeader tHjob"]//p[@class="msg ltype"]/text()')[2].strip()
        print('学历:', education)
        require_people = t1.xpath('//div[@class="tHeader tHjob"]//p[@class="msg ltype"]/text()')[3].strip()
        print('人数', require_people)
        date = t1.xpath('//div[@class="tHeader tHjob"]//p[@class="msg ltype"]/text()')[4].strip()
        print('发布日期', date)
        # The HTML-tag string literals in the original chain of .replace() calls were lost
        # in the article's formatting. 'bmsg job_msg inbox' / 'tmsg inbox' are assumed to be
        # the description containers on 51job detail pages, and a single regex strips the
        # tags and whitespace that the replace() chain used to remove.
        describes = re.findall(re.compile(r'<div class="bmsg job_msg inbox">(.*?)<div class="mt10"', re.S), r1.text)
        job_describe = re.sub(r'<[^>]+>|&nbsp;|\s+', '', describes[0])
        print('职位信息', job_describe)
        describes1 = re.findall(re.compile(r'<div class="tmsg inbox">(.*?)</div>', re.S), r1.text)
        company_describe = re.sub(r'<[^>]+>|&nbsp;|\s+', '', describes1[0])
        print('公司信息', company_describe)
        companytypes = t1.xpath('//div[@class="com_tag"]/p/text()')[0]
        print('公司类型', companytypes)
        company_people = t1.xpath('//div[@class="com_tag"]/p/text()')[1]
        print('公司人数', company_people)
        salary = t1.xpath('//div[@class="cn"]/h1/strong/text()')
        salary = re.findall(re.compile(r'<div class="cn">.*?<strong>(.*?)</strong>', re.S), r1.text)[0]
        print('薪水', salary)
        labels = t1.xpath('//div[@class="jtag"]/div[@class="t1"]/span/text()')
        label = ''
        for i in labels:
            label = label + ' ' + i
        print('待遇', label)
        datalist = [str(area), str(companyname), str(job), str(education), str(salary),
                    str(label), str(workyear), str(require_people), str(date),
                    str(job_describe), str(company_describe), str(companytypes),
                    str(company_people), str(link)]
        series = pd.Series(datalist, index=['地区', '公司名称', '工作', 'education', 'salary',
                                            'welfare', '工作经验', '需求人数', '发布时间', '工作介绍',
                                            '公司介绍', '公司规模', '公司人数', '链接'])
        return (1, series)
    except IndexError:
        print('error,未定位到有效信息导致索引越界')
        series = None
        return (-1, series)


if __name__ == '__main__':
    datasets = pd.DataFrame()  # accumulate every page here so all pages end up in the CSV
    for a in range(1, 20):
        print('正在爬取第{}页信息'.format(a))
        # time.sleep(random.random() + random.randint(1, 5))
        links = get_links(a)
        print(links)
        for link in links:
            # time.sleep(random.random() + random.randint(0, 1))
            print(link)
            state, series = get_content(link)
            if state == 1:
                # note: DataFrame.append was removed in pandas 2.x; use pd.concat there
                datasets = datasets.append(series, ignore_index=True)
                print('datasets---------', datasets)

        print('第{}页信息爬取完成\n'.format(a))
    print(datasets)
    datasets.to_csv('51job_test.csv', index=False, index_label=False,
                    encoding='utf_8_sig', mode='a+')

My WeChat official account has the full walkthrough; you're welcome to follow it and learn together!
