Python's overnight popularity across the country owes a lot to the training schools on every street corner.
One day it's "Python AI, the future is here", the next it's "become top-tier AI and machine-learning talent in 4 months", and some even promise to make you a "crawler analyst" in one week...
At my age I've grown timid; I only dare grumble in my own official account. Out in public, you're all welcome to keep hyping each other up and making noise; I'll just watch and say nothing.
Crawler articles are everywhere online: scraped hundreds of thousands of records, pulled down tens of millions of comments in one run. Sounds glamorous and impressive.
But how many actual job postings for "crawler engineer" do you see? Why so few? Because data costs money!
Look at the big vendors selling network solutions: how do they solve anything? By first buying the carriers' data and analyzing it. There's no such thing as a free lunch.
Free IP proxy pools like IPProxys? Please. How many people have actually used a free proxy pool lately? Go look at the free pools out there and count how many addresses still work! Learning to crawl adds one more skill to your belt, but I'd advise against going too far down this road. Scrape small things for fun, learn some networking, pick up a few page-parsing tricks, and leave it at that. The fanciest crawler framework in the world won't cure the pain of having no data.
Enough rambling; back to the topic.
After a whole rant about what's wrong with crawlers, today's article is... a crawler. Slapping my own face?
Actually, I just want to talk about how sites present and protect their data, and Boss直聘 happens to do it rather well. How well? Let's take it apart piece by piece...
Daxing'anling
I picked Daxing'anling, up in Heilongjiang province, to see whether anyone there is hiring Python developers. Most systems would simply tell you "no matching data found", but Boss直聘 quietly shows you Python postings from all of Heilongjiang instead. Sneaky.
Nationwide data
This part almost fooled me. I naively assumed the entire country had only 300 Python postings (30 per page, 10 pages in total).
Then I went back and checked Xi'an: also exactly 10 pages. I tried tweaking the GET request parameters; it got me nowhere.
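You can see the shape of what's being tweaked without touching the site: the listing page takes a `query` and a `page` parameter, and past page 10 the server simply stops serving data no matter what you send. A minimal sketch of the URL being probed (the city code here is a made-up placeholder, and only the stdlib is used to show the encoding):

```python
from urllib.parse import urlencode

BASE = "https://www.zhipin.com/c100010000/"  # hypothetical city code, for illustration

def listing_url(query, page):
    """Build the listing URL for a keyword and page number."""
    return BASE + "?" + urlencode({"query": query, "page": page})

print(listing_url("python", 10))  # the last page that returns data
print(listing_url("python", 11))  # a perfectly valid URL, but the server serves nothing past the cap
```

The cap lives server-side, which is why changing the parameters "got me nowhere": the request is fine, the answer just stays empty.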
What is the cap for? Think about it... On one hand it stops a crawler from hoovering up everything in one pass, but that's flattering yourself. The real reason is business!
With so many companies posting jobs every day, anything outside the top 100 is invisible unless someone searches for the company by name. So how do you rank higher? Two words: pay up. Be a Boss直聘 member and your postings float to the top...
Odds and ends
The "ban" list (封"神"榜)
Feels like my life has reached its climax; feels like my life has reached its peak.
Boss直聘's servers now carry a trace of me. What an honor. Want to be like me? It only takes 3 seconds...
Push your request count past 1,000 within three seconds and you are banned, guaranteed!
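So the crawler later in this post does the opposite of a 3-second burst: it sleeps between requests. The same idea as a reusable sketch (the delay values are arbitrary choices, not Boss直聘's actual thresholds):

```python
import random
import time

def polite_fetch_all(fetch, urls, min_delay=5.0, max_delay=15.0):
    """Call fetch(url) for each URL, sleeping a random interval between
    requests so the traffic doesn't look like one scripted burst."""
    results = []
    for i, url in enumerate(urls):
        results.append(fetch(url))
        if i < len(urls) - 1:  # no need to sleep after the last request
            time.sleep(random.uniform(min_delay, max_delay))
    return results
```

Randomizing the interval matters almost as much as its size: a perfectly regular ten-second heartbeat is nearly as easy to fingerprint as a burst.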
pip install fake-useragent
Once installed, it can hand you all kinds of User-Agents, though honestly a few dozen saved locally are more than enough... The plan: crawl a given occupation across the country's hot cities, then compare salaries city by city.
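The local version is exactly what `getheaders()` in the full script below does, just in miniature (the two strings here are placeholders for your own saved list; `fake_useragent`'s `UserAgent().random` is the installed alternative):

```python
import random

# Placeholder strings -- swap in your own few dozen saved User-Agents
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0",
]

def random_headers():
    """Return a headers dict with a randomly chosen User-Agent,
    so consecutive requests don't all share the same fingerprint."""
    return {"User-Agent": random.choice(USER_AGENTS)}
```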
Want a different occupation? Just change the keyword.....
Naturally I care about Python, so the parsed raw data looks like this:
First, the Python salary ranking:
Python salary ranking
Look where Xi'an sits; the average salary really is low.....
As for ranges like "15-20K"? Rest assured: 90% of hires will get the 15K, and the other 10% isn't you. It isn't.
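Which is why the crawler below keeps only the lower bound of each range: "15-20K" becomes 15, the number most hires actually see. The one-liner it uses, pulled out as a function:

```python
def salary_floor(salary_text):
    """Lower bound of a salary range like '15-20K', as an int in K."""
    return int(salary_text.split('-')[0].replace('K', ''))

print(salary_floor("15-20K"))  # 15
print(salary_floor("8-13K"))   # 8
```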
Now Ruby's turn:
Ruby salary ranking
Looks a lot higher than Python... but is it? It's like Baidu's 30K+ per-capita average: go by the mean, and a handful of executives on nine-figure pay pull the whole number up....
Still, one thing is consistent: Xi'an salaries are low......
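That Baidu point is the classic mean-versus-median trap: a few huge salaries drag the average up while the typical employee's number barely moves. A toy illustration with made-up figures:

```python
import statistics

# Nine rank-and-file monthly salaries plus one executive, in K
salaries = [15, 16, 14, 15, 17, 16, 15, 14, 16, 300]

print(statistics.mean(salaries))    # 43.8 -- looks great on a recruiting poster
print(statistics.median(salaries))  # 15.5 -- closer to what most people get
```

A median (or a trimmed mean) would make any of these city rankings less flattering but more honest.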
There isn't much to explain in the code; the longest part is probably my User-Agent list....
# -*- coding: utf-8 -*-
# @Author : 王翔
# @JianShu : 清风Python
# @Date : 2019/6/14 22:23
# @Software : PyCharm
# @version :Python 3.6.8
# @File : BossCrawler.py
import requests
from bs4 import BeautifulSoup
import csv
import random
import time
import argparse
from pyecharts.charts import Line
import pandas as pd

class BossCrawler:
    def __init__(self, query):
        self.query = query
        self.filename = 'boss_info_%s.csv' % self.query
        self.city_code_list = self.get_city()
        self.boss_info_list = []
        self.csv_header = ["city", "profession", "salary", "company"]

    @staticmethod
    def getheaders():
        user_list = [
            "Opera/9.80 (X11; Linux i686; Ubuntu/14.10) Presto/2.12.388 Version/12.16",
            "Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14",
            "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0 Opera 12.14",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0) Opera 12.14",
            "Opera/12.80 (Windows NT 5.1; U; en) Presto/2.10.289 Version/12.02",
            "Opera/9.80 (Windows NT 6.1; U; es-ES) Presto/2.9.181 Version/12.00",
            "Opera/9.80 (Windows NT 5.1; U; zh-sg) Presto/2.9.181 Version/12.00",
            "Opera/12.0(Windows NT 5.2;U;en)Presto/22.9.168 Version/12.00",
            "Opera/12.0(Windows NT 5.1;U;en)Presto/22.9.168 Version/12.00",
            "Mozilla/5.0 (Windows NT 5.1) Gecko/20100101 Firefox/14.0 Opera/12.0",
            "Opera/9.80 (Windows NT 6.1; WOW64; U; pt) Presto/2.10.229 Version/11.62",
            "Opera/9.80 (Windows NT 6.0; U; pl) Presto/2.10.229 Version/11.62",
            "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
            "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; de) Presto/2.9.168 Version/11.52",
            "Opera/9.80 (Windows NT 5.1; U; en) Presto/2.9.168 Version/11.51",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; de) Opera 11.51",
            "Opera/9.80 (X11; Linux x86_64; U; fr) Presto/2.9.168 Version/11.50",
            "Opera/9.80 (X11; Linux i686; U; hu) Presto/2.9.168 Version/11.50",
            "Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11",
            "Opera/9.80 (X11; Linux i686; U; es-ES) Presto/2.8.131 Version/11.11",
            "Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/5.0 Opera 11.11",
            "Opera/9.80 (X11; Linux x86_64; U; bg) Presto/2.8.131 Version/11.10",
            "Opera/9.80 (Windows NT 6.0; U; en) Presto/2.8.99 Version/11.10",
            "Opera/9.80 (Windows NT 5.1; U; zh-tw) Presto/2.8.131 Version/11.10",
            "Opera/9.80 (Windows NT 6.1; Opera Tablet/15165; U; en) Presto/2.8.149 Version/11.1",
            "Opera/9.80 (X11; Linux x86_64; U; Ubuntu/10.10 (maverick); pl) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (X11; Linux i686; U; ja) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (X11; Linux i686; U; fr) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (Windows NT 6.1; U; zh-tw) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (Windows NT 6.1; U; sv) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (Windows NT 6.1; U; en-US) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (Windows NT 6.1; U; cs) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (Windows NT 6.0; U; pl) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (Windows NT 5.1; U;) Presto/2.7.62 Version/11.01",
            "Opera/9.80 (Windows NT 5.1; U; cs) Presto/2.7.62 Version/11.01",
            "Mozilla/5.0 (Windows NT 6.1; U; nl; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6 Opera 11.01",
            "Mozilla/5.0 (Windows NT 6.1; U; de; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6 Opera 11.01",
            "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; de) Opera 11.01",
            "Opera/9.80 (X11; Linux x86_64; U; pl) Presto/2.7.62 Version/11.00",
            "Opera/9.80 (X11; Linux i686; U; it) Presto/2.7.62 Version/11.00",
            "Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.6.37 Version/11.00",
            "Opera/9.80 (Windows NT 6.1; U; pl) Presto/2.7.62 Version/11.00",
            "Opera/9.80 (Windows NT 6.1; U; ko) Presto/2.7.62 Version/11.00",
            "Opera/9.80 (Windows NT 6.1; U; fi) Presto/2.7.62 Version/11.00",
            "Opera/9.80 (Windows NT 6.1; U; en-GB) Presto/2.7.62 Version/11.00",
            "Opera/9.80 (Windows NT 6.1 x64; U; en) Presto/2.7.62 Version/11.00",
            "Opera/9.80 (Windows NT 6.0; U; en) Presto/2.7.39 Version/11.00"
        ]
        user_agent = random.choice(user_list)
        headers = {'User-Agent': user_agent}
        return headers

    def get_city(self):
        headers = self.getheaders()
        r = requests.get("http://www.zhipin.com/wapi/zpCommon/data/city.json", headers=headers)
        data = r.json()
        return [city['code'] for city in data['zpData']['hotCityList'][1:]]

    def get_response(self, url, params=None):
        headers = self.getheaders()
        r = requests.get(url, headers=headers, params=params)
        r.encoding = 'utf-8'
        soup = BeautifulSoup(r.text, "lxml")
        return soup

    def get_url(self):
        for city_code in self.city_code_list:
            url = "https://www.zhipin.com/c%s/" % city_code
            self.per_page_info(url)
            time.sleep(10)

    def per_page_info(self, url):
        for page_num in range(1, 11):
            params = {"query": self.query, "page": page_num}
            soup = self.get_response(url, params)
            lines = soup.find('div', class_='job-list').select('ul > li')
            if not lines:
                # No more results for this city; move on to the next one
                return
            for line in lines:
                info_primary = line.find('div', class_="info-primary")
                city = info_primary.find('p').text.split(' ')[0]
                job = info_primary.find('div', class_="job-title").text
                # Skip postings that don't actually match the keyword
                if self.query.lower() not in job.lower():
                    continue
                salary = info_primary.find('span', class_="red").text.split('-')[0].replace('K', '')
                company = line.find('div', class_="info-company").find('a').text.lower()
                result = dict(zip(self.csv_header, [city, job, salary, company]))
                print(result)
                self.boss_info_list.append(result)

    def write_result(self):
        with open(self.filename, "w+", encoding='utf-8', newline='') as f:
            f_csv = csv.DictWriter(f, self.csv_header)
            f_csv.writeheader()
            f_csv.writerows(self.boss_info_list)

    def read_csv(self):
        data = pd.read_csv(self.filename, sep=",", header=0)
        result = (data.groupby('city')['salary'].mean().round(1)
                  .to_frame('salary').reset_index()
                  .sort_values('salary', ascending=False))
        print(result)
        charts_bar = (
            Line()
            .set_global_opts(
                title_opts={"text": "全国%s薪酬榜" % self.query})
            .add_xaxis(result.city.values.tolist())
            .add_yaxis("salary", result.salary.values.tolist())
        )
        charts_bar.render('%s.html' % self.query)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("-k", "--keyword", help="keyword to search for")
    args = parser.parse_args()
    if not args.keyword:
        parser.print_help()
    else:
        main = BossCrawler(args.keyword)
        main.get_url()
        main.write_result()
        main.read_csv()
That's it for today. If you found this helpful, a like is appreciated, and you're welcome to follow my official account, 清风Python.
Source: 清风Python