Preparation for Web Scraping
01 A crawler simply simulates a browser to fetch content. The crawling workflow has three steps: data fetching, data parsing, and data storage.
Data fetching: mobile or PC pages
Data parsing: regular expressions
Data storage: save to a file or to a database
02. Related Python libraries
Crawling needs two library modules: requests and re.
1. The requests library
requests is a simple, easy-to-use HTTP library. It is much more concise than urllib, but it is a third-party library, so it has to be installed; installation tutorial links are given at the end of the article (all links are collected at the end, which should make them easier to find).
HTTP features supported by the requests library:
keep-alive and connection pooling, cookie-persistent sessions, multipart file uploads, chunked requests, and more
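As a quick illustration of keep-alive and cookie persistence, a requests.Session reuses the underlying connection and carries cookies across requests. This is only a minimal sketch; httpbin.org is used as a stand-in test site, not a site from the original article:

import requests

# A Session keeps the connection alive and remembers cookies between requests
s = requests.Session()
s.get('https://httpbin.org/cookies/set/token/123')   # the server sets a cookie
r = s.get('https://httpbin.org/cookies')              # the cookie is sent back automatically
print(r.text)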
The requests library exposes many methods, but under the hood they are all implemented through request(); strictly speaking the library has only the request() method, although you rarely call it directly. The seven main methods are introduced below:
①requests.request()
Builds a request; it is the method that underpins all the others.
Form: requests.request(method, url, **kwargs)
method: the request method, e.g. GET, POST, PUT
url: the URL of the page to fetch
**kwargs: optional parameters controlling the request
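For example, the same GET request can be issued through request() directly, with a few of the common keyword arguments. This is only a sketch; params, headers and timeout are some of the supported kwargs, and the URL is a placeholder:

import requests

# Equivalent to requests.get(url, params=..., headers=..., timeout=...)
r = requests.request('GET', 'https://httpbin.org/get',
                     params={'q': 'python'},          # query string -> ?q=python
                     headers={'User-Agent': 'demo'},  # custom request header
                     timeout=5)                       # give up after 5 seconds
print(r.status_code, r.url)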
②requests.get()
The main method for fetching an HTML page, corresponding to HTTP GET. It constructs a Request object that asks the server for a resource and returns a Response object containing the server's response.
Attributes of the Response object:
r.status_code: status of the HTTP request (200 on success; 404 on failure)
r.text: the response body as a string, i.e. the content of the page at the URL
r.encoding: the encoding guessed from the HTTP headers
r.apparent_encoding: the encoding inferred from the content itself (a fallback)
r.content: the response body as bytes
Form: res = requests.get(url)
code = res.text (.text for the textual form; .content for the binary form; .json() to parse JSON)
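A small sketch of those attributes in action (any URL will do; httpbin.org is used here only as a stand-in):

import requests

r = requests.get('https://httpbin.org/get')
print(r.status_code)          # 200 on success
print(r.encoding)             # encoding guessed from the headers
print(r.apparent_encoding)    # encoding inferred from the body itself
print(r.text[:100])           # first 100 characters of the body as text
print(len(r.content))         # size of the body in bytes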
③requests.head()
Fetches only the header of an HTML page, corresponding to HTTP HEAD.
Form: res = requests.head(url)
④requests.post()
Submits a POST request to a page, corresponding to HTTP POST.
Form: res = requests.post(url)
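A POST usually carries a body; with requests the data keyword sends form fields. A sketch against httpbin.org, which simply echoes what it receives (the payload values are made up):

import requests

payload = {'user': 'alice', 'page': 1}
r = requests.post('https://httpbin.org/post', data=payload)   # form-encoded body
print(r.status_code)
print(r.json()['form'])       # httpbin echoes the submitted form fields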
⑤requests.put()
Submits a PUT request to a page, corresponding to HTTP PUT.
⑥requests.patch()
Submits a request for a partial modification, corresponding to HTTP PATCH.
⑦requests.delete()
Submits a delete request, corresponding to HTTP DELETE.
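PUT, PATCH and DELETE are called the same way as get() and post(); the sketch below only shows the calls, again using httpbin.org as a harmless test target:

import requests

print(requests.put('https://httpbin.org/put', data={'name': 'new'}).status_code)     # replace the whole resource
print(requests.patch('https://httpbin.org/patch', data={'name': 'new'}).status_code) # modify part of the resource
print(requests.delete('https://httpbin.org/delete').status_code)                     # delete the resource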
"""requests 操作练习"""
import requests
import re
#数据的爬取
h = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'
}
response = requests.get('https://movie.douban.com/chart',headers=h)
html_str = response.text
pattern = re.compile('') # .*? 任意匹配尽可能多的匹配尽可能少的字符
result = re.findall(pattern,html_str)
print(result)
2. The re module: regular expressions (Regular Expression)
A regular expression is a special string made up of letters and symbols; its purpose is to find the text with the format you want inside a larger text.
About .*? :
* matches the preceding subexpression zero or more times. For example, zo* matches "z" as well as "zoo". Equivalent to {0,}.
? after a quantifier makes the match non-greedy: it matches as few characters as possible. For example, against the string "oooo", "o+?" matches a single "o", whereas "o+" matches all of them.
. matches any single character except "\n". To match any character including "\n", use a pattern such as "(.|\n)".
.* is greedy: it first matches as much as it can, and then backtracks as needed so the rest of the pattern can match.
.*? is the opposite: as soon as a match is possible it moves on, so it does not backtrack; it has the minimal-match property (it matches as few characters as possible while still letting the whole pattern match).
(.*) is a greedy match, i.e. it grabs as many characters as possible, so in a pattern like h(.*)l it captures everything between the first h and the last l.
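A short demonstration of the difference (a sketch; the sample string is made up):

import re

s = 'hello world, hl'
print(re.findall(r'h(.*)l', s))    # greedy: ['ello world, h']  -> runs up to the last 'l'
print(re.findall(r'h(.*?)l', s))   # non-greedy: ['e', '']      -> stops at the first 'l' each time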
03. Parsing the page source with XPath
import requests
import re
from bs4 import BeautifulSoup
from lxml import etree

# Fetch the data (with some HTTP header information)
h = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'
}
response = requests.get('https://movie.XX.com/chart', headers=h)
html_str = response.text

# Parse the data
# Regular-expression parsing (the original pattern was lost in formatting; this one is only illustrative)
pattern = re.compile(r'<a class="nbg".*?title="(.*?)"', re.S)

def re_parse(html_str):
    result = re.findall(pattern, html_str)
    # Method 1:
    # s = re.sub('\n', ',', result[0])
    # print(s)
    # Method 2:
    print(result[0].replace('\n', ','))
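The section title mentions XPath, but the example above only uses a regular expression. A minimal XPath sketch with lxml might look like the following; the expression //a[@class="nbg"]/@title is an assumption about the page structure, not taken from the original:

from lxml import etree

def xpath_parse(html_str):
    # Build an element tree from the HTML text and query it with XPath
    tree = etree.HTML(html_str)
    titles = tree.xpath('//a[@class="nbg"]/@title')   # assumed structure of the chart page
    print(titles)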
Scraping movie information
"""Scrape information on the top 100 movies on *yan"""
import requests
import re
import time

# count = [0,10,20,30,40,50,60,70,80,90]
h = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'
}
response = requests.get('https://XX.com/board/4?offset=0', headers=h)
response.encoding = 'utf-8'
html = response.text

# Parse the data  # time.sleep(2)
# The HTML tags in the original pattern were lost in formatting; this reconstruction follows the usual structure of the board page
pattern = re.compile(r'class="name">.*?title="(.*?)".*?主演:(.*?)</p>.*?上映时间:(.*?)</p>', re.S)
result = re.findall(pattern, html)
print(result)

# Store the data
with open('maoyan.txt', 'a', encoding='utf-8') as f:
    for item in result:   # each item in result is a tuple (title, actors, release date)
        for i in item:
            f.write(i.strip().replace('\n', ','))
        # print('\n')
Scraping images
"""Image-site practice: http://616pic.com/png/ ==> http://XX.616pic.com/ys_img/00/06/20/64dXxVfv6k.jpg"""
import requests
import re
import time

# Fetch the data: collect the image URLs
def get_urls():
    response = requests.get('http://XX.com/png/')
    html_str = response.text
    # Parse the data to get the URLs (the original pattern was lost in formatting; this one is only illustrative)
    pattern = re.compile(r'<img.*?data-original="(.*?\.jpg)"', re.S)
    results = re.findall(pattern, html_str)
    print(results)
    return results

# Download the images
def down_load_img(urls):
    for url in urls:
        response = requests.get(url)
        with open('temp/' + url.split('/')[-1], 'wb') as f:
            f.write(response.content)
        print(url.split('/')[-1], 'downloaded')

if __name__ == '__main__':
    urls = get_urls()
    down_load_img(urls)   # the original snippet stopped after get_urls(); calling the download step makes the script actually save the images
Scraping beauty photos
'''Toutiao beauty-photo scraping ==== method 1'''
import requests
import re

url = 'https://www.XX.com/api/search/content/?aid=24&app_name=web_search&offset=0&format=json&keyword=%E7%BE%8E%E5%A5%B3&autoload=true&count=20&en_qc=1&cur_tab=1&from=search_tab&pd=synthesis&timestamp=1596180364628&_signature=-Bv0rgAgEBA-TE0juRclmfgatbAAKdC7s6ktYqc7u9jLqXOQ5SBCDkd25scxRvDydd6TgtOw0B7RVuaQxhwY1BwV89sPbdam8LkNuV08d0QfrZqQ4oOOrOukEJ1qxroigLT'
response = requests.get(url)
print(response.status_code)
html_str = response.text

# Parse: "large_image_url":"(.*?)"
pattern = re.compile('"large_image_url":"(.*?)"')
urls = re.findall(pattern, html_str)
print(urls)

# Download the images
def down_load(urls):
    for url in urls:
        response = requests.get(url)
        with open('pic/' + url.split('/')[-1], 'wb') as f:
            f.write(response.content)
        print(url.split('/')[-1], 'downloaded')

if __name__ == '__main__':
    down_load(urls)
'''Toutiao beauty-photo scraping ==== method 2'''
import requests
import re
from urllib.parse import urlencode

# https://www.XX.com/api/search/content/?aid=24&app_name=web_search&offset=0&format=json&keyword=%E7%BE%8E%E5%A5%B3&autoload=true&count=20
def get_urls(page):
    keys = {
        'aid': '24',
        'app_name': 'web_search',
        'offset': 20 * page,
        'keyword': '美女',
        'count': '20'
    }
    keys_word = urlencode(keys)
    url = 'https://www.XX.com/api/search/content/?' + keys_word
    response = requests.get(url)
    print(response.status_code)
    html_str = response.text
    # Parse: "large_image_url":"(.*?)"
    pattern = re.compile('"large_image_url":"(.*?)"', re.S)
    urls = re.findall(pattern, html_str)
    return urls

# Download the images
def download_imags(urls):
    for url in urls:
        response = requests.get(url)
        with open('pic/' + url.split('/')[-1] + '.jpg', 'wb') as f:
            f.write(response.content)
        print(url.split('/')[-1] + '.jpg', 'downloaded')

if __name__ == '__main__':
    for page in range(3):
        urls = get_urls(page)
        print(urls)
        download_imags(urls)
5 Thread pools
A thread pool is a form of multithreading in which tasks are added to a queue and then started automatically as threads become available. Thread-pool threads are background threads; each runs with the default stack size and default priority, inside a multithreaded unit.
"""线程池"""from concurrent.futures import ThreadPoolExecutor
import time
import threadingdef ban_zhuang(i):
print(threading.current_thread().name,"**开始搬砖{}**".format(i))
time.sleep(2)
print("**员工{}搬砖完成**一共搬砖:{}".format(i,12**2)) #将format里的内容输出到{}if __name__ == '__main__': #主线程
start_time = time.time()
print(threading.current_thread().name,"开始搬砖")
with ThreadPoolExecutor(max_workers=5) as pool:
for i in range(10):
p = pool.submit(ban_zhuang,i)
end_time =time.time()
print("一共搬砖{}秒".format(end_time-start_time))
A crawler combined with multithreading:
'''Toutiao beauty-photo scraping (multithreaded)'''
import requests
import re
from urllib.parse import urlencode
import time
import threading

# https://www.XX.com/api/search/content/?aid=24&app_name=web_search&offset=0&format=json&keyword=%E7%BE%8E%E5%A5%B3&autoload=true&count=20
def get_urls(page):
    keys = {
        'aid': '24',
        'app_name': 'web_search',
        'offset': 20 * page,
        'keyword': '美女',
        'count': '20'
    }
    keys_word = urlencode(keys)
    url = 'https://www.XX.com/api/search/content/?' + keys_word
    response = requests.get(url)
    print(response.status_code)
    html_str = response.text
    # Parse: "large_image_url":"(.*?)"
    pattern = re.compile('"large_image_url":"(.*?)"', re.S)
    urls = re.findall(pattern, html_str)
    return urls

# Download the images
def download_imags(urls):
    for url in urls:
        try:
            response = requests.get(url)
            with open('pic/' + url.split('/')[-1] + '.jpg', 'wb') as f:
                f.write(response.content)
            print(url.split('/')[-1] + '.jpg', 'downloaded')
        except Exception as err:
            print('An exception happened:', err)

if __name__ == '__main__':
    start = time.time()
    thread = []
    for page in range(3):
        urls = get_urls(page)
        # print(urls)
        # Multithreading: one thread per image; download_imags expects a list, so wrap the single url in one
        for url in urls:
            th = threading.Thread(target=download_imags, args=([url],))
            thread.append(th)
    for t in thread:
        t.start()
    for t in thread:
        t.join()
    end = time.time()
    print('Elapsed:', end - start)
6 Tips: the robots protocol
The Robots protocol, also called the crawler protocol or robot protocol (full name: Robots Exclusion Protocol), tells crawlers and search engines which pages may and may not be fetched. It is usually a plain-text file named robots.txt placed in the root directory of a website.
Robots protocol: site root + /robots.txt, e.g. www.baidu.com/robots.txt
User-agent: Baiduspider
Disallow: /baidu
Disallow: /s?
Disallow: /ulink?
Disallow: /link?
Disallow: /home/news/data/
Disallow: /bh

User-agent: Googlebot
Disallow: /baidu
Disallow: /s?
Disallow: /shifen/
Disallow: /homepage/
Disallow: /cpro
Disallow: /ulink?
Disallow: /link?
Disallow: /home/news/data/
Disallow: /bh
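Python's standard library can check robots.txt for you; a minimal sketch with urllib.robotparser (the user-agent string and paths are only examples):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url('https://www.baidu.com/robots.txt')
rp.read()                                        # fetch and parse the robots.txt
print(rp.can_fetch('Baiduspider', '/baidu'))     # expected False given the rules shown above
print(rp.can_fetch('*', '/'))                    # whether a generic crawler may fetch the homepage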
Tips: please respect the robots protocol; use this only for practice, and remember to go through a proxy. (I have changed all the links in this article; if you want to practice, message me or find your own URLs. It is quite fun.)
7 Related links
Installing and using requests: https://www.jianshu.com/p/140012f88f8e
A guide to re: https://www.cnblogs.com/vmask/p/6361858.html
Other articles on crawling: https://blog.csdn.net/qq_27297393/article/details/81630774
A video course on crawling: https://www.imooc.com/learn/563