5. urllib — handler processors and proxies

Basic use of handler processors

import urllib.request

url = "http://www.baidu.com"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36 Edg/94.0.992.50"
}
request = urllib.request.Request(url=url, headers=headers)
# Build a handler object (this must not be commented out — build_opener needs it)
handler = urllib.request.HTTPHandler()
# Build an opener object
opener = urllib.request.build_opener(handler)
# Call the open method to send the request
response = opener.open(request)
content = response.read().decode("utf8")
print(content)
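The handler is also where debugging can be switched on: passing `debuglevel=1` to `HTTPHandler` makes urllib print the raw HTTP request and response headers to stdout when the opener is used. A minimal sketch (no request is actually sent here):

```python
import urllib.request

# debuglevel=1 makes the handler log HTTP traffic to stdout during opener.open()
debug_handler = urllib.request.HTTPHandler(debuglevel=1)
opener = urllib.request.build_opener(debug_handler)

# build_opener wraps the handler in an OpenerDirector
print(type(opener).__name__)  # OpenerDirector
```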

Using an IP proxy

When scraping, your requests may be detected and your IP banned from the site; routing traffic through a proxy hides your real IP.

import urllib.request

url = "http://www.baidu.com"
headers = {
    "User-Agent": "..."
}
request = urllib.request.Request(url=url, headers=headers)
proxies = {
    "http": "60.168.206.199:1133"
}
handler = urllib.request.ProxyHandler(proxies=proxies)

# Build an opener object
opener = urllib.request.build_opener(handler)
# Call the open method to send the request
response = opener.open(request)

content = response.read().decode("utf8")

print(content)
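If every subsequent request should go through the same proxy, the opener can be installed globally with `urllib.request.install_opener`, after which even plain `urllib.request.urlopen` calls use it. A sketch (the proxy address is just a placeholder):

```python
import urllib.request

proxies = {"http": "60.168.206.199:1133"}  # placeholder proxy address
handler = urllib.request.ProxyHandler(proxies=proxies)
opener = urllib.request.build_opener(handler)

# From here on, urllib.request.urlopen(...) also routes through the proxy
urllib.request.install_opener(opener)
```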

Getting free proxy IPs — Kuaidaili (快代理)

Proxy pool

import random
import urllib.request

url = "http://www.baidu.com"
headers = {
    "User-Agent": "..."
}
request = urllib.request.Request(url=url, headers=headers)
# Proxy pool (each entry must use the scheme as the key, like ProxyHandler expects)
proxies_pool = [
    {"http": "175.155.138.110:1133"},
    {"http": "58.215.201.98:56566"},
    {"http": "47.100.14.22:9006"}
]
proxies = random.choice(proxies_pool)
handler = urllib.request.ProxyHandler(proxies=proxies)

# Build an opener object
opener = urllib.request.build_opener(handler)
# Call the open method to send the request
response = opener.open(request)

content = response.read().decode("utf8")

print(content)
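Free proxies die often, and a dead proxy makes `opener.open` raise `URLError`. A common pattern is to retry with another randomly chosen proxy from the pool. A hypothetical sketch (the `fetch_with_retry` helper and the proxy addresses are made up for illustration):

```python
import random
import urllib.error
import urllib.request

proxies_pool = [
    {"http": "175.155.138.110:1133"},
    {"http": "58.215.201.98:56566"},
    {"http": "47.100.14.22:9006"},
]

def fetch_with_retry(url, pool, attempts=3, timeout=5):
    """Try up to `attempts` randomly chosen proxies before giving up."""
    for _ in range(attempts):
        proxies = random.choice(pool)
        handler = urllib.request.ProxyHandler(proxies=proxies)
        opener = urllib.request.build_opener(handler)
        try:
            with opener.open(url, timeout=timeout) as response:
                return response.read().decode("utf8")
        except urllib.error.URLError:
            continue  # dead proxy, pick another one
    raise RuntimeError("all proxy attempts failed")
```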

Supplementary notes on random

Randomly selects one item from a non-empty sequence and returns it. The sequence can be a list, tuple, or str (it must support indexing, so a set will raise a TypeError).

random.choice(seq)

Randomly selects from a population k times and returns a list; weights can be supplied. Each pick is independent and drawn from the original sequence, so the same item can appear more than once (sampling with replacement).

  • population: the sequence to pick from.
  • weights: relative weights.
  • cum_weights: cumulative weights (mutually exclusive with weights).
  • k: number of picks.

random.choices(population, weights=None, *, cum_weights=None, k=1)
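The difference between the two is easy to see in a quick example (the population here is arbitrary):

```python
import random

pool = ["a", "b", "c"]

# choice: exactly one item from the sequence
one = random.choice(pool)

# choices: a list of k items, sampled with replacement;
# with weights=[5, 1, 1], "a" is five times as likely as "b" or "c"
several = random.choices(pool, weights=[5, 1, 1], k=4)

print(one, several)
```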
