This is my first blog post and my first time learning web scraping. I just want to share what I ran into, so please bear with me.
First, I set up a custom User-Agent pool by hand instead of installing the fake-useragent package (pip install fake-useragent) and pulling a random UA from it (e.g. print(ua.ie)).
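For comparison, here is a minimal sketch of the fake-useragent route I chose not to take, assuming the package has been installed with pip install fake-useragent:

from fake_useragent import UserAgent

ua = UserAgent()     # loads a cached list of real browser UA strings
print(ua.ie)         # a random Internet Explorer UA
print(ua.random)     # a random UA from any browser family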
The program below is what I wrote the first time (the fixed version comes later):
ua={"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:65.0) Gecko/20100101 Firefox/65.0",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:65.0) Gecko/20100101 Firefox/65.0"
}
url = 'http://www.baidu.com/'
headers = ua_info.a
req = request.Request(url=url, headers=headers)
res = urllib.request.urlopen(req)
#html = res.read().decode('utf-8')
print(html)
The first problem I ran into:
Traceback (most recent call last):
File "C:\Programs\Python\pythonProject\main.py", line 25, in
req = request.Request(url=url, headers=headers)
File "C:\Programs\Python\Python39\lib\urllib\request.py", line 326, in init
for key, value in headers.items():
AttributeError: 'str' object has no attribute 'items'
Process finished with exit code 1
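The cause is that Request expects headers to be a dict and calls headers.items() on it, while I had passed a plain string. A minimal sketch of the difference (the variable names here are just for illustration):

from urllib import request

ua_string = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:65.0) Gecko/20100101 Firefox/65.0'
# wrong: headers is a str, so Request fails with "'str' object has no attribute 'items'"
# req = request.Request(url='http://www.baidu.com/', headers=ua_string)
# right: wrap the UA string in a dict keyed by the header name
req = request.Request(url='http://www.baidu.com/', headers={'User-Agent': ua_string})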
The program after fixing the first problem:
import random
import urllib.request
from urllib import request, parse

# custom User-Agent pool: values are just the UA strings (no 'User-Agent:' prefix)
ua_list = [
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
    'Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11',
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
]
a = random.choice(ua_list)      # pick one UA at random from the pool
print(a)
url = 'http://www.baidu.com/'
rs1 = a                         # previously ua_info.a; the pool now lives in this file
headers = {'User-Agent': rs1}   # this time the UA string is wrapped in a dict
# 1. Build the request object, carrying the UA info
# req = request.Request(url=url, headers=headers)   # plain homepage request, kept for reference
query_string = {
    'wd': '爬虫'   # the search keyword ("爬虫" = web crawler)
}
result = parse.urlencode(query_string)              # percent-encode the query string
url1 = 'http://www.baidu.com/s?{}'.format(result)   # full search URL
req = request.Request(url=url1, headers=headers)
res = urllib.request.urlopen(req)
html = res.read().decode('utf-8')
print(html)
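Since only a single keyword is being encoded here, urllib.parse.quote would have worked just as well as urlencode; a small sketch of the two, under that assumption:

from urllib import parse

print(parse.urlencode({'wd': '爬虫'}))   # 'wd=%E7%88%AC%E8%99%AB' -- encodes a whole key/value dict
print('wd=' + parse.quote('爬虫'))       # same result, but the key has to be added by hand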
I ran it about five times, and then got the following result:
百度安全验证 (Baidu's security verification page, returned instead of the search results)
A search turned up a fix: add one more field to the request headers (the post also explained where to find its value). Adding it solved the problem:
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36 Edg/83.0.478.50',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9'
}
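Putting the pieces together, this is a minimal sketch (based on my own code above, not a definitive recipe) that draws a random UA from the pool and sends the Accept header along with the request:

import random
from urllib import request, parse

headers = {
    'User-Agent': random.choice(ua_list),   # ua_list is the pool defined earlier
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
}
url = 'http://www.baidu.com/s?{}'.format(parse.urlencode({'wd': '爬虫'}))
req = request.Request(url=url, headers=headers)
html = request.urlopen(req).read().decode('utf-8')
print(html)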
Out of curiosity, I then looked into the ongoing contest between crawlers and anti-crawler measures; see the article below:
Article link: 反爬虫策略及破解方法 - 特洛伊-Micro - 博客园: https://www.cnblogs.com/micro-chen/p/8676312.html
I also tried the code below; it works as well, though it raises a warning:
headers={'User-Agent':'Baiduspider'}
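For completeness, the same request pattern with that UA looks like this (a sketch only; whether it keeps working depends on Baidu's own checks):

from urllib import request

req = request.Request(url='http://www.baidu.com/',
                      headers={'User-Agent': 'Baiduspider'})
print(request.urlopen(req).read().decode('utf-8')[:200])   # print just the first 200 characters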