Contents
1. Installing fake_useragent
2. Using fake_useragent in Python 3
   Printing random IE, Firefox, and Chrome User-Agent strings
   Using a random User-Agent (ua.random) in a crawler
3. Using it in a Scrapy project
   Defining a random User-Agent middleware class in middlewares.py
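For step 1, fake_useragent is published on PyPI, so installation is typically a single pip command (the PyPI distribution name uses a hyphen):

```shell
# Install fake_useragent from PyPI (distribution name: fake-useragent)
pip install fake-useragent
```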
from fake_useragent import UserAgent

ua = UserAgent()
print(f'Random IE UA: {ua.ie}')            # random User-Agent for any IE version
print(f'Random Firefox UA: {ua.firefox}')  # random User-Agent for any Firefox version
print(f'Random Chrome UA: {ua.chrome}')    # random User-Agent for any Chrome version
print(f'Random UA: {ua.random}')           # random User-Agent from any browser vendor

Sample output:
Random IE UA: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)
Random Firefox UA: Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:16.0.1) Gecko/20121011 Firefox/21.0.1
Random Chrome UA: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.15 (KHTML, like Gecko) Chrome/24.0.1295.0 Safari/537.15
Random UA: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2224.3 Safari/537.36
# Typical use in a crawler: send a random User-Agent with the request
from fake_useragent import UserAgent
import requests

ua = UserAgent()
headers = {"User-Agent": ua.random}
get_url = "https://www.baidu.com"
response = requests.get(get_url, headers=headers)
print(response.text)
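fake_useragent fetches its User-Agent data from a remote source, which can occasionally be unreachable. As a minimal stdlib-only fallback sketch (the UA strings below are just illustrative samples), the same headers-dict pattern works with a hand-picked pool:

```python
import random

# Small pool of sample User-Agent strings to fall back on when the
# fake_useragent data source is unavailable
FALLBACK_UAS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.1 Safari/605.1.15",
]

def random_headers():
    """Build a headers dict with a randomly chosen User-Agent."""
    return {"User-Agent": random.choice(FALLBACK_UAS)}

print(random_headers())
```

The resulting dict can be passed to requests.get(url, headers=...) exactly like the ua.random version above.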
The random User-Agent class in middlewares.py is modeled on the UserAgentMiddleware class in Scrapy's own source, at scrapy/downloadermiddlewares/useragent.py.

The source of useragent.py:
"""Set User-Agent header per spider or use a default value from settings"""
from scrapy import signals
class UserAgentMiddleware(object):
"""This middleware allows spiders to override the user_agent"""
def __init__(self, user_agent='Scrapy'):
self.user_agent = user_agent
@classmethod
def from_crawler(cls, crawler):
o = cls(crawler.settings['USER_AGENT'])
crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
return o
def spider_opened(self, spider):
self.user_agent = getattr(spider, 'user_agent', self.user_agent)
def process_request(self, request, spider):
if self.user_agent:
request.headers.setdefault(b'User-Agent', self.user_agent)
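Note the use of setdefault in process_request: the default User-Agent is only applied when the header is absent, so a header set explicitly elsewhere wins. The semantics can be illustrated with a plain dict:

```python
# Mirrors request.headers.setdefault('User-Agent', ...): the default
# is applied only when the key is missing
headers = {}
headers.setdefault("User-Agent", "Scrapy")
print(headers["User-Agent"])  # → Scrapy (default applied)

headers = {"User-Agent": "my-spider/1.0"}
headers.setdefault("User-Agent", "Scrapy")
print(headers["User-Agent"])  # → my-spider/1.0 (existing value kept)
```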
The random User-Agent class defined in middlewares.py:
from fake_useragent import UserAgent


class RandomUserAgentMiddlware(object):
    '''Swap in a random User-Agent for each request; the structure mirrors
    the UserAgentMiddleware class in Scrapy's useragent.py.'''

    def __init__(self, crawler):
        super(RandomUserAgentMiddlware, self).__init__()
        self.ua = UserAgent()
        # Read RANDOM_UA_TYPE from settings; defaults to "random" and
        # can be overridden in settings.py
        self.ua_type = crawler.settings.get("RANDOM_UA_TYPE", "random")

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_request(self, request, spider):
        # Signature must match the built-in middleware's
        def get_ua():
            return getattr(self.ua, self.ua_type)
        request.headers.setdefault('User-Agent', get_ua())
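The key trick in get_ua is that getattr(self.ua, self.ua_type) turns the RANDOM_UA_TYPE string into an attribute lookup on the UserAgent object. A minimal sketch with a hypothetical FakeUA stand-in (not the real fake_useragent class) shows the dispatch:

```python
class FakeUA:
    """Hypothetical stand-in for fake_useragent.UserAgent, just to show
    how getattr(self.ua, self.ua_type) dispatches on a string."""
    ie = "UA-for-IE"
    firefox = "UA-for-Firefox"
    random = "UA-for-any-browser"

ua = FakeUA()
for ua_type in ("ie", "firefox", "random"):
    # Same lookup the middleware performs with RANDOM_UA_TYPE
    print(getattr(ua, ua_type))
```

Setting RANDOM_UA_TYPE = "chrome" in settings.py therefore makes the middleware read ua.chrome, with no change to the middleware code.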
Configuration in settings.py:
DOWNLOADER_MIDDLEWARES = {
    # The RandomUserAgentMiddlware class defined in middlewares.py
    'ArticleSpider.middlewares.RandomUserAgentMiddlware': 543,
    # Disable Scrapy's built-in UserAgentMiddleware by setting it to None
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}
RANDOM_UA_TYPE = "random"  # or pin a browser: "firefox", "chrome", ...
PS: Once this is configured, remove any User-Agent defined in the spiders themselves. Subsequent crawls will automatically carry a randomly generated User-Agent, so there is no need to define one per spider.