Scrapy Anti-Ban Settings

1. Rotate exit IPs

Use the proxy service provided by Scrapinghub (Crawlera). Because the IPs are located abroad, requests to Baidu are somewhat slower than from a domestic IP, but the proxies are stable, easy to configure, and free, and there appears to be no limit on the number of requests.

Add to settings.py:

    # Crawlera account credentials
    CRAWLERA_ENABLED = True
    CRAWLERA_USER = 'your-username'
    CRAWLERA_PASS = 'your-password'

    # Downloader middleware settings
    DOWNLOADER_MIDDLEWARES = {
        'scrapy_crawlera.CrawleraMiddleware': 600,
    }
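If you prefer not to rely on Crawlera, the same idea can be approximated with a proxy pool of your own. Below is a minimal sketch; the `RandomProxyMiddleware` class, the `PROXY_POOL` setting name, and the proxy URLs are all illustrative assumptions, not part of Crawlera or Scrapy:

```python
import random

# Illustrative proxy pool -- replace with proxies you actually control.
PROXY_POOL = [
    "http://127.0.0.1:8001",
    "http://127.0.0.1:8002",
]


class RandomProxyMiddleware(object):
    """Downloader middleware that assigns a random proxy to each request."""

    def __init__(self, proxies):
        self.proxies = proxies

    @classmethod
    def from_crawler(cls, crawler):
        # Read the pool from settings.py; PROXY_POOL is an assumed setting name
        return cls(crawler.settings.getlist("PROXY_POOL"))

    def process_request(self, request, spider):
        # Scrapy's built-in HttpProxyMiddleware honours request.meta['proxy']
        request.meta["proxy"] = random.choice(self.proxies)
```

To use it, register the class in `DOWNLOADER_MIDDLEWARES` the same way as `CrawleraMiddleware` above.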

2. Rotate the User-Agent

Add to settings.py:

    USER_AGENTS = [
        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
        "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
        # ...
    ]

    # Downloader middleware settings
    DOWNLOADER_MIDDLEWARES = {
        'tutorial.middlewares.RandomUserAgent': 1,
    }

Add to middlewares.py:

    import random


    class RandomUserAgent(object):
        """Randomly rotate user agents based on a list of predefined ones"""

        def __init__(self, agents):
            self.agents = agents

        @classmethod
        def from_crawler(cls, crawler):
            return cls(crawler.settings.getlist('USER_AGENTS'))

        def process_request(self, request, spider):
            # print("**************************" + random.choice(self.agents))
            request.headers.setdefault('User-Agent', random.choice(self.agents))
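The key detail above is `headers.setdefault`: a User-Agent set explicitly on a request is left untouched, and only requests without one receive a random agent. A standalone sketch of that behaviour, using a fake request object and placeholder UA strings so the demo does not need Scrapy installed:

```python
import random

USER_AGENTS = ["UA-1", "UA-2", "UA-3"]  # stand-ins for real browser strings


class RandomUserAgent(object):
    def __init__(self, agents):
        self.agents = agents

    def process_request(self, request, spider):
        # setdefault only fills in the header when it is missing
        request.headers.setdefault("User-Agent", random.choice(self.agents))


class _FakeRequest:
    """Minimal stand-in for scrapy.Request; a plain dict mimics its headers."""
    def __init__(self, headers=None):
        self.headers = headers or {}


mw = RandomUserAgent(USER_AGENTS)

req = _FakeRequest()
mw.process_request(req, None)
print(req.headers["User-Agent"] in USER_AGENTS)   # True: a random agent was set

pinned = _FakeRequest({"User-Agent": "pinned"})
mw.process_request(pinned, None)
print(pinned.headers["User-Agent"])               # "pinned": not overwritten
```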

3. Rotate Cookies and fully mimic browser request headers

Add to settings.py:

    import random

    def getCookie():
        cookie_list = [
            'cookie1',  # collect cookies from different browsers and add them here
            'cookie2',
            # ...
        ]
        return random.choice(cookie_list)

    # Default request headers
    DEFAULT_REQUEST_HEADERS = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, sdch',
        'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Host': 'www.baidu.com',
        'RA-Sid': '7739A016-20140918-030243-3adabf-48f828',
        'RA-Ver': '3.0.7',
        'Upgrade-Insecure-Requests': '1',
        'Cookie': getCookie(),
    }
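One caveat worth knowing: when Scrapy's cookie middleware is active (`COOKIES_ENABLED`, on by default), it manages the Cookie header itself, so a Cookie set through `DEFAULT_REQUEST_HEADERS` may not survive; setting `COOKIES_ENABLED = False` is the usual fix. An alternative is to pass cookies per request via `scrapy.Request(cookies=...)`, which expects a dict rather than a raw header string. A small sketch of the conversion (the cookie values are placeholders):

```python
import random


def cookie_string_to_dict(raw):
    """Turn a raw 'name=value; name2=value2' Cookie header string into
    the dict form expected by scrapy.Request(cookies=...)."""
    pairs = (item.split("=", 1) for item in raw.split(";") if "=" in item)
    return {name.strip(): value for name, value in pairs}


cookie_list = [
    "sessionid=abc123; lang=zh-CN",   # placeholder cookies
    "sessionid=def456; lang=en-US",
]
print(cookie_string_to_dict(random.choice(cookie_list)))
```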

Other settings to add in settings.py:

    # Download delay: the wait time between fetching two pages
    DOWNLOAD_DELAY = 0.5

    # Maximum number of concurrent requests
    CONCURRENT_REQUESTS = 100

    # Maximum number of concurrent requests to a single domain
    CONCURRENT_REQUESTS_PER_DOMAIN = 100

    # AutoThrottle extension (disabled by default)
    AUTOTHROTTLE_ENABLED = False

    # Download timeout (seconds)
    DOWNLOAD_TIMEOUT = 10

    # Lower the log level; comment this out to see detailed crawl output
    LOG_LEVEL = 'INFO'
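If a fixed `DOWNLOAD_DELAY` turns out to be too blunt, the AutoThrottle extension mentioned above can adapt the delay to server load instead. A sketch of enabling it, with illustrative numbers (the setting names themselves are standard Scrapy settings):

```python
# Let AutoThrottle adjust the delay dynamically instead of using a fixed value
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 0.5          # initial download delay (seconds)
AUTOTHROTTLE_MAX_DELAY = 10.0           # upper bound when latency is high
AUTOTHROTTLE_TARGET_CONCURRENCY = 8.0   # average requests in flight per server
AUTOTHROTTLE_DEBUG = False              # True logs every throttle adjustment
```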
