Using a proxy with Scrapy

The site had been unreachable for a while. I assumed it was down for maintenance and ignored it, until a client sent over a screenshot; my phone could reach the site just fine, and it was only the company network and Wi-Fi that could not. That is when I realized our IP had been banned. Annoyingly, the proxy our ops team gave me did not work, and neither did one from Kuaidaili (快代理). For a while I suspected my code; since I was not that familiar with it, I read a lot of other people's write-ups, and they were much the same as mine. Only after switching to an IP from Xici (西刺代理) did the spider finally run.

Using a proxy in Scrapy:

First, you can run a script on a schedule to crawl free proxy IPs:

import requests
from lxml import etree

from bankproduct.settings import MONGO_URI, MONGO_DATABASE
from bankproduct.util.dbhelper import MongodbBaseDao


class ProxyUtil(object):
    proxy_url = "https://www.kuaidaili.com/free/inha/"

    def __init__(self):
        self.dao = MongodbBaseDao(MONGO_URI, MONGO_DATABASE)

    def save_proxies(self):
        # Replace the pool wholesale: crawl first, then drop and re-insert,
        # so a failed crawl leaves the old pool intact.
        proxies = self.crawl_proxies()
        self.dao.drop('proxy')
        self.dao.insert_many('proxy', proxies)

    def crawl_proxies(self):
        # Scrape the free proxy list; verify=False skips TLS certificate checks.
        results = []
        response = requests.get(self.proxy_url, verify=False)
        tree = etree.HTML(response.content)
        proxies = tree.xpath("//*[@id='list']/table/tbody/tr")
        for proxy in proxies:
            scheme = proxy.xpath("td[@data-title='类型']/text()")[0].lower()
            ip = proxy.xpath("td[@data-title='IP']/text()")[0]
            port = proxy.xpath("td[@data-title='PORT']/text()")[0]
            results.append({'url': '{0}://{1}:{2}'.format(scheme, ip, port)})
        return results
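
To keep the pool fresh, the crawl can run on a timer. A minimal sketch using only the standard library (the one-hour interval and the refresh_proxy_pool name are my own choices, not from the original project):

import time


def refresh_proxy_pool(interval_seconds=3600):
    util = ProxyUtil()
    while True:
        try:
            util.save_proxies()
        except Exception as exc:
            # A failed refresh keeps the previous pool; just retry later.
            print('proxy refresh failed: {0}'.format(exc))
        time.sleep(interval_seconds)


if __name__ == '__main__':
    refresh_proxy_pool()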
  • Option 1: use a downloader middleware
    Write the middleware (in bankproduct/middlewares.py):
import random

from bankproduct.settings import MONGO_URI, MONGO_DATABASE
from bankproduct.util.dbhelper import MongodbBaseDao


class ProxyMiddleWare(object):
    # Pick a random proxy IP from the database for every request.
    def __init__(self):
        self.dao = MongodbBaseDao(MONGO_URI, MONGO_DATABASE)

    def process_request(self, request, spider):
        proxy = random.choice(list(self.dao.find('proxy', {})))
        request.meta['proxy'] = proxy['url']

    # When a request fails, fall back to a fixed proxy and retry it.
    def process_exception(self, request, exception, spider):
        request.meta['proxy'] = 'http://{0}'.format("59.44.247.194:9797")
        return request

# Set a random User-Agent header on every request
class UserAgentMiddleWare(object):
    user_agent_list = [
        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727;"
        " Media Center PC 5.0; .NET CLR 3.0.04506)",
        "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322;"
        " .NET CLR 2.0.50727)",
        "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
        "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729;"
        " .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
        "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727;"
        " .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
        "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2;"
        " .NET CLR 3.0.04506.30)",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3)"
        " Arora/0.3 (Change: 287 c9dfb30)",
        "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
        "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
        "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/"
        "19.0.1036.7 Safari/535.20",
        "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    ]

    def process_request(self, request, spider):
        # The key must be 'User-Agent'; 'USER_AGENT' would send the wrong header name.
        request.headers['User-Agent'] = random.choice(self.user_agent_list)

Then enable both middlewares in settings.py:

DOWNLOADER_MIDDLEWARES = {
    'bankproduct.middlewares.ProxyMiddleWare': 100,
    'bankproduct.middlewares.UserAgentMiddleWare': 300,
}
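
For process_request, middlewares run in increasing priority order, so ProxyMiddleWare (100) sets the proxy before UserAgentMiddleWare (300) sets the header. Scrapy's built-in UserAgentMiddleware only fills in the header when it is missing, so it will not clobber ours, but if you want to be explicit you can disable it:

DOWNLOADER_MIDDLEWARES = {
    'bankproduct.middlewares.ProxyMiddleWare': 100,
    'bankproduct.middlewares.UserAgentMiddleWare': 300,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}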
  • Option 2: attach the proxy to each request in the spider itself (everywhere a request is made):
def start_requests(self):
    yield scrapy.FormRequest(self.start_url, method="POST", formdata=self.form_data,
                             meta={'proxy': "http://59.44.247.194:9797"})
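
Instead of hard-coding a single proxy, the pool built by ProxyUtil above can feed individual requests too. A minimal sketch (start_url and form_data come from the snippet above; the rest is illustrative):

import random

import scrapy

from bankproduct.settings import MONGO_URI, MONGO_DATABASE
from bankproduct.util.dbhelper import MongodbBaseDao


def start_requests(self):
    # Choose a fresh proxy from the pool for the initial request.
    dao = MongodbBaseDao(MONGO_URI, MONGO_DATABASE)
    proxy = random.choice(list(dao.find('proxy', {})))
    yield scrapy.FormRequest(self.start_url, method="POST",
                             formdata=self.form_data,
                             meta={'proxy': proxy['url']})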

I also ran into one bizarre problem: the proxy from ops only worked once and then failed, yet from my own machine over a VPN everything was fine. Digging through the logs, I found every record had been crawled dozens of times; inspecting the code, it turned out a single stray space had caused an infinite loop. I honestly suspect it was my crawler that forced the site to add anti-scraping limits in the first place.

Proxy sources:
1. Xici (西刺代理)
2. Wuyou (无忧代理)
3. Kuaidaili (快代理)
