Scrapy IP proxy not responding

An analysis of why a Scrapy IP proxy can hang with no response for a long time

Add a pool of usable proxy IPs in settings.py:

PROXIES = [
    'http://182.149.82.74:9999',
    'http://121.237.25.238:3000',
    'http://61.183.176.122:57210',
    'http://175.43.84.29:9999',
]
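
Free proxies like these tend to die within hours, and a dead proxy is itself a common cause of requests that hang with no response. Before relying on the pool, you can weed out dead entries with a quick standalone check (a minimal sketch using the requests library; http://httpbin.org/ip is just an assumed test URL, any stable page works):

import requests

def is_alive(proxy, timeout=5):
    # True if the proxy answers a test request within the timeout.
    try:
        resp = requests.get('http://httpbin.org/ip',
                            proxies={'http': proxy}, timeout=timeout)
        return resp.ok
    except requests.RequestException:
        return False

alive = [p for p in PROXIES if is_alive(p)]
print(alive)

Run this once as a separate script and keep only the survivors in settings.py.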

Add the following class to middlewares.py:

import random

class ProxyMiddleware(object):

    def __init__(self, ip):
        # 'ip' holds the list of proxy URLs from the PROXIES setting
        self.ip = ip

    @classmethod
    def from_crawler(cls, crawler):
        # Pull the proxy pool out of settings.py when the crawler starts
        return cls(ip=crawler.settings.get('PROXIES'))

    def process_request(self, request, spider):
        # Attach a random proxy from the pool to each outgoing request
        ip = random.choice(self.ip)
        request.meta['http_proxy'] = ip
        print("Current proxy: " + ip)

Register the middleware in the DOWNLOADER_MIDDLEWARES setting in settings.py:

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.ProxyMiddleware': 400,
}
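
The "no response for a long time" symptom from the title is often just Scrapy waiting out its download timeout on a dead proxy; the default is 180 seconds per request. Lowering it in settings.py makes bad proxies fail fast (both are standard Scrapy settings; the exact values below are only a suggestion):

DOWNLOAD_TIMEOUT = 10  # seconds; Scrapy's default is 180
RETRY_TIMES = 3        # how often RetryMiddleware re-attempts a failed request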

Initially, the line in my middleware that set the proxy was:

request.meta['proxy'] = ip

I am on Python 3.7 with Scrapy 1.6.0. Possibly due to a version issue, the proxy was never applied with that line, so I changed it to:

request.meta['http_proxy'] = ip

after which the failing proxy finally worked!
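
To verify that requests really go out through the pool, a throwaway spider against an IP-echo service works well (a sketch; httpbin.org/ip is an assumed endpoint that returns the caller's IP):

import scrapy

class IpCheckSpider(scrapy.Spider):
    name = 'ipcheck'
    start_urls = ['http://httpbin.org/ip']

    def parse(self, response):
        # The response body contains the IP the server saw; it should
        # match one of the pool's proxies, not your real IP.
        self.logger.info('Remote server saw: %s', response.text)

Run it with scrapy crawl ipcheck; if the logged IP is your own, the middleware is not being applied.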
