Summary of problems encountered while crawling Lagou.com via Ajax with Scrapy

1. Use a paid IP proxy (Abuyun's dynamic HTTP tunnel, which switches to a fresh IP on every request)
2. Rotate the User-Agent
3. Fill in the remaining request headers (it is not clear exactly which missing fields trigger the anti-crawler)

import base64
import random

USER_AGENTS = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
    "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
]

proxyServer = "http://http-dyn.abuyun.com:9020"

proxyUser = "H201144R93N43URD"
proxyPass = "4BE2A6B831CDDEC7"

proxyAuth = "Basic " + base64.urlsafe_b64encode(bytes((proxyUser + ":" + proxyPass), "ascii")).decode("utf8")

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # route every request through the Abuyun tunnel and authenticate against it
        request.meta["proxy"] = proxyServer
        request.headers["Proxy-Authorization"] = proxyAuth
        # pick a random User-Agent and fill in the remaining headers Lagou expects
        request.headers['User-Agent'] = random.choice(USER_AGENTS)
        request.headers['Host'] = 'www.lagou.com'
        request.headers['Origin'] = 'https://www.lagou.com'
        request.headers['Referer'] = "https://www.lagou.com/jobs/list_"
        request.headers['X-Anit-Forge-Code'] = '0'
        request.headers['X-Anit-Forge-Token'] = None
        request.headers['X-Requested-With'] = "XMLHttpRequest"
        request.headers['Accept'] = 'application/json, text/javascript, */*; q=0.01'
        request.headers['Accept-Encoding'] = 'gzip, deflate, br'
        request.headers['Accept-Language'] = 'zh-CN,zh;q=0.9,en;q=0.8,ja;q=0.7'
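The middleware only takes effect once it is registered in settings.py. A minimal sketch, assuming the Scrapy project is named lagou and the class above lives in lagou/middlewares.py (both names are assumptions):

# settings.py
DOWNLOADER_MIDDLEWARES = {
    'lagou.middlewares.ProxyMiddleware': 543,
}
# Optional: throttle a little so the proxy tunnel's request-rate limit is not
# exceeded (illustrative values, not taken from the original post)
DOWNLOAD_DELAY = 0.2
CONCURRENT_REQUESTS = 8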

4. Set dynamic cookies (every request should be given its own cookies; only one request is shown below)

Use the uuid library to generate random values. Setting dynamic cookies in Scrapy is essential: without them, even if every request goes out through a different IP, you still get blocked (the "您操作太频繁,请稍后再访问" / "you are operating too frequently, please come back later" response). Experiments show that when Scrapy sets no cookies, several consecutive requests carry the same JSESSIONID and the same user_trace_token cookie values, so those "identical" requests are what the anti-crawler flags.

    def parse(self, response):
        # print('first head')
        # print(response.request.headers)
        # only crawl the technical job categories
        ajax_url = "https://www.lagou.com/jobs/positionAjax.json?kd={}"
        class_list = response.xpath("//a[@data-lg-tj-id='4O00']/text()").extract()
        # print('class_list : ', class_list)
        for job_class in class_list:
            print("yield:", job_class)
            yield scrapy.Request(url=ajax_url.format(job_class),
                                 cookies={'JSESSIONID': str(uuid.uuid4()),
                                          'user_trace_token': str(uuid.uuid4())},
                                 callback=self.parse_page, dont_filter=True,
                                 meta={'pn': 1, 'job_class': job_class})

5. Handle redirects
Occasionally a request is redirected (302) to an invalid URL, presumably one of the site's anti-crawling measures.

[scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET ...> from <GET ...>

Requesting the redirected URL returns an invalid page, but with a 200 response:

[scrapy.core.engine] DEBUG: Crawled (200) <GET ...> (referer: https://www.lagou.com/jobs/list_)

The fix: in parse_page, detect whether the response being parsed is the invalid post-redirect page; if so, take the pre-redirect URL from response.meta['redirect_urls'][0] and request that URL again.

    def parse_page(self, response):
        ajax_url = "https://www.lagou.com/jobs/positionAjax.json?kd={}&pn={}"
        try:
            result = json.loads(response.text)["content"]["positionResult"]["result"]
        except json.decoder.JSONDecodeError as e:
            if 'redirect_urls' in response.meta:
                # redirected to an invalid page: take the pre-redirect url from
                # response.meta['redirect_urls'] and request it again
                logging.info('redirect request:' + str(response.meta['redirect_urls']))
                re_url = response.meta['redirect_urls'][0]
            else:
                # blocked for requesting too frequently: retry the same url
                re_url = response.url
            yield scrapy.Request(url=re_url,
                                 cookies={'JSESSIONID': str(uuid.uuid4()),
                                          'user_trace_token': str(uuid.uuid4())},
                                 callback=self.parse_page, dont_filter=True,
                                 meta=response.meta)
            return
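The snippet above shows only the failure branch. For context, here is a sketch of what the success path after the try/except could look like; the item fields (positionName, companyFullName, salary) and the empty-result stop condition are assumptions rather than part of the original post:

        # (sketch) success path: emit one record per position, then request the next page
        job_class = response.meta['job_class']
        pn = response.meta['pn']
        for position in result:
            yield {
                'job_class': job_class,
                'positionName': position.get('positionName'),
                'companyFullName': position.get('companyFullName'),
                'salary': position.get('salary'),
            }
        if result:  # an empty result list means the last page has been reached
            yield scrapy.Request(url=ajax_url.format(job_class, pn + 1),
                                 cookies={'JSESSIONID': str(uuid.uuid4()),
                                          'user_trace_token': str(uuid.uuid4())},
                                 callback=self.parse_page, dont_filter=True,
                                 meta={'pn': pn + 1, 'job_class': job_class})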

6. Check URL correctness (URL-encoding issue)

2018-01-05 16:30:34 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.lagou.com/jobs/positionAjax.json?kd=C#&pn=8> (referer: https://www.lagou.com/jobs/list_)
2018-01-05 16:30:34 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.lagou.com/jobs/positionAjax.json?kd=C>

Scrapy reports that it crawled https://www.lagou.com/jobs/positionAjax.json?kd=C#&pn=8, but what it actually fetched was https://www.lagou.com/jobs/positionAjax.json?kd=C. Scrapy does not URL-encode the # character: it sends the request with the raw #, and since everything after # is treated as a URL fragment and ignored, the request effectively becomes https://www.lagou.com/jobs/positionAjax.json?kd=C.
In Scrapy we therefore have to encode # ourselves, replacing it with %23.

        count = 0
        for i in class_list:
            if i == 'C#':
                class_list[count] = 'C%23'   # percent-encode '#' so it is not dropped as a fragment
            count += 1

        for job_class in class_list:
            print("yield:{} url: {}".format(job_class, ajax_url.format(job_class, 1)))
            logging.info("yield:{} url: {}".format(job_class, ajax_url.format(job_class, 1)))
            yield scrapy.Request(url=ajax_url.format(job_class, 1),
                                 cookies={'JSESSIONID': str(uuid.uuid4()),
                                          'user_trace_token': str(uuid.uuid4())},
                                 callback=self.parse_page, dont_filter=True,
                                 meta={'pn': 1, 'job_class': job_class, 'max_retry_times': 5})
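A more general alternative, not used in the code above, is to percent-encode every keyword with urllib.parse.quote before formatting it into the URL, so any special character is handled rather than just '#'; a minimal sketch:

from urllib.parse import quote

# quote() with safe='' encodes '#' as %23 (and any other reserved character)
class_list = [quote(i, safe='') for i in class_list]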
