Fixing the 302 redirect problem in a scrapy CrawlSpider for lagou.com

When running the lagou.com crawler from the course video, requests fail with: DEBUG: Redirecting (302) to ... from ... . Searching other blogs (e.g. https://blog.csdn.net/qq_26582987/article/details/79703317) turns up the usual fix: attach cookies and headers to every request. In the author's code this looks like:

    def start_requests(self):
        # log in once with selenium and reuse the returned cookies
        self.cookies = selenium_login.login_lagou()
        print(type(self.cookies))
        print(self.headers)
        yield Request(url=self.start_urls[0],
                      cookies=self.cookies,
                      headers=self.headers,
                      callback=self.parse,
                      dont_filter=True)
Logging in from inside the CrawlSpider is awkward: the captcha can be complex, and the page sometimes auto-redirects before you finish typing it. Worse, every crawl run then needs a fresh login.

Instead, save the cookies from a single login to a JSON file, and have later runs read that file directly. The code:

LoginLaGou.py

import json

if __name__ == "__main__":
    # First run: log in once and save the cookies
    # (indent pretty-prints the saved dict; the default, None, is compact)
    # with open("cookies.json", "w", encoding='utf-8') as f:
    #     f.write(json.dumps(login_lagou(), indent=4))
    with open("cookies.json", "r", encoding='utf-8') as f:
        print(json.loads(f.read()))
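For reference, here is a minimal, self-contained sketch of the save/load round trip. The helper names (`cookies_from_selenium`, `save_cookies`, `load_cookies`) are illustrative, not from the author's code. Selenium's `driver.get_cookies()` returns a list of `{'name': ..., 'value': ...}` dicts, while scrapy's `Request(cookies=...)` also accepts a plain name-to-value dict, so converting once before saving keeps the file simple:

```python
import json
import os
import tempfile


def cookies_from_selenium(selenium_cookies):
    """Convert selenium's list of cookie dicts into the plain
    name -> value mapping that scrapy's Request(cookies=...) accepts."""
    return {c['name']: c['value'] for c in selenium_cookies}


def save_cookies(cookies, path):
    # indent=4 pretty-prints the saved dict (the default, None, is compact)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(cookies, f, indent=4)


def load_cookies(path):
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)


# Round-trip demo with fake cookies standing in for a real login
fake = [{'name': 'JSESSIONID', 'value': 'abc123'},
        {'name': 'user_trace_token', 'value': 'xyz789'}]
path = os.path.join(tempfile.gettempdir(), "cookies.json")
save_cookies(cookies_from_selenium(fake), path)
print(load_cookies(path))  # {'JSESSIONID': 'abc123', 'user_trace_token': 'xyz789'}
```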

In the CrawlSpider:

def start_requests(self):
    # read the saved cookies back from cookies.json
    with open(os.path.join(os.path.dirname(__file__), "cookies.json"), "r", encoding='utf-8') as f:
        self.cookies = json.loads(f.read())

    self.myheaders = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Host': 'www.lagou.com',
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36",
    }
    yield scrapy.Request(url=self.start_urls[0], cookies=self.cookies, headers=self.myheaders,
                         callback=self.parse, dont_filter=True)
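One caveat with CrawlSpider: requests generated by its `rules` do not automatically carry the custom headers set in `start_requests`, so follow-up pages can still hit the 302. Two settings-level knobs that help are Scrapy's cookie middleware and its default request headers. This is a hedged settings.py sketch under those assumptions, not the author's exact configuration:

```python
# settings.py (sketch)

# Let the built-in CookiesMiddleware carry the session cookies
# across requests instead of attaching them manually each time.
COOKIES_ENABLED = True

# Send browser-like headers on every request by default, so
# rule-generated requests look the same as the first one.
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'zh-CN,zh;q=0.9',
}

USER_AGENT = ("Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 "
              "(KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36")
```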

 
