Python 2.7 Scrapy: "SSL: CERTIFICATE_VERIFY_FAILED" when crawling https URLs

When Scrapy crawls an address that starts with https, it raises an "SSL: CERTIFICATE_VERIFY_FAILED" error.

Cause: since Python 2.7.9, urllib.urlopen verifies the server's SSL certificate when opening an https URL; when the target site uses a self-signed certificate, verification fails and this error is raised.
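To see where the error comes from, compare the two context factories in the ssl module (a sketch run on Python 3 here; the attributes behave the same on Python 2.7.9+):

```python
import ssl

# The default context verifies certificates and hostnames -- this is the
# behaviour that triggers CERTIFICATE_VERIFY_FAILED on self-signed sites.
default_ctx = ssl.create_default_context()
print(default_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(default_ctx.check_hostname)                    # True

# The unverified context skips both checks.
unverified_ctx = ssl._create_unverified_context()
print(unverified_ctx.verify_mode == ssl.CERT_NONE)   # True
print(unverified_ctx.check_hostname)                 # False
```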

Solutions:

1. Import ssl and pass an unverified context to urlopen:

import ssl
import urllib

# This restores the pre-2.7.9 behaviour of not verifying certificates.
context = ssl._create_unverified_context()

urllib.urlopen("https://stackoverflow.com/questions/27835619/urllib-and-ssl-certificate-verify-failed-error", context=context)

2. Override the default factory so the whole process uses an unverified global context:

import ssl

ssl._create_default_https_context = ssl._create_unverified_context
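The override can be sketched end-to-end like this (demonstrated via the context's attributes rather than a live request, so it runs offline):

```python
import ssl

# Point the default HTTPS context factory at the unverified one; every
# subsequent urlopen call in this process skips certificate verification.
ssl._create_default_https_context = ssl._create_unverified_context

# From now on urllib(2).urlopen("https://...") no longer raises
# CERTIFICATE_VERIFY_FAILED for self-signed certificates.
ctx = ssl._create_default_https_context()
print(ctx.verify_mode == ssl.CERT_NONE)  # True
```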

3. When using the popular HTTP library requests, disable the urllib3 warnings and fall back to an unverified global context:

import ssl

import requests

requests.packages.urllib3.disable_warnings()

try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    # Legacy Python that doesn't verify HTTPS certificates by default
    pass
else:
    # Handle target environment that doesn't support HTTPS verification
    ssl._create_default_https_context = _create_unverified_https_context
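Note that, as far as I understand, requests builds its own SSL contexts through urllib3 and does not consult ssl._create_default_https_context, so with requests the idiomatic way to skip verification is its verify flag. A minimal sketch (the commented-out URL is purely illustrative):

```python
import requests
import urllib3

# Silence the InsecureRequestWarning that verify=False would otherwise emit.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

session = requests.Session()
session.verify = False  # every request on this session skips certificate checks

# session.get("https://self-signed.example/")  # would no longer raise
print(session.verify)  # False
```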
