Lesson 55: On the Self-Cultivation of a Web Crawler: Hiding

The content is sourced from the web; I have only tidied it up a little here. Should any copyright questions arise, the rights belong to the official 小甲鱼 (FishC) materials.

0. Write down what you learned in this lesson. Any format is fine; recalling and restating the material is a good way to reinforce memory!

  • Modifying the User-Agent
    The excerpt below is from the urllib.request.Request documentation and describes how the headers argument is used to set the User-Agent (by default, urllib identifies itself as something like Python-urllib/3.x, which servers easily recognize as a script).
[Figure: 设置User-Agent.png – documentation excerpt on setting the User-Agent]
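
To check what a server actually receives, here is a minimal sketch; it assumes the public echo endpoint http://httpbin.org/user-agent (not part of the lesson) is reachable, which simply reports back the User-Agent header it was sent:

import urllib.request

url = 'http://httpbin.org/user-agent'  # assumed echo service, not from the lesson

# Default request: the server sees something like "Python-urllib/3.x"
response = urllib.request.urlopen(url)
print(response.read().decode('utf-8'))

# Same request with a browser-style User-Agent
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
response = urllib.request.urlopen(req)
print(response.read().decode('utf-8'))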

There are two ways to set this headers parameter:
1️⃣ Pass the headers argument in when instantiating Request.

import urllib.request
import urllib.parse
import json

content = input("Enter the text to translate: ")
url = "http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=http://www.youdao.com"
# browser-style headers so the request looks like it comes from a real browser
head = {}
head['Referer'] = 'http://fanyi.youdao.com/'
head['User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'

# form fields expected by the Youdao translation endpoint
data = {}
data['type'] = 'AUTO'
data['i'] = content
data['doctype'] = 'json'
data['version'] = '2.1'
data['keyfrom'] = 'fanyi.web'
data['ue'] = 'UTF-8'
data['typoResult'] = 'true'

data = urllib.parse.urlencode(data).encode('utf-8')  # form-encode, then to bytes
req = urllib.request.Request(url, data, headers=head)  # headers passed at construction
response = urllib.request.urlopen(req)
html = response.read().decode('utf-8')
target = json.loads(html)
print("翻译结果:%s" % (target['translateResult'][0][0]['tgt']))
print(req.headers)

Output:

Enter the text to translate: 我爱你
Translation: I love you
{'Referer': 'http://fanyi.youdao.com/', 'User-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'}

Notice that the key prints as 'User-agent': Request normalizes header names with str.capitalize() when storing them, so only the first letter stays upper-case.

2️⃣ Add headers to the Request object with its add_header() method.

import urllib.request
import urllib.parse
import json

content = input("Enter the text to translate: ")
url = "http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=http://www.youdao.com"
# form fields expected by the Youdao translation endpoint
data = {}
data['type'] = 'AUTO'
data['i'] = content
data['doctype'] = 'json'
data['version'] = '2.1'
data['keyfrom'] = 'fanyi.web'
data['ue'] = 'UTF-8'
data['typoResult'] = 'true'

data = urllib.parse.urlencode(data).encode('utf-8')
req = urllib.request.Request(url, data)
# headers added after construction via add_header()
req.add_header('Referer', 'http://fanyi.youdao.com')
req.add_header('User-Agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36')
response = urllib.request.urlopen(req)
html = response.read().decode('utf-8')
target = json.loads(html)
print("翻译结果:%s" % (target['translateResult'][0][0]['tgt']))
print(req.headers)

Output:

Enter the text to translate: I want to go back.
Translation: 我想回去。
{'Referer': 'http://fanyi.youdao.com', 'User-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'}
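
Request also exposes header_items(), which returns the stored headers as a list of (name, value) tuples, handy for a quick inspection:

print(req.header_items())
# e.g. [('Referer', 'http://fanyi.youdao.com'), ('User-agent', 'Mozilla/5.0 ...')]
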
  • If the same IP address hits a server very frequently within a short time, that is clearly not a human operating a browser, so servers commonly enforce a per-IP access-rate threshold. Once the threshold is exceeded, the client is assumed to be a crawler and is served a CAPTCHA page instead; since a crawler cannot fill in the CAPTCHA, it can no longer scrape normally and is effectively blocked. There are two ways to work around this:

1️⃣ Delay between requests;

import urllib.request
import urllib.parse
import json
import time

url = "http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=http://www.youdao.com"
while True:
    content = input('Enter the text to translate (type "q!" to quit): ')
    if content == 'q!':
        break
    data = {}
    data['type'] = 'AUTO'
    data['i'] = content
    data['doctype'] = 'json'
    data['version'] = '2.1'
    data['keyfrom'] = 'fanyi.web'
    data['ue'] = 'UTF-8'
    data['typoResult'] = 'true'
    data = urllib.parse.urlencode(data).encode('utf-8')
    req = urllib.request.Request(url, data)
    req.add_header('Referer', 'http://fanyi.youdao.com')
    req.add_header('User-Agent',
                   'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36')
    response = urllib.request.urlopen(req)
    html = response.read().decode('utf-8')
    target = json.loads(html)
    print("翻译结果:%s" % (target['translateResult'][0][0]['tgt']))
    time.sleep(5)  # 5秒
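
A fixed 5-second pause is itself easy to fingerprint. A small refinement (my addition, not part of the original lesson) is to randomize the delay, for example:

import random
import time

# sleep a random 3-8 seconds so the request rhythm looks less mechanical
time.sleep(random.uniform(3, 8))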

2️⃣ Use a proxy;

import urllib.request

url = 'http://www.whatismyip.com.tw/'
# ProxyHandler takes a dict mapping scheme ('http') to a proxy address (IP:port)
proxy_support = urllib.request.ProxyHandler({'http': '211.138.121.38:80'})
# build an opener that routes requests through the proxy
opener = urllib.request.build_opener(proxy_support)
# install it as the process-wide default opener
urllib.request.install_opener(opener)
# check whether the IP address the site sees has changed
response = urllib.request.urlopen(url)
html = response.read().decode('utf-8')
print(html)

A more robust variant picks a random proxy from a user-supplied list on each request:

import urllib.request
import urllib.error
import random

url = 'http://www.whatismyip.com.tw/'
print("添加代理IP地址(IP:端口号),多个IP地址间用英文的分号隔开!")
iplist = input("请开始输入:").split(sep=";")
while True:
    ip = random.choice(iplist)  # pick a proxy at random for this request
    proxy_support = urllib.request.ProxyHandler({'http': ip})
    opener = urllib.request.build_opener(proxy_support)
    opener.addheaders = [('User-Agent',
                          'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36')]
    urllib.request.install_opener(opener)
    try:
        print("Trying to connect via %s..." % ip)
        response = urllib.request.urlopen(url)
    except urllib.error.URLError:
        print("Request failed!")
    else:
        print("Request succeeded!")
    if input("Continue? (Y/N) ") == 'N':
        break
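
Note that install_opener() replaces the process-wide default opener used by urlopen(). If you would rather not touch global state, the opener can be called directly; a minimal sketch, reusing the example proxy from above:

import urllib.request

proxy_support = urllib.request.ProxyHandler({'http': '211.138.121.38:80'})
opener = urllib.request.build_opener(proxy_support)
# only requests made through this opener use the proxy;
# urllib.request.urlopen() elsewhere is unaffected
response = opener.open('http://www.whatismyip.com.tw/')
print(response.read().decode('utf-8'))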
