Web Scraping in Practice (1): Scraping NetEase's Youdao Translate

Contents

  • Analyzing the Page
  • Simulating the Encryption in Python
  • Code

Analyzing the Page

Capturing the request in the browser's developer tools shows that the translation is sent as a POST to translate_o, and that three of the form fields change on every request: salt, sign, and lts (the rest stay constant). That means those values are generated by the page's JavaScript.

[Figure 1: the captured POST request and its form data]
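For orientation, the captured form data is shaped roughly like the dictionary below; the salt, sign, and lts values are made-up placeholders, not a real capture.

# Shape of the captured form data (placeholder values, for illustration only)
captured_form = {
    "i": "hello",                               # the text being translated
    "salt": "16435263315201",                   # changes on every request
    "sign": "32-character hex string",          # changes on every request
    "lts": "1643526331520",                     # changes on every request
    "bv": "e70edeacd2efbca394a58b9e43a6ed2a",   # identical across requests
    # "from", "to", "client", "doctype", ... are constants as well
}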

Searching the site's JavaScript for where sign is assigned (JS reverse engineering) shows how these values are built: lts is the millisecond timestamp, salt is that timestamp with one random digit appended, and sign is the 32-character MD5 hex digest of "fanyideskweb" + word + salt + a constant key string.

[Figure 2: the sign-generation code located in the site's JavaScript]
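A quick way to confirm the reversed formula is to recompute sign from the word and salt of a real capture and compare it with the captured sign. The values below are hypothetical placeholders; substitute the ones from your own capture.

import hashlib

# Hypothetical values -- replace with the word, salt, and sign from one capture
captured_word = "hello"
captured_salt = "16435263315207"
captured_sign = "<sign value from the same capture>"

recomputed = hashlib.md5(
    f"fanyideskweb{captured_word}{captured_salt}Y2FYu%TNSbMCxc3t2u^XT".encode()
).hexdigest()
print(recomputed == captured_sign)  # True if the formula matches the site's JS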

Simulating the Encryption in Python

import time
import random
import hashlib

word = input("Enter the word to translate: ")

# lts: the current time in milliseconds, as a string
r = str(int(time.time() * 1000))

# salt: the timestamp with a random digit (0-9) appended
i = r + str(random.randint(0, 9))

# sign: 32-character hex MD5 of "fanyideskweb" + word + salt + constant key
md = hashlib.md5()
md.update(f"fanyideskweb{word + i}Y2FYu%TNSbMCxc3t2u^XT".encode())
sign = md.hexdigest()

With that, the preparation is done; here is the complete script.

Code

import requests, time, random, hashlib

word = input("Enter the word to translate: ")
url = "https://fanyi.youdao.com/translate_o?smartresult=dict&smartresult=rule"

r = str(int(time.time() * 1000))     # millisecond timestamp, as a string
i = r + str(random.randint(0, 9))    # timestamp plus a random digit 0-9 (the salt)
fty = hashlib.md5()
fty.update(f"fanyideskweb{word + i}Y2FYu%TNSbMCxc3t2u^XT".encode())
sign = fty.hexdigest()               # 32-character hex digest

data = {
    'i': word,
    'from': 'AUTO',
    'to': 'AUTO',
    'smartresult': 'dict',
    'client': 'fanyideskweb',
    'salt': i,
    'sign': sign,  # 32-character MD5 hex digest
    'lts': r,  # millisecond timestamp
    'bv': 'e70edeacd2efbca394a58b9e43a6ed2a',  # fixed value taken from the capture
    'doctype': 'json',
    'version': '2.1',
    'keyfrom': 'fanyi.web',
    'action': 'FY_BY_REALTlME',  # note the lowercase 'l' in 'REALTlME'
}
headers = {
    "Accept": 'application/json, text/javascript, */*; q=0.01',
    "Accept-Encoding": 'gzip, deflate, br',
    "Accept-Language": 'zh-CN,zh;q=0.9',
    "Connection": 'keep-alive',
    "Content-Length": '240',
    "Content-Type": 'application/x-www-form-urlencoded; charset=UTF-8',
    "Cookie": '[email protected]; JSESSIONID=aaa4cDSAL97NifsHDVP6x; OUTFOX_SEARCH_USER_ID_NCOO=1028423508.6875325; ___rl__test__cookies=1643526331520',
    "Host": 'fanyi.youdao.com',
    "Origin": 'https://fanyi.youdao.com',
    "Referer": 'https://fanyi.youdao.com/',
    "sec-ch-ua": '" Not A;Brand";v="99", "Chromium";v="96", "Google Chrome";v="96"',
    "sec-ch-ua-mobile": '?0',
    "sec-ch-ua-platform": '"Windows"',
    "Sec-Fetch-Dest": 'empty',
    "Sec-Fetch-Mode": 'cors',
    "Sec-Fetch-Site": 'same-origin',
    "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36',
    "X-Requested-With": '"XMLHttpRequest"'
}
resp = requests.post(url=url, data=data, headers=headers).json()
print(resp)
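If the request is accepted, the JSON response normally carries the translation in translateResult, a list of sentence lists whose entries hold the source ("src") and target ("tgt") text. The field names below reflect the commonly observed response format; check them against your own output.

# Pull out the translated text (field names per the usual response format)
if resp.get("errorCode") == 0:
    for sentence in resp["translateResult"]:
        for segment in sentence:
            print(segment["src"], "->", segment["tgt"])
else:
    print("Request rejected, errorCode:", resp.get("errorCode"))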
