Debugging an error in a Youdao Translate scraper

The request is clearly sent to the translate URL, yet what comes back is the site's homepage.

import requests
import json

url = 'https://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'
}
content = input('Enter text to translate: ')
# Form data copied from the captured browser request
# (note the stray leading space in every value)
data = {
    "i": content,
    "from": " AUTO",
    "to": " AUTO",
    "smartresult": " dict",
    "client": " fanyideskweb",
    "salt": " 16190082143222",
    "sign": " ccdd8deeedaab25b0e70b7be40dea4d5",
    "lts": " 1619008214322",
    "bv": " 3d91b10fc349bc3307882f133fbc312a",
    "doctype": " json",
    "version": " 2.1",
    "keyfrom": " fanyi.web",
    "action": " FY_BY_REALTlME",
}
res = requests.post(url, data=data, headers=headers)
res.encoding = 'utf-8'
html = res.text
print(html)
# r = json.loads(html)
# r = r['translateResult']
# r1 = r[0][0]['tgt']
# print(r1)

The printed response is the site's homepage HTML rather than the translation result.

Cause of the error:

Every value in the data dictionary was copied with a stray leading space (" AUTO" instead of "AUTO", " dict" instead of "dict", and so on), so the form fields sent to the server do not match what it expects, and the homepage comes back instead of the translation JSON.
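To see why the spaces matter, here is a minimal sketch using the standard library's urlencode (requests builds the form body in essentially the same way for a dict passed as data): a value with a leading space is transmitted as a different string than the one the server expects.

from urllib.parse import urlencode

# The leading space survives encoding, so the server sees "from=+AUTO" rather than "from=AUTO".
print(urlencode({"from": " AUTO", "to": " AUTO"}))   # from=+AUTO&to=+AUTO
print(urlencode({"from": "AUTO", "to": "AUTO"}))     # from=AUTO&to=AUTO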

After removing the extra spaces:
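For reference, a minimal sketch of the corrected data dictionary, assuming the only change is stripping the leading spaces (the salt, sign, lts and bv values are the ones captured above and may no longer be accepted by a live request):

data = {
    "i": content,
    "from": "AUTO",
    "to": "AUTO",
    "smartresult": "dict",
    "client": "fanyideskweb",
    "salt": "16190082143222",
    "sign": "ccdd8deeedaab25b0e70b7be40dea4d5",
    "lts": "1619008214322",
    "bv": "3d91b10fc349bc3307882f133fbc312a",
    "doctype": "json",
    "version": "2.1",
    "keyfrom": "fanyi.web",
    "action": "FY_BY_REALTlME",
}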

The script now prints the translation JSON instead of the homepage HTML.

Then parse the response with json.loads:

r = json.loads(html)            # parse the JSON string into a Python dict
r = r['translateResult']        # list of translated segments
r1 = r[0][0]['tgt']             # 'tgt' holds the translated text
print(r1)
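Equivalently, requests can decode the JSON itself; a minimal sketch assuming the response keeps the translateResult layout shown above, with a small guard in case the server again returns something other than JSON:

try:
    result = res.json()                             # same as json.loads(res.text)
    print(result['translateResult'][0][0]['tgt'])
except (ValueError, KeyError):
    # ValueError: the body was not JSON (e.g. the homepage came back again)
    # KeyError: the JSON did not contain a translation
    print('Unexpected response:', res.text[:200])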

The translated text is printed as expected.
