Sending POST requests, carrying cookies, the response object, and advanced usage

Sending a POST request

The request body can be passed two ways, which differ in how the body is encoded:

  1. data={} → urlencoded: key=value&key=value
  2. json={} → the body is JSON-encoded

Usage:
res = requests.post('url')
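The difference between the two encodings can be seen without contacting any server: `requests.Request(...).prepare()` builds the request locally. A minimal sketch (the URL is a placeholder; nothing is sent over the network):

```python
import requests

# Prepare -- but do not send -- the same POST both ways to compare the bodies.
# The URL is a placeholder; no network traffic happens here.
form_req = requests.Request('POST', 'http://example.com/login',
                            data={'name': 'lqz', 'age': '19'}).prepare()
json_req = requests.Request('POST', 'http://example.com/login',
                            json={'name': 'lqz', 'age': '19'}).prepare()

print(form_req.headers['Content-Type'])  # application/x-www-form-urlencoded
print(form_req.body)                     # name=lqz&age=19
print(json_req.headers['Content-Type'])  # application/json
print(json_req.body)                     # the same pairs, JSON-encoded
```

Passing `data=` produces the urlencoded form body; passing `json=` serializes the dict and sets the JSON content type automatically.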

Simulating a login

import requests

data = {
    'username': '[email protected]',
    'password': 'lqz123',
    'captcha': 'xxxx',
    'ref': 'http://www.aa7a.cn/',
    'act': 'act_login',
}
res = requests.post('http://www.aa7a.cn/user.php', data=data)
print(res.text)

Carrying cookies

Two ways:

  1. in the request headers
  2. via the cookies parameter

Carrying cookies in the request headers

header = {
    'Cookie': 'cto_bundle=_DpLll9WcDY3UCUyRlBnWDZtZEp5MVFZZHVHeExtdmtwNjVVYkh6SUglMkZoVmFaSWFtc1JrZWo0aVRURzdOeUhTUHpTekslMkJ0RjRnOXhqOWhXM0piM0hXVFlVVk9MUGpNWFdCS0dveFBFSEhHVmJRV01uVWZ1bjloTU9YY2VwQWF2NWxFQWhkZnM5TERxMVlUUkt4WUo2dWR4QkZOMmclM0QlM0Q; _jzqx=1.1691985966.1691985966.1.jzqsr=aa7a%2Ecn|jzqct=/.-; ECS_ID=a9bd0486ba5b0f2f65637f64719d72e75add4995; ECS[visit_times]=8; _jzqa=1.2593057364546894000.1672632545.1691985966.1698896326.9; _jzqc=1; _jzqy=1.1672632545.1698896326.1.jzqsr=baidu.-; _jzqckmp=1; Hm_lvt_c29657ca36c6c88e02fed9a397826038=1698896327; mediav=%7B%22eid%22%3A%22179539%22%2C%22ep%22%3A%22%22%2C%22vid%22%3A%22%22%2C%22ctn%22%3A%22%22%2C%22vvid%22%3A%22%22%2C%22_mvnf%22%3A1%2C%22_mvctn%22%3A0%2C%22_mvck%22%3A1%2C%22_refnf%22%3A0%7D; __xsptplus422=422.9.1698896372.1698896372.1%234%7C%7C%7C%7C%7C%23%23%23; Qs_lvt_201322=1686282645%2C1691979017%2C1691979065%2C1698896326%2C1698896643; __xsptplusUT_422=1; _qzjc=1; ECS[username]=616564099%40qq.com; ECS[user_id]=61399; ECS[password]=4a5e6ce9d1aba9de9b31abdf303bbdc2; _qzjb=1.1698896326296.7.0.0.0; _qzjto=7.1.0; _qzja=1.1154697895.1672632545402.1691985966161.1698896326296.1698896649094.1698896661921.616564099%2540qq_com.1.0.23.8; Hm_lpvt_c29657ca36c6c88e02fed9a397826038=1698896662; _jzqb=1.18.10.1698896326.1; Qs_pv_201322=1719398001187156200%2C1811149594340033000%2C1691949923920640300%2C4523589869790951000%2C2247541305320030700'
}
res = requests.get('http://www.aa7a.cn/', headers=header)

Auto-upvote example
data = {
    'linkId': '40480287'
}
header = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36',
    'Cookie': 'deviceId=web.eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJqaWQiOiIwNGZlMmMxYy00Mjc4LTRlMzMtYjM5OS0zNTI0NTNiNzQ3ZDkiLCJleHBpcmUiOiIxNjgxNDQyNDI2ODUxIn0.k41tJ-jL5Y6TO4BabrtzUv8N35eLrfqfdvBdXEmtqxE; __snaker__id=kNv5rzTfGa3xsaqh; YD00000980905869%3AWM_TID=slUolCNNTm5FRQRAEVaBfKlPbFQL7kFG; Hm_lvt_03b2668f8e8699e91d479d62bc7630f1=1698896819; gdxidpyhxdE=4hhv3p0G9z1GUt%2BbmquM7o8uxAnU21e5Xv0hEnOwniLE617vQyijd%2FRfrVM3c8u%2Bb%2BSzE%2B612mU4JXDi6SwQC6tAIHEf9RPVR%2FZQB9AH0V4gazYRMoMlmhHeYBgfjGfe%2BhbYbX3x%2BdKE2dQUo1tJuYMCZr%2B8QdQapLAChI%5CbpM%2BhkIne%3A1698897721336; YD00000980905869%3AWM_NI=817ayMhIabFSD%2BFdoaOokCH5%2FLBINb0apxw2XgvsY2zLJrWi0bgIB1eE%2FAPrdZIt4NVgofeOgPy92Y73HLGf70hYzYHH%2BKz%2FlrsYT9j4KFUbo6Vx4OY6R6o0nZ7HJYrTYm0%3D; YD00000980905869%3AWM_NIKE=9ca17ae2e6ffcda170e2e6eeccea7c97bff99ad04eb3bc8bb6c44f839a8e83d463b889f8b9c25ebc868ad5f52af0fea7c3b92afceff982fc47ac97af83c55988f58a85ee6b8db1a5a4cb3fed948eaacd6af4bfc099f63a9cb9ba94dc3aba948693d663f4bea9aac13ba79be1a2d37aa8868795b74696b9a187e845a590b7dab153f5ed88adc66ef5f0a8baf9538c98e5acf925b0aebdbad733aaec85a7d86aadea9a99b7459bea9ed7ee3cae86f9a9c953b28c978eb337e2a3; token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJqaWQiOiJjZHVfNTMyMDcwNzg0NjAiLCJleHBpcmUiOiIxNzAxNDg4ODM3Nzg0In0.3zOo3zk8OEO2kmYOX2ExY_oc6wm3eeM0y9VUyA4er-w; Hm_lpvt_03b2668f8e8699e91d479d62bc7630f1=1698896838'
}

res = requests.post('https://dig.chouti.com/link/vote', data=data, headers=header)
print(res.text)

Method 2: the cookies parameter

The cookies parameter takes a dict

s = 'cto_bundle=_DpLll9WcDY3UCUyRlBnWDZtZEp5MVFZZHVHeExtdmtwNjVVYkh6SUglMkZoVmFaSWFtc1JrZWo0aVRURzdOeUhTUHpTekslMkJ0RjRnOXhqOWhXM0piM0hXVFlVVk9MUGpNWFdCS0dveFBFSEhHVmJRV01uVWZ1bjloTU9YY2VwQWF2NWxFQWhkZnM5TERxMVlUUkt4WUo2dWR4QkZOMmclM0QlM0Q; _jzqx=1.1691985966.1691985966.1.jzqsr=aa7a%2Ecn|jzqct=/.-; ECS_ID=a9bd0486ba5b0f2f65637f64719d72e75add4995; ECS[visit_times]=8; _jzqa=1.2593057364546894000.1672632545.1691985966.1698896326.9; _jzqc=1; _jzqy=1.1672632545.1698896326.1.jzqsr=baidu.-; _jzqckmp=1; Hm_lvt_c29657ca36c6c88e02fed9a397826038=1698896327; mediav=%7B%22eid%22%3A%22179539%22%2C%22ep%22%3A%22%22%2C%22vid%22%3A%22%22%2C%22ctn%22%3A%22%22%2C%22vvid%22%3A%22%22%2C%22_mvnf%22%3A1%2C%22_mvctn%22%3A0%2C%22_mvck%22%3A1%2C%22_refnf%22%3A0%7D; __xsptplus422=422.9.1698896372.1698896372.1%234%7C%7C%7C%7C%7C%23%23%23; Qs_lvt_201322=1686282645%2C1691979017%2C1691979065%2C1698896326%2C1698896643; __xsptplusUT_422=1; _qzjc=1; ECS[username]=616564099%40qq.com; ECS[user_id]=61399; ECS[password]=4a5e6ce9d1aba9de9b31abdf303bbdc2; _qzjb=1.1698896326296.7.0.0.0; _qzjto=7.1.0; _qzja=1.1154697895.1672632545402.1691985966161.1698896326296.1698896649094.1698896661921.616564099%2540qq_com.1.0.23.8; Hm_lpvt_c29657ca36c6c88e02fed9a397826038=1698896662; _jzqb=1.18.10.1698896326.1; Qs_pv_201322=1719398001187156200%2C1811149594340033000%2C1691949923920640300%2C4523589869790951000%2C2247541305320030700'
# The raw header is 'key=value' pairs separated by '; '; split each pair on the first '=' only
cookies = dict(item.split('=', 1) for item in s.split('; '))
res = requests.get('http://www.aa7a.cn/', cookies=cookies)
print('[email protected]' in res.text)
On a successful login → the response carries Set-Cookie headers

Get the cookies with res.cookies

data = {
    'username': '[email protected]',
    'password': 'lqz123',
    'captcha': 'xxxx',
    'ref': 'http://www.aa7a.cn/',
    'act': 'act_login',
}
res = requests.post('http://www.aa7a.cn/user.php', data=data)
print(res.text)
# Take the cookies from the login response and request the homepage with them
res = requests.get('http://www.aa7a.cn/', cookies=res.cookies)
print('[email protected]' in res.text)

Using a Session

Up to now we had to carry the cookies by hand on every request

from requests import Session
session = Session()
data = {
    'username': '[email protected]',
    'password': 'lqz123',
    'captcha': 'xxxx',
    'ref': 'http://www.aa7a.cn/',
    'act': 'act_login',
}
res = session.post('http://www.aa7a.cn/user.php', data=data)
res = session.get('http://www.aa7a.cn/')  # the session handles cookies automatically; no need to pass them by hand
print('[email protected]' in res.text)

The response object

An HTTP exchange → in Python → becomes two objects → a request object and a response object

  1. Backend (Django):

    • request → the HTTP request wrapped as a Python object

    • the "four response helpers" (HttpResponse, render, redirect, JsonResponse) → the HTTP response wrapped as a response

  2. Crawlers:

    • request object → requests.get → the HTTP request
    • response object → the returned data → the HTTP response wrapped as a response object

The request and response objects on the backend and in a crawler are not instances of the same class, but their contents are essentially the same → at bottom both are just the HTTP request and response.

Response object attributes

  1. response.text: the response body decoded to a string
  2. response.content: the raw response body as bytes
  3. response.status_code: the HTTP status code
  4. response.headers: the response headers
  5. response.cookies.get_dict(): the response cookies as a dict
  6. response.cookies.items(): the response cookies as key-value pairs
  7. response.url: the requested URL
  8. response.history: (good to know) the URLs visited along the way; only populated when the request was redirected
  9. response.encoding: the encoding used to decode the body
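These attributes can be exercised without any network traffic by filling in a Response by hand (a contrived sketch; in real code the object comes back from requests.get/post):

```python
import requests

# Build a Response manually (no network) just to demonstrate the attributes above.
resp = requests.models.Response()
resp.status_code = 200
resp.url = 'http://example.com/'
resp.headers['Content-Type'] = 'text/html; charset=utf-8'
resp.encoding = 'utf-8'
resp._content = '你好 requests'.encode('utf-8')  # the raw body bytes

print(resp.status_code)               # 200
print(resp.content)                   # the raw bytes of the body
print(resp.text)                      # the body decoded with resp.encoding
print(resp.headers['Content-Type'])   # text/html; charset=utf-8
```

Note that `.content` is always bytes, while `.text` is `.content` decoded with `.encoding`.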

Downloading a file (here, an image)

response = requests.get('https://img3.chouti.com/CHOUTI_231102_8B10E74FBE2646748DE951D4EAB1F1E7.jpg')
# print(response.content)

# Save it locally -- the result is this image
# (for large files, also pass stream=True to requests.get so the whole
#  body is not read into memory at once)
with open('致命诱惑.jpg', 'wb') as f:
    # f.write(response.content)  # fine for small files
    for line in response.iter_content(1024):  # read in 1 KB chunks
        f.write(line)

Advanced usage

  1. Garbled text (encoding issues)
    response.text → the body comes back from the server as bytes
    → requests decodes it to a string using the encoding it inferred from the response
    if the result is garbled, set response.encoding = 'gbk' (or whatever the site really uses) and print again
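The effect can be reproduced offline: decoding GBK bytes with the wrong codec yields mojibake, and setting the right encoding makes `.text` readable again. A contrived sketch that fills in a Response by hand instead of hitting a real server:

```python
import requests

# Simulate a server that sent a GBK body without declaring its charset
# (no network involved -- the Response is built manually for the demo).
resp = requests.models.Response()
resp.status_code = 200
resp._content = '岳阳楼记'.encode('gbk')  # the raw body bytes, actually GBK

resp.encoding = 'utf-8'   # wrong guess -> mojibake
garbled = resp.text
print(garbled)

resp.encoding = 'gbk'     # correct codec -> readable text
print(resp.text)          # 岳阳楼记
```

`.text` is recomputed from `.content` each time, so changing `.encoding` and re-reading `.text` is all it takes.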

  2. Parsing JSON

import requests
response = requests.get('http://httpbin.org/get')

import json
res1 = json.loads(response.text)  # too cumbersome

res2 = response.json()  # get the parsed body directly

print(res1 == res2)  # True
  3. HTTPS certificate errors
    Requests to an https address may raise an error: https expects a valid certificate → without one the request can fail
    To skip the certificate check → no error
    Skip verification (a warning is printed, returns 200): response = requests.get('https://www.12306.cn', verify=False)

    • Silencing the warning:
      import urllib3
      urllib3.disable_warnings()
      response = requests.get('https://www.12306.cn', verify=False)
      print(response.text)
      
  4. Timeouts
    Raise an error if no response arrives within the given time

import requests
response = requests.get('https://www.baidu.com', timeout=0.0001)
  5. Authentication settings → hardly worth learning any more; sites rarely use this HTTP-level auth nowadays → not the same as a front-end login form

  6. Exception handling
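Every error requests raises is a subclass of `requests.exceptions.RequestException`, so one except clause can cover timeouts, DNS failures, and connection errors alike. A hedged sketch (the URL and the absurdly small timeout are just illustrative):

```python
import requests
from requests.exceptions import ConnectTimeout, RequestException

try:
    # A tiny timeout forces an error for demonstration purposes.
    res = requests.get('https://www.baidu.com', timeout=0.0001)
    print(res.status_code)
except ConnectTimeout:
    print('connection timed out')
except RequestException as e:  # base class of every requests error
    print('request failed:', e)
```

Catching the specific subclass first lets you react differently to timeouts while still having a safety net for everything else.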

  7. Uploading files

import requests
files = {'file': open('a.jpg', 'rb')}
response = requests.post('http://httpbin.org/post', files=files)
print(response.status_code)
  On the Django backend, the uploaded file is retrieved with request.FILES.get('file')
