Getting Started with Python Web Scraping (Part 2)

The requests library

import requests

# GET request
response = requests.get("url")
print(response.content.decode('utf-8'))  # response.content is a bytes object
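Since `response.content` is bytes, it must be decoded before use. A minimal sketch of that bytes-to-str step, using a hard-coded byte string in place of a real response body:

```python
# Stand-in for response.content: the UTF-8 bytes of "中国"
raw = "中国".encode("utf-8")
print(raw)            # raw bytes, e.g. b'\xe4\xb8\xad\xe5\x9b\xbd'

# decode('utf-8') turns the bytes into a readable str
text = raw.decode("utf-8")
print(text)           # 中国
```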

params = {'wd': '中国'}
headers = {}
response = requests.get("url", params=params, headers=headers)
with open('   .html', 'w', encoding='utf-8') as fp:
    fp.write(response.content.decode('utf-8'))  # save the page to a local file
print(response.url)  # the final URL, with params encoded into the query string
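requests builds the final URL by percent-encoding the `params` dict into a query string. The same encoding can be reproduced with the standard library, which makes it easy to see what `response.url` will contain (the base URL below is only an illustration):

```python
from urllib.parse import urlencode

# The same params dict as above
params = {'wd': '中国'}

# Percent-encode it the way requests does for the query string
query = urlencode(params)
print(query)  # wd=%E4%B8%AD%E5%9B%BD

# Hypothetical base URL, joined the way requests would
url = 'https://www.baidu.com/s?' + query
```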

# POST request
import requests

data = {}
headers = {'Referer': '', 'User-Agent': ''}  # fill in real values as needed
response = requests.post('url', data=data, headers=headers)
print(response.text)  # response.json() parses a JSON body into a dict or list
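`response.json()` is essentially `json.loads` applied to the response body. A sketch with a hard-coded JSON string standing in for `response.text`:

```python
import json

# Stand-in for response.text: a JSON-encoded body
body = '{"code": 0, "data": [1, 2, 3]}'

# response.json() does roughly this: parse the str into Python objects
parsed = json.loads(body)
print(parsed['data'])  # [1, 2, 3]
```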

Using a proxy

import requests

proxy = {'http': ' : '}  # proxy address in host:port form
response = requests.get(url, proxies=proxy)
print(response.text)  # response.text is the decoded body, a str

Handling cookies

import requests

url = " "
data = {}
headers = {}
session = requests.Session()
session.post(url, data=data, headers=headers)  # e.g. log in; the session stores the cookies
response = session.get(url)  # later requests reuse the stored cookies automatically
with open(' .html', 'w', encoding='utf-8') as fp:
    fp.write(response.text)
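A Session keeps the cookies that the server sends back in `Set-Cookie` response headers and replays them on later requests. The header format itself can be seen with the standard library's cookie parser (the header value below is made up for illustration):

```python
from http.cookies import SimpleCookie

# Hypothetical Set-Cookie header value, as a server might send after login
header_value = 'sessionid=abc123; Path=/; HttpOnly'

# Parse it the way an HTTP client would before storing the cookie
cookie = SimpleCookie()
cookie.load(header_value)
print(cookie['sessionid'].value)  # abc123
```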
