urllib is used to simulate a browser sending requests to a web server.
urllib.request.urlopen() takes a URL (or a Request object) and returns the page's response.
import urllib.request
# Make a GET request
response = urllib.request.urlopen("http://www.baidu.com")
print(response.read().decode('utf-8'))  # decode the fetched page source as UTF-8
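Besides read(), the response object returned by urlopen also exposes the status code and response headers; a minimal sketch reusing the response from the GET request above (looking up the Server header is just an example):

print(response.status)  # HTTP status code, e.g. 200
print(response.getheaders())  # all response headers as a list of (name, value) tuples
print(response.getheader("Server"))  # a single header looked up by name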
# Make a POST request
import urllib.parse
data = bytes(urllib.parse.urlencode({"hello": "world"}), encoding="utf-8")  # form data must be url-encoded bytes
response = urllib.request.urlopen("http://httpbin.org/post", data=data)  # passing data makes urlopen send a POST
print(response.read().decode("utf-8"))
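If the server is slow or the request fails outright, urlopen also accepts a timeout and the error can be caught; a minimal sketch against http://httpbin.org/get (the deliberately tight 0.01-second timeout is just to make the failure path easy to trigger):

import socket
import urllib.error

try:
    response = urllib.request.urlopen("http://httpbin.org/get", timeout=0.01)  # very short timeout for demonstration
    print(response.read().decode("utf-8"))
except (urllib.error.URLError, socket.timeout) as e:  # depending on where it times out, either may be raised
    print("request failed:", e)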
urllib.request.Request() wraps the information to be sent with a request (the request itself is still sent with urlopen); this is the key to impersonating a real browser. The headers usually just carry your own User-Agent; if the site you are scraping requires login, you can also include your cookie. Omitting other headers is generally fine.
url = "https://www.douban.com"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"
}
req = urllib.request.Request(url=url, headers=headers)  # build the request with custom headers
response = urllib.request.urlopen(req)  # urlopen accepts a Request object as well as a plain URL
print(response.read().decode("utf-8"))
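Headers and a POST payload can also be combined in one Request object; a minimal sketch against http://httpbin.org/post, which simply echoes back whatever it receives (the hello=world form field is just a placeholder):

import urllib.parse
import urllib.request

url = "http://httpbin.org/post"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"
}
data = bytes(urllib.parse.urlencode({"hello": "world"}), encoding="utf-8")
req = urllib.request.Request(url=url, data=data, headers=headers, method="POST")  # headers plus form data in one request
response = urllib.request.urlopen(req)
print(response.read().decode("utf-8"))  # httpbin echoes the headers and form data it received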