Python's web page downloader: urllib.request.urlopen (urllib2 in Python 2)

Three ways to download a web page with urllib.request (urllib2 in Python 2):

'''
Created on 2016-4-14
Three ways for a Python crawler to download a web page
@author: developer
'''

'''
In Python 3, urllib2 was renamed urllib.request and cookielib was renamed
http.cookiejar, so this script imports urllib.request and http.cookiejar.
'''
import urllib.request
import http.cookiejar
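# Compatibility sketch (my addition, not from the original post): if the same
# script also needs to run under Python 2, the new names can be bound with an
# import fallback like this. The alias names are arbitrary assumptions.
try:
    import urllib.request as urllib_request   # Python 3
    import http.cookiejar as cookiejar
except ImportError:
    import urllib2 as urllib_request          # Python 2
    import cookielib as cookiejar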


print("第一种方法")  #最简洁的方法
url='http://www.baidu.com'
#直接请求
response1 = urllib.request.urlopen(url)
#获取状态码,如果是200表示获取成功
print(response1.getcode())      #打印状态码
print(len(response1.read()))    #打印内容的长度
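# Side note (my addition, not in the original post): in Python 3, read()
# returns bytes and consumes the stream, so fetch again to decode the body
# as text. Falling back to utf-8 here is an assumption; the real charset is
# taken from the response headers when the server declares one.
response1b = urllib.request.urlopen(url)
charset = response1b.headers.get_content_charset() or 'utf-8'
print(response1b.read().decode(charset)[:100])   # first 100 decoded characters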

print("第二种方法")
#创建Request对象
request = urllib.request.Request(url)
#添加数据
#request.add_data('a','1')
#添加http的header
request.add_header("User-Agent", "Mozilla/5.0")  #伪装成浏览器
#发送Request请求获取结果
response2 = urllib.request.urlopen(request)
print(response2.getcode())
print(len(response2.read()))
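# Side note (my addition): Request.add_data() no longer exists in Python 3
# (removed in 3.4). POST data is passed to the Request constructor as
# URL-encoded bytes instead; the field {'a': '1'} mirrors the commented-out
# Python 2 call above.
import urllib.parse
post_data = urllib.parse.urlencode({'a': '1'}).encode('utf-8')
post_request = urllib.request.Request(url, data=post_data)
post_request.add_header("User-Agent", "Mozilla/5.0")
# response2b = urllib.request.urlopen(post_request)   # would send a POST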

'''
Handlers for special access scenarios:
HTTPCookieProcessor: pages that require a user login
ProxyHandler: access through a proxy
HTTPSHandler: HTTPS encrypted access
HTTPRedirectHandler: URLs that automatically redirect to one another
'''
print("第三种方法")
#创建cookie容器
cj = http.cookiejar.CookieJar()
#创建一个opener
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
#给urllib.request安装opener
urllib.request.install_opener(opener)
#使用带有cookie的urllib.request访问网页
response3 = urllib.request.urlopen(url)
print(response3.getcode())
print(cj)
print(response3.read())
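The code above only exercises HTTPCookieProcessor. As a hedged sketch of another handler from the list above, a proxy-based opener looks like this; the proxy address 127.0.0.1:8080 is a placeholder assumption, not a real server.

# Sketch only: replace the placeholder proxy with one you actually have.
proxy_handler = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:8080'})
proxy_opener = urllib.request.build_opener(proxy_handler)
# response4 = proxy_opener.open(url)   # would fetch the page via the proxy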
