Python 3.5.0 Web Scraping Basics: Using urllib

### GET

The request module of urllib makes it very easy to fetch the content of a URL: send a GET request to the given page and get back the HTTP response:

    from urllib import request

    with request.urlopen('http://www.baidu.com') as f:
        data = f.read()
        # print the status code to check whether the request succeeded
        print('status:', f.status, f.reason)
        for k, v in f.getheaders():
            print('%s: %s' % (k, v))
        print('data:', data.decode('utf-8'))

You should see output similar to the following:

    Connection: close
    Content-Type: text/html
    Last-Modified: Mon Dec 12 14:35:49 2016
    Vary: Accept-Encoding
    Date: Mon Dec 12 14:35:49 2016
    Cache-Control: no-cache
    Content-Length: 2090
    data: <HTML of the page, omitted>
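
Query parameters for a GET request are carried in the URL itself. Below is a minimal sketch of building such a URL with urllib.parse; the search URL and the `wd` parameter are assumed here for illustration and are not part of the original example:

    from urllib import request, parse

    # percent-encode the query parameters instead of concatenating raw strings
    params = parse.urlencode({'wd': 'python'})  # -> 'wd=python'
    url = 'http://www.baidu.com/s?' + params

    with request.urlopen(url) as f:
        print('status:', f.status, f.reason)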


If we want to simulate a browser sending a GET request, we need a Request object. By adding HTTP headers to the Request object, we can disguise the request as coming from a browser. For example, simulate an iPhone 6 requesting the Douban homepage:

    from urllib import request

    # create the Request object
    req = request.Request('http://www.douban.com/')
    # add a request header
    req.add_header('User-Agent', 'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25')
    with request.urlopen(req) as f:
        data = f.read()
        # print the status code to check whether the request succeeded
        print('status:', f.status, f.reason)
        for k, v in f.getheaders():
            print('%s: %s' % (k, v))
        print('data:', data.decode('utf-8'))
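
As a side note, the same headers can be supplied in one go through the `headers` argument of the Request constructor instead of calling `add_header()` repeatedly. A minimal sketch (the shortened User-Agent value here is illustrative):

    from urllib import request

    # pass all headers at construction time; equivalent to calling add_header()
    headers = {'User-Agent': 'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X)'}
    req = request.Request('http://www.douban.com/', headers=headers)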

### POST
To send a POST request, just pass the parameters in bytes form via the data argument.

We'll simulate a Weibo login: first read the login email and password, then encode them as username=xxx&password=xxx, following the format expected by weibo.cn's login page:

    from urllib import request, parse

    print('login to weibo.cn....')
    email = input('Email: ')
    password = input('Password: ')
    # the POST body, equivalent to httpBody in Objective-C
    login_data = parse.urlencode([('username', email), ('password', password), ('entry', 'mweibo')])

    # create the Request object
    req = request.Request('https://passport.weibo.cn/sso/login')

    # set the request headers
    req.add_header('Origin', 'https://passport.weibo.cn')
    req.add_header('User-Agent', 'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25')
    req.add_header('Referer', 'https://passport.weibo.cn/signin/login?entry=mweibo&res=wel&wm=3349&r=http%3A%2F%2Fm.weibo.cn%2F')

    with request.urlopen(req, data=login_data.encode('utf-8')) as f:
        print('status:', f.status, f.reason)
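
The response body can be read just like in the GET examples. Below is a minimal sketch of decoding it, continuing from the `req` and `login_data` defined above, and assuming (this is not confirmed by the original) that the endpoint answers with UTF-8 JSON containing a `retcode` field:

    import json
    from urllib import request

    with request.urlopen(req, data=login_data.encode('utf-8')) as f:
        body = f.read().decode('utf-8')
        try:
            result = json.loads(body)
            print('retcode:', result.get('retcode'))  # 'retcode' is an assumed field name
        except ValueError:
            print(body)  # not JSON after all; print the raw text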

### Handler

If you need more advanced control, such as accessing a site through a proxy, use a ProxyHandler. Example code:

    import urllib.request

    # route HTTP requests through a proxy, with basic authentication
    proxy_handler = urllib.request.ProxyHandler({'http': 'http://www.example.com:3128/'})
    proxy_auth_handler = urllib.request.ProxyBasicAuthHandler()
    proxy_auth_handler.add_password('realm', 'host', 'username', 'password')
    opener = urllib.request.build_opener(proxy_handler, proxy_auth_handler)
    with opener.open('http://www.example.com/login.html') as f:
        pass
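
If every subsequent request should go through the proxy, the opener can also be installed globally. A minimal sketch, continuing from the `opener` built above:

    import urllib.request

    # after install_opener(), plain urlopen() calls route through this opener
    # (and therefore through the proxy configured above)
    urllib.request.install_opener(opener)

    with urllib.request.urlopen('http://www.example.com/') as f:
        print('status:', f.status, f.reason)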
        


