Python Crawlers: the urllib Library

Some useful header fields

User-Agent : some servers or proxies use this value to decide whether the request came from a browser
Content-Type : when calling a REST API, the server checks this value to decide how to parse the HTTP body.
application/xml : used for XML RPC, e.g. RESTful/SOAP calls
application/json : used for JSON RPC calls
application/x-www-form-urlencoded : used when a browser submits a web form

When consuming a server's RESTful or SOAP services, a wrong Content-Type will cause the server to reject the request.

Importing the libraries

# Python 2: urllib2 handles requests, urllib provides urlencode
import urllib2
import urllib
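An aside not in the original post: these imports are Python 2 only. In Python 3 the two modules were merged and reorganized, so the equivalents (a sketch, assuming Python 3) are:

```python
# Python 3 equivalents of the Python 2 imports above:
#   urllib2.urlopen / urllib2.Request  ->  urllib.request
#   urllib.urlencode                   ->  urllib.parse
import urllib.request
import urllib.parse

# sanity check: the functions used in this post exist under the new names
print(hasattr(urllib.request, "urlopen"))
print(hasattr(urllib.parse, "urlencode"))
```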

POST

# POST
# parameters
values = {"username": "123", "password": "123"}
# URL-encode them into the request body
data = urllib.urlencode(values)
# API endpoint
url = "https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn"
# build the request; passing data makes it a POST
request = urllib2.Request(url, data)
# send it and read the response
response = urllib2.urlopen(request)
print response.read()
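A rough Python 3 translation of the POST snippet above (my sketch, not from the original post; note that in Python 3 `urlencode` lives in `urllib.parse` and the request body must be bytes). Only the request object is built here, so no network access happens:

```python
import urllib.parse
import urllib.request

# same parameters as the Python 2 example above
values = {"username": "123", "password": "123"}
# in Python 3 the body must be bytes, so encode after urlencoding
data = urllib.parse.urlencode(values).encode("utf-8")
url = "https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn"

# passing data makes this a POST request
request = urllib.request.Request(url, data)
print(request.get_method())  # a Request with a body defaults to POST

# to actually send it:
# response = urllib.request.urlopen(request)
# print(response.read())
```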

GET

# GET: parameters are appended to the URL instead of sent in the body
values = {}
values["username"] = "123"
values["password"] = "123"
data = urllib.urlencode(values)
url = "https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn"
# the URL already contains a query string, so join with "&" rather than "?"
geturl = url + "&" + data
request = urllib2.Request(geturl)
response = urllib2.urlopen(request)
print response.read()
print geturl
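The same GET request sketched for Python 3 (again my translation, not from the original). Since only the URL and request object are built, this runs without touching the network:

```python
import urllib.parse
import urllib.request

values = {}
values["username"] = "123"
values["password"] = "123"
data = urllib.parse.urlencode(values)
url = "https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn"
# the URL already has a query string, so join with "&" instead of "?"
geturl = url + "&" + data

request = urllib.request.Request(geturl)
print(request.get_method())  # no body, so this defaults to GET
print(geturl)
```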

To inspect requests, open the Chrome developer tools with Command+Shift+C (on macOS).

Setting headers


# API endpoint
url = "https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn"
# user agent
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5)"
# headers
headers = {"User-Agent": user_agent}
# parameters
values = {"username": "123", "password": "123"}
data = urllib.urlencode(values)
request = urllib2.Request(url, data, headers)
response = urllib2.urlopen(request)
print response.read()
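For reference, a Python 3 sketch of the same request with custom headers (my assumption, not from the original). `Request` stores headers in capitalized form, which we can check without sending anything:

```python
import urllib.parse
import urllib.request

url = "https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn"
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5)"
headers = {"User-Agent": user_agent}
values = {"username": "123", "password": "123"}
data = urllib.parse.urlencode(values).encode("utf-8")

request = urllib.request.Request(url, data, headers)
# urllib normalizes header names to Capitalized-form internally,
# so the stored key is "User-agent"
print(request.get_header("User-agent"))
```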

Dealing with anti-hotlinking (Referer checks)

headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',
           'Referer': 'http://www.zhihu.com/articles'}

Setting a proxy

import urllib2
enable_proxy = True
proxy_handler = urllib2.ProxyHandler({"http" : 'http://some-proxy.com:8080'})
null_proxy_handler = urllib2.ProxyHandler({})
if enable_proxy:
    opener = urllib2.build_opener(proxy_handler)
else:
    opener = urllib2.build_opener(null_proxy_handler)
# install_opener makes this opener the global default for urlopen()
urllib2.install_opener(opener)
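The proxy setup translated to Python 3 (my sketch; the proxy address is the same placeholder used above). Building and installing the opener requires no network access:

```python
import urllib.request

enable_proxy = True
# placeholder proxy address, as in the Python 2 snippet above
proxy_handler = urllib.request.ProxyHandler({"http": "http://some-proxy.com:8080"})
null_proxy_handler = urllib.request.ProxyHandler({})

if enable_proxy:
    opener = urllib.request.build_opener(proxy_handler)
else:
    opener = urllib.request.build_opener(null_proxy_handler)

# install_opener makes urlopen() use this opener globally
urllib.request.install_opener(opener)

# confirm the proxy handler was wired into the opener
print(any(isinstance(h, urllib.request.ProxyHandler) for h in opener.handlers))
```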

Setting a timeout

Without data:

import urllib2
response = urllib2.urlopen('http://www.baidu.com', timeout=10)

With data:

import urllib
import urllib2
data = urllib.urlencode({"username": "123", "password": "123"})
# with data present, timeout is the third positional argument
response = urllib2.urlopen('http://www.baidu.com', data, 10)

Using DebugLog

import urllib2
# debuglevel=1 prints the request and response headers to stdout as they are sent
httpHandler = urllib2.HTTPHandler(debuglevel=1)
httpsHandler = urllib2.HTTPSHandler(debuglevel=1)
opener = urllib2.build_opener(httpHandler, httpsHandler)
urllib2.install_opener(opener)
response = urllib2.urlopen('http://www.baidu.com')
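The same debug logging in Python 3 (my sketch, not from the original): `urllib.request`'s handlers accept the same `debuglevel` argument, which is passed down to `http.client`:

```python
import urllib.request

# debuglevel=1 makes http.client print request and response
# headers to stdout as they go over the wire
http_handler = urllib.request.HTTPHandler(debuglevel=1)
https_handler = urllib.request.HTTPSHandler(debuglevel=1)
opener = urllib.request.build_opener(http_handler, https_handler)
urllib.request.install_opener(opener)

# to see the debug output, actually make a request:
# response = urllib.request.urlopen('http://www.baidu.com')
```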
