Python Crawler Practice


What web page downloaders are available in Python?

urllib2 is the official built-in module; requests is a third-party package that is more powerful.
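For a quick comparison, here is a minimal sketch using requests (this example is not part of the original notes and assumes the requests package is installed):

import requests

# Download a page with the third-party requests package
response = requests.get("https://www.baidu.com", headers={"User-Agent": "Mozilla/5.0"})
print(response.status_code)    # HTTP status code
print(len(response.text))      # length of the decoded page text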

Ways to download a web page with urllib2:

1. The simplest method:

Pass the URL straight to urlopen: urllib2.urlopen(url)
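A minimal sketch of this method (Python 2; the URL is only an example):

import urllib2

# Simplest download: pass the URL straight to urlopen
response = urllib2.urlopen("http://www.baidu.com/")
print response.getcode()      # HTTP status code
print len(response.read())    # length of the page content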

2. Adding data and an HTTP header


import urllib2
import urllib

# Create a Request object for the target URL
request = urllib2.Request(url)

# Add form data (add_data expects a single url-encoded string)
request.add_data(urllib.urlencode({'a': '1'}))
# Add an HTTP header
request.add_header('User-Agent', 'Mozilla/5.0')

# Send the request and get the response
response = urllib2.urlopen(request)

3. Adding handlers for special scenarios (e.g. cookie handling)


 

import urllib2, cookielib

# Create a cookie container
cj = cookielib.CookieJar()

# Build an opener that handles cookies
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

# Install the opener so urllib2 uses it globally
urllib2.install_opener(opener)

# Access the page with cookie-enabled urllib2
response = urllib2.urlopen("http://www.baidu.com/")

In Python 3, urllib2 becomes urllib.request, and cookielib becomes http.cookiejar (from http import cookiejar).
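A rough Python 3 equivalent of the three methods above, using the renamed standard-library modules (a sketch, not part of the original code):

# Python 3 sketch of the same three methods
from urllib import request
from http import cookiejar

url = "http://www.baidu.com/"

# Method 1: download directly
response1 = request.urlopen(url)
print(response1.getcode())

# Method 2: add an HTTP header via a Request object
req = request.Request(url)
req.add_header("User-Agent", "Mozilla/5.0")
response2 = request.urlopen(req)
print(response2.getcode())

# Method 3: install an opener that carries cookies
cj = cookiejar.CookieJar()
opener = request.build_opener(request.HTTPCookieProcessor(cj))
request.install_opener(opener)
response3 = request.urlopen(url)
print(response3.getcode())

The combined Python 2 demo below exercises all three methods: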

#coding:utf8
import urllib2
import cookielib

url = "https://www.baidu.com"

print 'Method 1'
response1 = urllib2.urlopen(url)
print response1.getcode()
print len(response1.read())

print 'Method 2'
request = urllib2.Request(url)
request.add_header("User-Agent", "Mozilla/5.0")
response2 = urllib2.urlopen(request)
print response2.getcode()
print len(response2.read())

print 'Method 3'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)
response3 = urllib2.urlopen(url)
print response3.getcode()
print cj
print len(response3.read())

 

Web page parser: a tool for extracting valuable data from a web page.


What web page parsers are available in Python? Common options include regular expressions (the re module), the built-in html.parser, BeautifulSoup, and lxml.


Beautiful Soup is a third-party Python library for extracting data from HTML or XML.

#coding:utf8
import re
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser', from_encoding='utf-8')

print ("Get all links")
links = soup.find_all('a')
for link in links:
    print (link.name, link['href'], link.get_text())

print ("Get the lacie link")
link_node = soup.find('a', href='http://example.com/lacie')
print(link_node.name, link_node['href'], link_node.get_text())

print ("Get a link by regex match")
link_node = soup.find('a', href=re.compile(r"ill"))
print(link_node.name, link_node['href'], link_node.get_text())

print ("Get the p tag")
p_node = soup.find('p', class_="title")
print (p_node.name, p_node.get_text())

A Python-based crawler project: use a URL manager, a web page downloader, and a web page parser to crawl data from 1,000 Baidu Baike pages.

Project source code: https://github.com/strawqqhat/baike_spider
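The repository splits the crawler into separate modules; the following is only a self-contained sketch of the same idea (the root URL, the /item/ link pattern, and the page limit are illustrative assumptions, not taken from the repository):

# coding: utf8
# Sketch of the architecture: URL manager (two sets) + downloader (urllib2) + parser (BeautifulSoup)
import re
import urllib2
import urlparse
from bs4 import BeautifulSoup

def craw(root_url, max_pages=1000):
    new_urls = set([root_url])    # URL manager: URLs waiting to be crawled
    old_urls = set()              # URL manager: URLs already crawled
    count = 0
    while new_urls and count < max_pages:
        url = new_urls.pop()
        old_urls.add(url)
        try:
            html = urllib2.urlopen(url).read()        # downloader
        except Exception:
            print 'craw failed:', url
            continue
        soup = BeautifulSoup(html, 'html.parser')     # parser
        title_node = soup.find('title')
        if title_node is not None:
            print count, url, title_node.get_text().encode('utf-8')
        # collect in-site item links for later crawling (the pattern is an assumption)
        for link in soup.find_all('a', href=re.compile(r'/item/')):
            full_url = urlparse.urljoin(url, link['href'])
            if full_url not in old_urls:
                new_urls.add(full_url)
        count += 1

if __name__ == '__main__':
    craw('https://baike.baidu.com/item/Python', max_pages=10)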
