Course Assignment
- Use the URL chosen in the second course assignment
- Crawl every element on that page that can be crawled; at a minimum, crawl the main article content
- Optionally, try crawling with lxml
Assignment URL
http://www.jianshu.com/p/e0bd6bfad10b
Web Crawling
Crawling was implemented with both Beautiful Soup and lxml:
- All links on the main page are collected and written to the _all_links.txt file
- Each collected link is then crawled to extract the article title and body, and the body is saved to a file named after the title
- For links with no title or no body content, the URL is appended to Title_Is_None.txt
Final output screenshot:
Framework structure:
spidercomm.py: shared helper functions, such as writing files and downloading pages, that are independent of the crawling library actually used
spiderbs4.py: the functions needed by the BeautifulSoup implementation, such as collecting links and extracting page content
spiderlxml.py: the functions needed by the lxml implementation, such as collecting links and extracting page content
bs4.py: the crawler client implemented with BeautifulSoup
lxml.py: the crawler client implemented with lxml
Resulting folder structure screenshot:
BeautifulSoup Implementation
BeautifulSoup implementation code
1 Import the spidercomm and BeautifulSoup modules
# -*- coding: utf-8 -*-
import spidercomm as common
from bs4 import BeautifulSoup
2 Use BeautifulSoup to find all <a> tags; the return value is consumed by the link-collection function in the spidercomm module.
# get all tags a from a single url
def a_links(url_seed, attrs={}):
    html = common.download(url_seed)
    soup = BeautifulSoup(html, 'html.parser')
    alinks = soup.find_all('a', attrs)
    return alinks
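As a quick check, a_links can be called on its own; the attrs filter below is only an illustrative assumption and is not required by the assignment:
# hypothetical usage: collect only the 'a' tags whose class is 'title'
title_links = a_links('http://www.jianshu.com/p/e0bd6bfad10b', {'class': 'title'})
print "found %d links" % len(title_links)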
3 Use BeautifulSoup to crawl the page at a given url, focusing on the title and the article body.
def crawled_page(crawled_url):
    html = common.download(crawled_url)
    soup = BeautifulSoup(html, 'html.parser')
    title = soup.find('h1', {'class': 'title'})
    if title is None:
        return "Title_Is_None", crawled_url
    content = soup.find('div', {'class': 'show-content'})
    if content is None:
        return title.text, "Content_Is_None"
    return title.text, content.text
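crawled_page returns a (title, content) pair and falls back to sentinel strings when either piece is missing; a minimal, hypothetical call could look like this:
title, content = crawled_page('http://www.jianshu.com/p/e0bd6bfad10b')
if title == "Title_Is_None":
    print "no title found, url was %s" % content  # the url is returned in place of content
else:
    print "crawled '%s' with %d characters of body text" % (title.encode('utf-8'), len(content))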
4 Determine whether the article is split across multiple pages
def isMultiPaged(url):
    html_page1 = common.download(url % 1)
    soup = BeautifulSoup(html_page1, 'html.parser')
    body1 = soup.find('body')
    body1.script.decompose()
    html_page2 = common.download(url % 2)
    if html_page2 is None:
        return False
    soup = BeautifulSoup(html_page2, 'html.parser')
    body2 = soup.find('body')
    # print [x.extract() for x in body2.findAll('script')]
    body2.script.decompose()
    if str(body1) == str(body2):
        return False
    else:
        return True
5 Get the total number of pages
def getNumberOfPages(url):
    count = 1
    if isMultiPaged(url):
        while True:
            # format a fresh page url each round so the '%d' template in url stays intact
            page_url = url % count
            # print "url: %s" % page_url
            count += 1
            html = common.download(page_url)
            if html is None:
                break
    return count
BeautifulSoup client code
1 Import the spidercomm and spiderbs4 modules
# -*- coding: utf-8 -*-
import os
import spiderbs4 as bs4
import spidercomm as common
2 Set up the required variables and create the output directory
# set up
url_root = 'http://www.jianshu.com/'
url_seed = 'http://www.jianshu.com/p/e0bd6bfad10b?page=%d'
spider_path = 'spider_res/bs4/'
if not os.path.exists(spider_path):
    os.makedirs(spider_path)
3 Call isMultiPaged and getNumberOfPages from the spiderbs4 module to check whether the target page is paginated and to get the page count
# get total number of pages
print "url %s has multiple pages? %r" % (url_seed,bs4.isMultiPaged(url_seed))
page_count = bs4.getNumberOfPages(url_seed)
print "page_count is %s" % page_count
4 Call to_be_crawled_links from the spidercomm module to collect all links on the target page; if the article is paginated, collect the links of each page and merge them, then write all of them to the _all_links.txt file
# get all links to be crawled and write to file
links_to_be_crawled = set()
for count in range(page_count):
    links = common.to_be_crawled_links(bs4.a_links(url_seed % count), count, url_root, url_seed)
    print "Total number of all links is %d" % len(links)
    links_to_be_crawled = links_to_be_crawled | links
with open(spider_path + "_all_links.txt", 'w+') as file:
    file.write("\n".join(unicode(link).encode('utf-8', errors='ignore') for link in links_to_be_crawled))
5 Loop over the collected links, crawl each link's page title and article body, and write the body to a file named after the title; if there is no title, the url is appended to Title_Is_None.txt instead.
# capture desired contents from crawled_urls
if len(links_to_be_crawled) >= 1:
    for link in links_to_be_crawled:
        title, content = bs4.crawled_page(link)
        # print "title is %s" % title
        file_name = spider_path + title + '.txt'
        common.writePage(file_name, content)
lxml Implementation
lxml implementation code
1 Import the spidercomm, urlparse, and lxml modules
# -*- coding: utf-8 -*-
import spidercomm as common
import urlparse
from lxml import etree
2 Use lxml to find all <a> tags; the return value is consumed by the link-collection function in the spidercomm module.
# get all tags a from a single url
def a_links(url_seed, attrs={}):
    html = common.download(url_seed)
    tree = etree.HTML(html)
    alinks = tree.xpath("//a")
    return alinks
3 Use lxml to crawl the page at a given url, focusing on the title and the article body.
def crawled_page(crawled_url):
    html = common.download(crawled_url)
    tree = etree.HTML(html)
    title = tree.xpath("/html/body/div[1]/div[1]/div[1]/h1")
    if title is None or len(title) == 0:
        return "Title_Is_None", crawled_url
    contents = tree.xpath("/html/body/div[1]/div[1]/div[1]/div[2]/*")
    if contents is None or len(contents) == 0:
        # title is a list of matched elements, so take the first match
        return title[0].text, "Content_Is_None"
    content = ''
    for x in contents:
        if x.text is not None:
            content = content + x.xpath('string()')
    return title[0].text, content
4 Determine whether the article is split across multiple pages
def isMultiPaged(url):
    html_page1 = common.download(url % 1)
    tree = etree.HTML(html_page1)
    xp1 = tree.xpath("/html/body/div[1]/div[1]/div[1]/div[2]/*")
    # elements without direct text have x.text == None, so fall back to ''
    xp1 = ",".join((x.text or '') for x in xp1)
    html_page2 = common.download(url % 2)
    if html_page2 is None:
        return False
    tree = etree.HTML(html_page2)
    xp2 = tree.xpath("/html/body/div[1]/div[1]/div[1]/div[2]/*")
    xp2 = ",".join((x.text or '') for x in xp2)
    if xp1 == xp2:
        return False
    else:
        return True
5 Get the total number of pages
def getNumberOfPages(url):
    count = 1
    if isMultiPaged(url):
        while True:
            # format a fresh page url each round so the '%d' template in url stays intact
            page_url = url % count
            print "url: %s" % page_url
            count += 1
            html = common.download(page_url)
            if html is None:
                break
    return count
lxml client code
This client is the same as the bs4 client above, except that it imports and calls the spiderlxml module instead; a minimal sketch follows.
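The sketch below simply mirrors the bs4 client shown earlier with spiderlxml swapped in; the alias lx and the output directory spider_res/lxml/ are my own assumptions.
# -*- coding: utf-8 -*-
import os
import spiderlxml as lx
import spidercomm as common

# set up (spider_res/lxml/ is an assumed output directory for the lxml run)
url_root = 'http://www.jianshu.com/'
url_seed = 'http://www.jianshu.com/p/e0bd6bfad10b?page=%d'
spider_path = 'spider_res/lxml/'
if not os.path.exists(spider_path):
    os.makedirs(spider_path)

# get total number of pages
page_count = lx.getNumberOfPages(url_seed)

# get all links to be crawled and write to file
links_to_be_crawled = set()
for count in range(page_count):
    links = common.to_be_crawled_links(lx.a_links(url_seed % count), count, url_root, url_seed)
    links_to_be_crawled = links_to_be_crawled | links
with open(spider_path + "_all_links.txt", 'w+') as file:
    file.write("\n".join(unicode(link).encode('utf-8', errors='ignore') for link in links_to_be_crawled))

# capture desired contents from crawled_urls
for link in links_to_be_crawled:
    title, content = lx.crawled_page(link)
    common.writePage(spider_path + title + '.txt', content)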
Shared helper code
1 Import the required modules
# -*- coding: utf-8 -*-
import urllib2
import time
import urlparse
2 Download a page
def download(url, retry=2):
    # print "downloading %s" % url
    header = {
        'User-Agent': 'Mozilla/5.0'
    }
    try:
        req = urllib2.Request(url, headers=header)
        html = urllib2.urlopen(req).read()
    except urllib2.HTTPError as e:
        print "download error: %s" % e.reason
        html = None
        # only retry on server-side (5xx) errors, pausing briefly before each attempt
        if retry > 0:
            if hasattr(e, 'code') and 500 <= e.code < 600:
                print e.code
                time.sleep(1)
                return download(url, retry-1)
    return html
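A quick illustrative call, using the assignment URL; download returns None when the page cannot be fetched even after the retries:
html = download('http://www.jianshu.com/p/e0bd6bfad10b')
if html is None:
    print "download failed"
else:
    print "downloaded %d bytes" % len(html)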
3 Write the crawled content to a file
def writePage(filename, content):
    content = unicode(content).encode('utf-8', errors='ignore') + "\n"
    if 'Title_Is_None.txt' in filename:
        # urls without a title are appended to one shared file
        with open(filename, 'a') as file:
            file.write(content)
    else:
        with open(filename, 'wb+') as file:
            file.write(content)
4 Collect all outbound links from a single url
# get urls to be crawled
# :param alinks: list of tag 'a' elements, dependent on the implementation, e.g. bs4 or lxml
def to_be_crawled_link(alinks, url_seed, url_root):
    links_to_be_crawled = set()
    if len(alinks) == 0:
        return links_to_be_crawled
    print "len of alinks is %d" % len(alinks)
    for link in alinks:
        link = link.get('href')
        if link is not None and 'javascript:' not in link:
            if link not in links_to_be_crawled:
                realUrl = urlparse.urljoin(url_root, link)
                links_to_be_crawled.add(realUrl)
    return links_to_be_crawled
5 Collect all outbound links from the specified page of a paginated article
def to_be_crawled_links(alinks, count, url_root, url_seed):
    url = url_seed % count
    # the current page url is passed as the urljoin base
    links = to_be_crawled_link(alinks, url_root, url)
    links.add(url)
    return links
Closing remarks:
The implementation is still rough and the captured content is fairly simple; I hope to discuss it with everyone and keep improving the framework.
The page content produced by the two implementations is roughly the same, with some differences in formatting; the bs4 output looks a bit better, probably because my code does not handle the lxml output well enough, which I still need to look into. There is plenty left to improve, but I will submit the assignment first.