View the HTML of http://blog.csdn.net/bagboy_taobao_com/article/month/2013/10 in a browser and save it as list.html (it must be saved as UTF-8, otherwise the text gets garbled). Double-clicking list.html opens it and it displays correctly. OK, so it can be opened as plain text and analyzed.
<span class="link_title"><a href="/bagboy_taobao_com/article/details/13092655">
<span class="link_title"><a href="/bagboy_taobao_com/article/details/13092605">
<span class="link_title"><a href="/bagboy_taobao_com/article/details/13092535">
......
<span class="link_title"><a href="/bagboy_taobao_com/article/details/12646185">
These HTML fragments carry the URLs of the articles listed on http://blog.csdn.net/bagboy_taobao_com/article/month/2013/10
1. The pattern to match is a span tag with class="link_title", followed by an href of the form "/bagboy_taobao_com/article/details/...".
2. Then assemble the complete URL yourself, for example: http://blog.csdn.net/bagboy_taobao_com/article/details/13092655
3. Other tags also contain /bagboy_taobao_com/article/details/13092655 (it appears in 3 places in total), so duplicates have to be avoided.
(You can avoid the duplicates while parsing the HTML, or remove them after parsing is finished.)
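As an illustration of the second option, here is a minimal sketch of removing duplicates after parsing while keeping the original order; the list article_links is a made-up stand-in for the list that the parser below fills in, not part of the original script.

# Minimal sketch: de-duplicate a parsed list of article paths while
# preserving their order ("article_links" is a hypothetical example).
article_links = [
    "/bagboy_taobao_com/article/details/13092655",
    "/bagboy_taobao_com/article/details/13092605",
    "/bagboy_taobao_com/article/details/13092655",   # the same path seen again
]

seen = set()
unique_links = []
for link in article_links:
    if link not in seen:            # keep only the first occurrence
        seen.add(link)
        unique_links.append(link)

# Assemble the full article URLs from the relative paths.
full_urls = ["http://blog.csdn.net" + link for link in unique_links]
print(full_urls)

The same effect can be had during parsing by checking whether the href is already in the list before appending it.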
#!/usr/bin/env python
# Python 2.7.3
# Fetch the list of blog articles
# File: GetArticleList.py

import HTMLParser
import httplib
# from HTMLParser import HTMLParser

class CHYGetArticleList(HTMLParser.HTMLParser):
    def __init__(self, list):
        ''' '''
        HTMLParser.HTMLParser.__init__(self)   # call the parent constructor (self is passed explicitly here)
        self.ok = False          # True once a <span class="link_title"> has just been seen
        self.list = list         # collected article paths

    def handle_starttag(self, tag, attrs):
        # A <span class="link_title"> marks the start of an article link.
        if False == self.ok and "span" == tag and 1 == len(attrs) and "class" == attrs[0][0] and "link_title" == attrs[0][1]:
            self.ok = True
        # The <a> that follows it carries the article's relative URL.
        elif True == self.ok and "a" == tag and 1 == len(attrs) and "href" == attrs[0][0] and attrs[0][1].startswith("/bagboy_taobao_com/article/details/"):
            self.ok = False
            self.list.append(attrs[0][1])

'''
# http://blog.csdn.net/bagboy_taobao_com/article/month/2013/10
# Test code
if __name__ == '__main__':
    conn = httplib.HTTPConnection("blog.csdn.net")
    # Pretend to be IE; otherwise CSDN refuses requests coming from Python
    user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
    headersP = {'User-Agent': user_agent}
    conn.request(method="GET", url="/bagboy_taobao_com/article/month/2013/10", headers=headersP)
    r1 = conn.getresponse()            # get the response
    htmlByte = r1.read()               # get the raw HTML bytes
    htmlStr = htmlByte.decode("utf8")  # decode as UTF-8, otherwise parsing goes wrong
    list = []
    my = CHYGetArticleList(list)
    my.feed(htmlStr)
    print(list)
'''
Notes:
1. HTMLParser works through the document sequentially. When it analyzes <span class="link_title"><a href="/bagboy_taobao_com/article/details/13092655">, at the moment it enters the <span> it does not yet know whether an <a> follows; put the other way round, by the time it enters the <a>, the <span> has already been processed. So on entering the <a> you need a flag that records whether a <span class="link_title"> has just been parsed, in order to recognize the pair <span class="link_title"><a href="/bagboy_taobao_com/article/details/13092655">, because the HTML as a whole contains plenty of other <a> tags.
The article list of a category or an archive may span several pages, for example:
http://blog.csdn.net/bagboy_taobao_com/article/month/2010/05
Figure 2
The HTML for the pagination looks like this:
<div id="papelist" class="pagelist">
  <span> 25条数据 共2页</span>
  <strong>1</strong>
  <a href="/cay22/article/month/2010/05/2">2</a>
  <a href="/cay22/article/month/2010/05/2">下一页</a>
  <a href="/cay22/article/month/2010/05/2">尾页</a>
</div>
Pagination normally appears once the article count exceeds 20; this will be dealt with later.
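Pagination is not handled by the script above. As a rough idea for later, here is a hedged sketch that reads the total page count out of the <span> inside <div id="papelist">; the class name CHYGetPageCount and the way the numbers are pulled out are assumptions made for this illustration, not the final solution.

# -*- coding: utf-8 -*-
# Sketch only: read the total number of pages from the pagination div.
import re
import HTMLParser

class CHYGetPageCount(HTMLParser.HTMLParser):
    def __init__(self):
        HTMLParser.HTMLParser.__init__(self)
        self.in_div = False     # inside <div id="papelist">
        self.in_span = False    # inside the <span> holding the counts
        self.page_count = 1     # default when there is no pagination div

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "div" == tag and attrs.get("id") == "papelist":
            self.in_div = True
        elif self.in_div and "span" == tag:
            self.in_span = True

    def handle_data(self, data):
        if self.in_span:
            # The span text looks like " 25条数据 共2页": the first number is the
            # article count, the last one is the total number of pages.
            nums = re.findall(r"\d+", data)
            if nums:
                self.page_count = int(nums[-1])

    def handle_endtag(self, tag):
        if "span" == tag:
            self.in_span = False
        elif "div" == tag:
            self.in_div = False

Judging from the href values in the snippet above, the remaining pages could then be requested as /bagboy_taobao_com/article/month/2013/10/2, /3 and so on, and fed to the same CHYGetArticleList parser.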
View view-source:http://blog.csdn.net/bagboy_taobao_com/article/details/5582868 in a browser and save the HTML as article.html (it must be saved as UTF-8, otherwise the text gets garbled). Double-clicking article.html opens it and it displays correctly. OK, so it can be opened as plain text and analyzed.
1. The article title
<span class="link_title"><a href="/bagboy_taobao_com/article/details/5582868"> 递归目录的所有文件 </a></span>
This one is simple.
2. The article content
<div id="article_content" class="article_content"> ...article body... </div>
The article body may contain further nested tags, so how to collect the text needs some thought.
Here, any other tags, escape/entity characters and so on inside the title and the article body are ignored; in other words the extracted content is plain text with no HTML formatting at all.
#!/usr/bin/env python
# coding=utf-8  (default source encoding is utf-8)
# Python 2.7.3
# Fetch a single blog article
# File: GetArticle.py

import HTMLParser
import httplib
# from HTMLParser import HTMLParser
import codecs

class CHYGetArticle(HTMLParser.HTMLParser):
    def __init__(self):
        ''' '''
        HTMLParser.HTMLParser.__init__(self)   # call the parent constructor (self is passed explicitly here)
        self.title_ok = 0        # 0 = not seen yet, 1 = inside the title span, 2 = done
        self.content_ok = 0      # 0 = not seen yet, 1 = inside the article div, 2 = done
        self.div_depth = 0       # nesting depth of <div> tags inside the article body
        self.title = ""
        self.comment = ""
        self.f = open('data.txt', 'w')   # output file (try opening it in a browser)

    def __del__(self):
        self.f.close()

    def handle_starttag(self, tag, attrs):
        if 0 == self.title_ok and "span" == tag and 1 == len(attrs) and "class" == attrs[0][0] and "link_title" == attrs[0][1]:
            self.title_ok = 1
        elif 0 == self.content_ok and "div" == tag and 2 == len(attrs) and "article_content" == attrs[0][1]:
            self.content_ok = 1
        elif 1 == self.content_ok and "div" == tag:
            self.div_depth += 1          # a nested <div> inside the article body

    def handle_data(self, data):
        if 1 == self.title_ok:
            self.title = data
            self.title_ok = 2
        elif 1 == self.content_ok:
            self.comment = self.comment + data

    def handle_endtag(self, tag):
        if 1 == self.content_ok and "div" == tag:
            if self.div_depth > 0:
                self.div_depth -= 1      # closing a nested <div>, not the article one
            else:
                self.content_ok = 2      # closing </div> of article_content

'''
# http://blog.csdn.net/bagboy_taobao_com/article/details/5582868
# Test code
if __name__ == '__main__':
    conn = httplib.HTTPConnection("blog.csdn.net")
    # Pretend to be IE; otherwise CSDN refuses requests coming from Python
    user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
    headersP = {'User-Agent': user_agent}
    conn.request(method="GET", url="/bagboy_taobao_com/article/details/5582868", headers=headersP)
    r1 = conn.getresponse()            # get the response
    htmlByte = r1.read()               # get the raw HTML bytes
    htmlStr = htmlByte.decode("utf8")  # decode as UTF-8, otherwise parsing goes wrong
    my = CHYGetArticle()
    my.feed(htmlStr)
    print >> my.f, my.title.encode("utf8")
    print >> my.f, my.comment.encode("utf8")
'''
Problems encountered:
1. The encodings are a mess: the file created by open() uses the default (ASCII) codec, while the HTML fed to HTMLParser first has to be decoded to UTF-8. It is confusing, and Python's handling of encodings deserves a dedicated look.
2. The extracted content has no layout at all, and it was unclear how to strip the whitespace from both ends of the title (a string operation).
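Both problems have a straightforward fix. A minimal sketch, assuming the title and comment values produced by CHYGetArticle above (not part of the original script): strip() removes the surrounding whitespace, and codecs.open writes the file as UTF-8 directly, so the manual .encode("utf8") calls are no longer needed.

# -*- coding: utf-8 -*-
# Minimal cleanup sketch for the two problems above; "title" and "comment"
# stand for the values collected by CHYGetArticle.
import codecs

title = u"  递归目录的所有文件  "            # example title with whitespace at both ends
comment = u"the article body as plain text"

clean_title = title.strip()                # strip() removes leading/trailing whitespace

# codecs.open returns a file object that accepts unicode and encodes it as
# UTF-8 on write, so no explicit .encode("utf8") is required.
f = codecs.open("data.txt", "w", encoding="utf-8")
f.write(clean_title + u"\n")
f.write(comment + u"\n")
f.close()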