Python: Scraping the novel 二号首长

二号首长 is, in my opinion, a pretty good officialdom novel. That said, I have already read the e-book and listened to the audiobook, so this post is purely an exercise in basic web-scraping skills.

A few small lessons worth sharing:

CSS selector

The chapter text all sits inside a div node of class "contentbox", so we use

contents = soup.select("div.contentbox")

Then call the get_text() method on each match to extract the text, and decode it afterwards.
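As a minimal sketch of this step (the sample HTML below is an assumption modeled on the site's layout, not a real page):

```python
from bs4 import BeautifulSoup

# Hypothetical snippet mimicking the site's markup: the chapter text
# sits inside a div of class "contentbox".
html = '<html><body><div class="contentbox"><p>第一章</p><p>正文内容</p></div></body></html>'
soup = BeautifulSoup(html, "html.parser")

# Same selector as in the article; select() returns a list of matching nodes.
contents = soup.select("div.contentbox")

# get_text() concatenates all text inside each node.
texts = [node.get_text() for node in contents]
print(texts)
```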

How to get the page title

requests has no ready-made method for this, so we use the following code to extract the page title:

from lxml.html import fromstring
import requests

first_chapter_content = requests.get(first_chapter_url)
tree = fromstring(first_chapter_content.content)
title = tree.findtext('.//title')
title = title.split(u'_官场言情_360小说网')[0]

The purpose of the split is simply to keep the title from getting too long, by trimming off the site's boilerplate suffix.
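The trimming itself is plain string work; here is a small sketch with a hypothetical title string (the chapter name is made up, the suffix is the site's real boilerplate):

```python
# Hypothetical raw <title> value as it would come back from the site.
raw_title = u'第一章 机会来了_官场言情_360小说网'

# split() on the boilerplate suffix; element [0] is the clean chapter title.
title = raw_title.split(u'_官场言情_360小说网')[0]
print(title)
```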

How to convert GBK content into readable text

Another post of mine already covers this in detail; in short:

content_of_first_chapter = []
for content in contents:
    content = content.get_text().encode('latin-1').decode('gbk').encode('utf-8').replace("牋牋", "")
    content_of_first_chapter.append(content)

The final replace strips two stray characters that show up after transcoding and look very out of place. My best guess (unverified) is that they are themselves mojibake: the page indents paragraphs with non-breaking-space bytes (0xA0, i.e. &nbsp;), and a pair of those bytes decodes in GBK as the character 牋.
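In Python 3 terms, the round trip looks like the sketch below (the sample bytes are an assumption: the GBK encoding of 你好, not actual site data). The idea is that requests guessed Latin-1 for the response, so re-encoding the garbled string as Latin-1 recovers the original bytes, which can then be decoded as GBK:

```python
# b'\xc4\xe3\xba\xc3' is '你好' encoded as GBK (a made-up sample).
raw_bytes = b'\xc4\xe3\xba\xc3'

# What you see when the bytes are wrongly decoded as Latin-1.
garbled = raw_bytes.decode('latin-1')

# Re-encode as Latin-1 to recover the bytes, then decode as GBK.
fixed = garbled.encode('latin-1').decode('gbk')
print(fixed)
```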

Writing the list to a text file

with open(filename, "wb") as f:
    for content in content_of_first_chapter:
        f.write("{}\n".format(content))

The with statement closes the file automatically, so no explicit f.close() is needed.
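A Python 3 sketch of the same step (the filename points at a temp file, and the list contents are made up for the example); opening in text mode with an explicit encoding sidesteps the bytes-vs-str pitfalls of "wb":

```python
import os
import tempfile

content_of_first_chapter = ['first line', 'second line']  # hypothetical chapter text
filename = os.path.join(tempfile.gettempdir(), 'chapter.txt')

# Text mode plus an explicit encoding; the with block closes the file itself.
with open(filename, 'w', encoding='utf-8') as f:
    for content in content_of_first_chapter:
        f.write('{}\n'.format(content))

# Read it back to confirm what was written.
with open(filename, encoding='utf-8') as f:
    data = f.read()
print(data)
```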

Full code

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# NOTE: this script targets Python 2 (print statement, reload(sys)).
import requests
import os
from bs4 import BeautifulSoup as BS
from lxml.html import fromstring
import sys

reload(sys)
sys.setdefaultencoding('utf-8')


sub_folder = os.path.join(os.getcwd(), "erhaoshouzhang")
if not os.path.exists(sub_folder):
    os.mkdir(sub_folder)

domain = 'http://www.zw360.com/zhangjie/2427/index.html'
new_domain = '/'.join(domain.split("/")[:-1])
web_data = requests.get(domain)

soup = BS(web_data.text, "lxml")
link_lists = soup.select('ul.chapter-list > li.chapter > a')

all_links = []
for link in link_lists:
    all_links.append(link.get('href'))

for link_of_each_chapter in all_links:
    real_link_of_each_chapter = new_domain + "/" + link_of_each_chapter
    print real_link_of_each_chapter
    each_chapter = requests.get(real_link_of_each_chapter)
    soup = BS(each_chapter.text, "lxml")

    # Get content of each chapter
    contents = soup.select("div.contentbox")
    content_of_each_chapter = []
    for content in contents:
        content = content.get_text().encode('latin-1').decode('gbk').encode('utf-8').replace("牋牋", "")
        content_of_each_chapter.append(content)

    # Get title of each chapter
    tree = fromstring(each_chapter.content)
    title = tree.findtext('.//title')
    title = title.split(u'_官场言情_360小说网')[0]
    
    filename = os.path.join(sub_folder, title + ".txt")
    print filename
    with open(filename, "wb") as f:
        for item in content_of_each_chapter:
            f.write("{}\n".format(item))
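One possible cleanup, as a Python 3 sketch: the script above splices chapter URLs together by slicing the index URL, which urljoin from the standard library handles directly (the href '123456.html' below is hypothetical):

```python
from urllib.parse import urljoin

index_url = 'http://www.zw360.com/zhangjie/2427/index.html'

# urljoin resolves a relative href against the page it came from,
# replacing the trailing 'index.html' segment.
chapter_url = urljoin(index_url, '123456.html')
print(chapter_url)
```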
