Scraping the full text of Romance of the Three Kingdoms with BeautifulSoup

Parsing data with bs4
    - The general idea of data parsing:
        - 1. Locate the target tags
        - 2. Extract the data stored in the tags and in their attributes
    - How bs4 parsing works:
        - 1. Instantiate a BeautifulSoup object and load the page source into it
        - 2. Call the BeautifulSoup object's attributes and methods to locate tags and extract data
    - Environment setup:
        - pip install beautifulsoup4   (the PyPI package is beautifulsoup4; it is imported as bs4)
        - pip install lxml
    - How to instantiate a BeautifulSoup object:
        - from bs4 import BeautifulSoup
        - Two ways to instantiate:
            - 1. Load a local HTML file into the object
                    fp = open('./test.html', 'r', encoding='utf-8')
                    soup = BeautifulSoup(fp, 'lxml')
            - 2. Load page source fetched from the internet into the object
                    page_text = response.text
                    soup = BeautifulSoup(page_text, 'lxml')
        - Attributes and methods provided for parsing (demonstrated in the sketch after these notes):
            - soup.tagName: returns the first tag named tagName in the document
            - soup.find():
                - find('tagName'): same as soup.tagName (e.g. soup.find('div') behaves like soup.div)
                - locating by attribute:
                    - soup.find('div', class_/id/attr='song')
            - soup.find_all('tagName'): returns all matching tags (as a list)
        - select:
            - select('any CSS selector (id, class, tag, ...)') returns a list.
            - hierarchical selectors:
                - soup.select('.tang > ul > li > a'): > denotes one direct level of nesting
                - soup.select('.tang > ul a'): a space denotes any number of levels
        - Getting the text between tags:
            - soup.a.text / soup.a.string / soup.a.get_text()
            - text / get_text(): returns all text inside a tag, including nested tags
            - string: returns only text that is a direct child of the tag
        - Getting a tag attribute's value:
            - soup.a['href']
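
A minimal sketch of the methods above, run against a small inline HTML string (the .tang/ul/li structure is a made-up example matching the notes, not a real page):

from bs4 import BeautifulSoup

html_doc = """
<html><body>
  <div class="tang">
    <ul>
      <li><a href="http://example.com/1">poem one</a></li>
      <li><a href="http://example.com/2">poem two</a></li>
    </ul>
  </div>
</body></html>
"""

soup = BeautifulSoup(html_doc, 'lxml')

print(soup.a)                               # first <a> in the document
print(soup.find('div', class_='tang'))      # locate a tag by attribute
print(soup.find_all('a'))                   # every <a>, returned as a list
print(soup.select('.tang > ul > li > a'))   # CSS selector, one level per >
print(soup.select('.tang > ul a'))          # space: any number of levels

a = soup.select('.tang > ul > li > a')[0]
print(a.text, a.get_text(), a.string)       # text content of the tag
print(a['href'])                            # value of the href attribute

The full scraping script: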
import time

import requests
from bs4 import BeautifulSoup   # lxml is used as the parser; it only needs to be installed, not imported


if __name__ == '__main__':
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'
    }
    url = 'https://www.shicimingju.com/book/sanguoyanyi.html'
    response = requests.get(url=url, headers=headers)

    # The response comes back garbled by default, so set the encoding to utf-8 (the page's charset)
    response.encoding = 'utf-8'
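    # An alternative worth knowing (an assumption, not in the original script):
    # requests can guess the charset from the body via response.apparent_encoding.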
    print(response.status_code)
    page_text = response.text
    soup = BeautifulSoup(page_text, 'lxml')
    a_list = soup.select('.book-mulu a')    # CSS selector: every chapter link under the .book-mulu container
    print(a_list)

    name_url_list = []  # list of dicts, each holding a chapter's detail-page url and its name
    for a in a_list:
        dic = {}
        dic['url'] = "https://www.shicimingju.com" + a['href']
        dic['name'] = a.string
        name_url_list.append(dic)
    print(name_url_list)
    fp = open("./三国演义.txt", 'w', encoding='utf-8')
    for dic in name_url_list:
      #  print("当前申请的url", dic['url'])
        response_detail = requests.get(url=dic['url'], headers=headers)
        response_detail.encoding = 'utf-8'    # set the encoding for the detail page too
        detail_text = response_detail.text
        detail_soup = BeautifulSoup(detail_text, 'lxml')
        div_tag = detail_soup.find('div', class_='chapter_content')
        detail = div_tag.text.strip()
        print(detail)
        fp.write(dic['name'] + '\n')   # chapter title on its own line
        fp.write(detail)
        fp.write("\n\n")

        time.sleep(2)   # pause between requests to avoid hammering the server
    fp.close()
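
A quick sanity check on the output (a hypothetical follow-up, not part of the original script):

with open('./三国演义.txt', 'r', encoding='utf-8') as f:
    print(f.read(200))   # should begin with the first chapter's title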
