Python crawler study group: Task 2

Task 2.1: Learning BeautifulSoup

English vocabulary
parser n. a program that analyzes text into its syntactic parts
prettify v. to beautify; to make pretty
sibling n. a brother or sister; [biology] of the same family or genus; [anthropology] a member of the same clan

Install the BeautifulSoup library from the cmd command-line window:

pip install beautifulsoup4
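To confirm the install worked, you can import the package and print its version (a quick sanity check, not part of the original notes):

```python
# Hypothetical post-install check: if this runs without an ImportError,
# beautifulsoup4 is available.
import bs4

print(bs4.__version__)        # prints the installed beautifulsoup4 version
```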

How to use BeautifulSoup:

from bs4 import BeautifulSoup
soup = BeautifulSoup('<p>data</p>', 'html.parser')

A small installation test for the BeautifulSoup library
Demo HTML page: http://python123.io/ws/demo.html

>>> import requests
>>> r = requests.get("http://python123.io/ws/demo.html")
>>> demo = r.text
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(demo, "html.parser")
>>> print(soup.prettify())
<html>
 <head>
  <title>
   This is a python demo page
  </title>
 </head>
 <body>
  <p class="title">
   <b>
    The demo python introduces several python courses.
   </b>
  </p>
  <p class="course">
   Python is a wonderful general-purpose programming language. You can learn Python from novice to professional by tracking the following courses:
   <a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">
    Basic Python
   </a>
   and
   <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">
    Advanced Python
   </a>
   .
  </p>
 </body>
</html>
>>>

2.1.1 Basic elements of the BeautifulSoup library

HTML document <==> tag tree <==> BeautifulSoup class
That is, a BeautifulSoup object corresponds to the entire content of an HTML/XML document.

The BeautifulSoup class has 5 kinds of basic elements:

  1. Tag
    A tag, the most basic unit of information; its start and end are marked with <...> and </...>
  2. Name
    The tag's name; the name of <p>...</p> is 'p'. Access: <tag>.name
  3. Attributes
    The tag's attributes, organized as a dictionary. Access: <tag>.attrs
  4. NavigableString
    The non-attribute string inside a tag, i.e. the string between <>...</>. Access: <tag>.string
  5. Comment
    The comment portion of a string inside a tag, a special Comment type
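All five element types can be seen on a tiny made-up snippet (the markup below is just an illustration, not the demo page):

```python
from bs4 import BeautifulSoup, Comment

# Hypothetical one-line document containing a tag, an attribute,
# a plain string, and an HTML comment.
demo = '<p class="title"><b>Hello</b><!-- a comment --></p>'
soup = BeautifulSoup(demo, "html.parser")

print(soup.p.name)               # Name        -> 'p'
print(soup.p.attrs)              # Attributes  -> {'class': ['title']}
print(soup.b.string)             # NavigableString -> 'Hello'

# The second child of <p> is the HTML comment, returned as the
# special Comment type (a subclass of NavigableString).
comment = soup.p.contents[1]
print(type(comment).__name__)    # -> 'Comment'
```

Note that a Comment prints without its `<!-- -->` markers, which is why bs4 gives it its own type: checking `isinstance(x, Comment)` is the way to tell comments apart from ordinary strings.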

Parallel traversal of the tag tree (upward and downward traversal omitted here):

for sibling in soup.a.next_siblings:
     print(sibling)   # iterate over the following sibling nodes
for sibling in soup.a.previous_siblings:
     print(sibling)   # iterate over the preceding sibling nodes
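Since the notes skip upward and downward traversal, here is a minimal sketch of all three directions on a hypothetical two-link snippet (the same attributes work on the demo page's soup):

```python
from bs4 import BeautifulSoup

# Invented snippet: a <p> containing two <a> siblings.
soup = BeautifulSoup('<p><a id="a1">one</a><a id="a2">two</a></p>',
                     'html.parser')

# Downward traversal: direct children of a tag
for child in soup.p.children:
    print(child.get_text())          # 'one', then 'two'

# Upward traversal: all ancestors of a tag
for parent in soup.a.parents:
    print(parent.name)               # 'p', then '[document]'

# Parallel traversal: siblings that follow the first <a>
for sibling in soup.a.next_siblings:
    print(sibling.get_text())        # 'two'
```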

Question: how can we display the content in a more "friendly" way?
--- with bs4's prettify() method

>>> print(soup.a.prettify())
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">
 Basic Python
</a>

2.1.2 A practical BeautifulSoup case

Using BeautifulSoup to extract the replies from a DXY (丁香园) forum thread

  1. Visit the target site in a browser and inspect the tag that holds the target content

Target URL: http://www.dxy.cn/bbs/thread/626626#626626

Open it in Chrome and press F12 to see the page structure and the tag holding the reply content (screenshot omitted).

  2. Extract the reply content
    The comment text we need sits under the <td class="postbody"> tag; fetch it with BeautifulSoup:

content = data.find("td", class_="postbody").text

Reference code:

import urllib.request
from bs4 import BeautifulSoup as bs

def main():
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0"
    }
    url = 'http://www.dxy.cn/bbs/thread/626626'
    request = urllib.request.Request(url, headers=headers)
    response = urllib.request.urlopen(request).read().decode("utf-8")
    html = bs(response, 'lxml')  # needs the lxml package; 'html.parser' also works
    getItem(html)

def getItem(html):
    datas = []  # holds the (username, comment) pairs we collect
    for data in html.find_all("tbody"):
        try:
            userid = data.find("div", class_="auth").get_text(strip=True)
            print(userid)
            content = data.find("td", class_="postbody").get_text(strip=True)
            print(content)
            datas.append((userid, content))
        except AttributeError:  # skip rows that have no author/body block
            continue
    print(datas)

if __name__ == '__main__':
    main()

Reference: https://blog.csdn.net/wwq114/article/details/88085875
Honestly, I didn't fully understand it... I still need to study this seriously.

Task 2.2: Learning XPath

The homework is due and there's no time left to learn this; to be completed...
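Until that section is written, here is a minimal taste of XPath syntax using the standard library's xml.etree.ElementTree, which supports a limited XPath subset (the `<post>`/`<author>` document below is invented for illustration; full XPath 1.0 on real HTML pages usually goes through the third-party lxml package instead):

```python
import xml.etree.ElementTree as ET

# Invented XML standing in for a forum thread: two posts, each with
# an author and a body.
doc = ET.fromstring(
    '<root>'
    '<post><author>u1</author><body>hello</body></post>'
    '<post><author>u2</author><body>world</body></post>'
    '</root>'
)

# './/post/body' selects every <body> under any <post>, at any depth.
for body in doc.findall('.//post/body'):
    print(body.text)                                 # hello, world

# A predicate: the <body> of the post whose <author> text is 'u2'.
print(doc.find(".//post[author='u2']/body").text)    # world
```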
