The BeautifulSoup Library Explained

This article presents teacher Cui Qingcai's tutorial on the BeautifulSoup library for Python web scraping, covering its basic principles and key theoretical points.

The article is about 1,200 words long, with a suggested reading time of 10 minutes; it aims to combine theory with practice.

If you find the article dry, or are reading on a computer, you can click "Read the original" to jump to the CSDN page.


Contents:

1. What is BeautifulSoup

2. Installation

3. BeautifulSoup usage in detail




1. What is BeautifulSoup

A flexible and convenient web-page parsing library that processes documents efficiently and supports multiple parsers. With it you can extract information from a web page without writing any regular expressions.
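As a minimal sketch of that claim, here is a tiny made-up page picked apart without a single regular expression (the document and names are invented for illustration):

```python
# A minimal sketch: pulling the title and all links out of a page
# without writing any regular expressions.
from bs4 import BeautifulSoup

html = ('<html><head><title>Demo</title></head>'
        '<body><a href="/a">A</a> <a href="/b">B</a></body></html>')
soup = BeautifulSoup(html, 'html.parser')  # stdlib parser; no extra install
print(soup.title.string)                        # Demo
print([a['href'] for a in soup.find_all('a')])  # ['/a', '/b']
```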


2. Installation

pip install beautifulsoup4
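A quick sanity check after installing; note that the PyPI package is named beautifulsoup4, but the import name is bs4:

```python
# The package installs as beautifulsoup4 but imports as bs4.
import bs4
from bs4 import BeautifulSoup
print(bs4.__version__)  # e.g. 4.x, depending on what pip installed
```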

3. BeautifulSoup Usage in Detail

  1. Parsers


    [Figure 1: a table of the parsers BeautifulSoup supports — Python's built-in html.parser, lxml's HTML and XML parsers, and html5lib]
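The parser is chosen by the second argument to BeautifulSoup. A small sketch using the always-available stdlib parser (swap in 'lxml' or 'html5lib' once they are installed):

```python
from bs4 import BeautifulSoup

# html.parser ships with Python itself, so it always works;
# 'lxml' is faster and 'html5lib' is the most lenient, but both
# require a separate pip install.
soup = BeautifulSoup('<p class="title">Hello</p>', 'html.parser')
print(soup.p.string)    # Hello
print(soup.p['class'])  # ['title'] -- class attributes come back as a list
```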

  2. Basic usage


  3. from bs4 import BeautifulSoup

    html = '''
    <html><head><title>The Dormouse's story</title></head>
    <body>
    <p class="title" name="dromouse"><b>The Dormouse's story</b></p>
    <p class="story">Once upon a time there were three little sisters; and their names were
    <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
    <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
    <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
    and they lived at the bottom of a well.</p>
    <p class="story">...</p>
    '''
    soup = BeautifulSoup(html, 'lxml')
    print(soup.prettify())    # reformat the markup; missing closing tags are filled in
    print(soup.title.string)  # the page title
  4. Tag selectors


  5. # Selecting elements
    html = '''
    <html><head><title>The Dormouse's story</title></head>
    <body>
    <p class="title" name="dromouse"><b>The Dormouse's story</b></p>
    <p class="story">Once upon a time there were three little sisters; and their names were
    <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
    <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
    <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
    and they lived at the bottom of a well.</p>
    <p class="story">...</p>
    '''
    soup = BeautifulSoup(html, 'lxml')
    print(soup.title)        # <title>The Dormouse's story</title>
    print(type(soup.title))  # <class 'bs4.element.Tag'>
    print(soup.head)
    print(soup.p)            # when several <p> tags exist, only the first is returned

    # Get the tag name
    print(soup.title.name)       # title
    # Get attributes
    print(soup.p.attrs['name'])  # dromouse
    print(soup.p['name'])        # dromouse
    # Get the text content
    print(soup.p.string)         # The Dormouse's story
  6. Nested selection


  7. # Nested selection
    from bs4 import BeautifulSoup

    html = '''
    <html><head><title>The Dormouse's story</title></head>
    <body>
    <p class="title" name="dromouse"><b>The Dormouse's story</b></p>
    <p class="story">Once upon a time there were three little sisters; and their names were
    <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
    <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
    <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
    and they lived at the bottom of a well.</p>
    <p class="story">...</p>
    '''
    soup = BeautifulSoup(html, 'lxml')
    # Looking at the HTML, there is a containment relationship: head(title),
    # so the selection can be chained; the same works for body(p or a)
    print(soup.head.title.string)  # The Dormouse's story

    # Child and descendant nodes
    html2 = '''
    <html><body>
    <p class="story">Once upon a time there were three little sisters; and their names were
    <a href="http://example.com/elsie" class="sister" id="link1"><span>Elsie</span></a>
    <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
    <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
    and they lived at the bottom of a well.</p>
    <p class="story">...</p>
    </body></html>
    '''
    soup2 = BeautifulSoup(html2, 'lxml')
    print(soup2.p.contents)  # direct children as a list
    # The difference: children is an iterator, so it must be looped over
    print(soup2.p.children)
    for i, child in enumerate(soup2.p.children):
        print(i, child)
    print(soup2.p.descendants)  # all descendant nodes; also an iterator
    for i, child in enumerate(soup2.p.descendants):
        print(i, child)

    # Parent and ancestor nodes
    print(soup2.a.parent)
    print(list(enumerate(soup2.a.parents)))  # all ancestors (the direct parent included)

    # Sibling nodes (nodes at the same level)
    print(list(enumerate(soup2.a.next_siblings)))      # siblings after this node
    print(list(enumerate(soup2.a.previous_siblings)))  # siblings before this node
  8. Standard selectors


  9. find_all(name, attrs, recursive, text, **kwargs)

    # Searches the document by tag name, attributes, or text content


  10. # name
    from bs4 import BeautifulSoup

    html = '''
    <div class="panel">
        <div class="panel-heading">
            <h4>Hello</h4>
        </div>
        <div class="panel-body">
            <ul class="list" id="list-1" name="elements">
                <li class="element">Foo</li>
                <li class="element">Bar</li>
                <li class="element">Jay</li>
            </ul>
            <ul class="list list-small" id="list-2">
                <li class="element">Foo</li>
                <li class="element">Bar</li>
            </ul>
        </div>
    </div>
    '''
    soup = BeautifulSoup(html, 'lxml')
    print(soup.find_all('ul'))           # returns a list
    print(type(soup.find_all('ul')[0]))
    for ul in soup.find_all('ul'):
        print(ul.find_all('li'))         # nested, level-by-level searching

    # attrs usage
    print(soup.find_all(attrs={'id': 'list-1'}))
    print(soup.find_all(attrs={'name': 'elements'}))
    # class is a reserved word in Python, so the keyword argument is class_
    print(soup.find_all(class_="element"))


  11. # text: match nodes by their text content
    # soup is the panel/list document parsed above
    print(soup.find_all(text='Foo'))  # ['Foo', 'Foo']


  12. # find(name, attrs, recursive, text, **kwargs) returns the first matching
    # element, while find_all returns all matching elements
    # soup is the panel/list document parsed above
    print(soup.find('ul'))
    print(type(soup.find('ul')))
    print(soup.find('page'))  # None when no element matches

    find_parents() returns all ancestor nodes, while find_parent() returns the direct parent

    find_next_siblings() returns all later siblings, while find_next_sibling() returns the first later sibling

    find_previous_siblings() returns all earlier siblings, while find_previous_sibling() returns the first earlier sibling

    find_all_next() returns all qualifying nodes after the current node, while find_next() returns the first qualifying node

    find_all_previous() returns all qualifying nodes before the current node, while find_previous() returns the first qualifying node
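A small sketch of the single- versus plural-form methods above, run against a hypothetical three-paragraph document:

```python
from bs4 import BeautifulSoup

html = '<div><p id="a">one</p><p id="b">two</p><p id="c">three</p></div>'
soup = BeautifulSoup(html, 'html.parser')
b = soup.find('p', id='b')

print(b.find_parent().name)                  # div -- the nearest ancestor
print(b.find_next_sibling().get_text())      # three
print(b.find_previous_sibling().get_text())  # one
print([t.get_text() for t in b.find_all_next('p')])      # ['three']
print([t.get_text() for t in b.find_all_previous('p')])  # ['one']
```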

  13. CSS selectors

    Pass a CSS selector straight to select() to make a selection


  14. # CSS selectors
    from bs4 import BeautifulSoup

    # soup is the panel/list document parsed above
    print(soup.select('.panel .panel-heading'))  # classes take a leading '.'
    print(soup.select('ul li'))                  # selecting by tag name
    print(soup.select('#list-2 .element'))       # ids take a leading '#'
    print(type(soup.select('ul')[0]))
    for ul in soup.select('ul'):                 # equivalent to printing soup.select('ul li') directly
        print(ul.select('li'))
  15. Getting attributes


  16. # soup is the panel/list document parsed above
    for ul in soup.select('ul'):
        print(ul['id'])        # index with [] directly
        print(ul.attrs['id'])  # or go through attrs and then []
  17. Getting text content


  18. # soup is the panel/list document parsed above
    for li in soup.select('li'):
        print(li.get_text())

4. Summary

  1. Prefer the 'lxml' parser, falling back to html.parser when necessary

  2. Tag selectors offer only weak filtering, but they are fast

  3. Use find() and find_all() to query for a single match or for all matches

  4. If you are comfortable with CSS selectors, prefer select()

  5. Memorize the common methods for getting attributes and text content

