Getting Started with Web Scraping in Python 3: Using the BeautifulSoup Library

BeautifulSoup

BeautifulSoup is a flexible and convenient web-page parsing library. It is efficient, supports several parsers, and lets you extract information from a page without writing regular expressions.

Parsers

  • Python standard library: BeautifulSoup(markup, "html.parser"). Advantages: built into the standard library, moderate speed, reasonably tolerant of malformed documents. Disadvantage: poor tolerance of malformed documents in versions before Python 2.7.3 or 3.2.2.
  • lxml HTML parser: BeautifulSoup(markup, "lxml"). Advantages: very fast, tolerant of malformed documents. Disadvantage: requires the lxml C library.
  • lxml XML parser: BeautifulSoup(markup, "xml"). Advantages: very fast, the only XML parser of the group. Disadvantage: requires the lxml C library.
  • html5lib: BeautifulSoup(markup, "html5lib"). Advantages: best fault tolerance, parses documents the way a browser does, produces valid HTML5. Disadvantages: slow, and it is an external dependency.

Basic usage

html = """
The Dormouse's story

The Dormouse's story

Once upon a time there were three little sisters; and their names were , Lacie and Tillie; and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(soup.prettify()) # 自动格式化代码 print(soup.title.string)

Tag selectors

Selecting elements

html = """
The Dormouse's story

The Dormouse's story

Once upon a time there were three little sisters; and their names were , Lacie and Tillie; and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(soup.title) # 选择title标签 print(type(soup.title)) # print(soup.head) print(soup.p) # 选择段落标签,若有多个只返回第一个结果

Getting the tag name

html = """
The Dormouse's story

The Dormouse's story

Once upon a time there were three little sisters; and their names were , Lacie and Tillie; and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(soup.title.name) # 获取名称

Getting attributes

html = """
The Dormouse's story

The Dormouse's story

Once upon a time there were three little sisters; and their names were , Lacie and Tillie; and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(soup.p.attrs['name']) # 获取p标签的属性 print(soup.p['name']) # 同上

Getting text content

html = """
The Dormouse's story

The Dormouse's story

Once upon a time there were three little sisters; and their names were , Lacie and Tillie; and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(soup.p.string) # 获取内容 .string

Nested selection

html = """
The Dormouse's story

The Dormouse's story

Once upon a time there were three little sisters; and their names were , Lacie and Tillie; and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(soup.head.title.string)

Children and descendant nodes

html = """

    
        The Dormouse's story
    
    
        

Once upon a time there were three little sisters; and their names were Elsie Lacie and Tillie and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(soup.p.contents) # p标签的子孙
html = """

    
        The Dormouse's story
    
    
        

Once upon a time there were three little sisters; and their names were Elsie Lacie and Tillie and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(type(soup.p.children)) # 迭代器 print(soup.p.children) for i, child in enumerate(soup.p.children): print(i, child)
html = """

    
        The Dormouse's story
    
    
        

Once upon a time there were three little sisters; and their names were Elsie Lacie and Tillie and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(soup.p.descendants) # 获取所有的子孙节点 for i, child in enumerate(soup.p.descendants): print(i, child)

Parent and ancestor nodes

html = """

    
        The Dormouse's story
    
    
        

Once upon a time there were three little sisters; and their names were Elsie Lacie and Tillie and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(soup.a.parent) # 获取父节点
html = """

    
        The Dormouse's story
    
    
        

Once upon a time there were three little sisters; and their names were Elsie Lacie and Tillie and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(list(enumerate(soup.a.parents))) # 获取所有的祖先节点

Sibling nodes

html = """

    
        The Dormouse's story
    
    
        

Once upon a time there were three little sisters; and their names were Elsie Lacie and Tillie and they lived at the bottom of a well.

...

"""
from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'lxml') print(list(enumerate(soup.a.next_siblings))) # 下一个兄弟节点 print(list(enumerate(soup.a.previous_siblings))) # 上一个兄弟节点

Standard selectors

find_all(name, attrs, recursive, text, **kwargs)

Searches the document by tag name, attributes, or text content.

name

html='''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all('ul'))
print(type(soup.find_all('ul')[0]))   # each result is a bs4.element.Tag
html='''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for ul in soup.find_all('ul'):        # two levels of nesting: search by tag name inside each result
    print(ul.find_all('li'))

attrs

html='''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(attrs={'id': 'list-1'}))      # search by tag attributes
print(soup.find_all(attrs={'name': 'elements'}))
html='''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(id='list-1'))             # same as above, shorter
print(soup.find_all(class_='panel-heading'))  # class_ needs the underscore because class is a Python keyword

text

Select nodes by their text content.

html='''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(text='Foo'))   # useful for content matching, but it returns the matching strings, not the tags

find(name, attrs, recursive, text, **kwargs)

find() returns a single element (the first match, or None if there is no match); find_all() returns all matching elements.

html='''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find('ul'))
print(type(soup.find('ul')))
print(soup.find('page'))   # returns None when nothing matches

find_parents() and find_parent()

find_parents() returns all ancestor nodes; find_parent() returns the direct parent node.
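
A minimal sketch of the difference (the markup here is made up purely for illustration):

from bs4 import BeautifulSoup

html = '<html><body><div id="outer"><p><a href="#">link</a></p></div></body></html>'  # hypothetical snippet
soup = BeautifulSoup(html, 'lxml')
a = soup.a
print(a.find_parent())                         # the immediate parent: the <p> tag
print([tag.name for tag in a.find_parents()])  # names of every ancestor, innermost first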

find_next_siblings() and find_next_sibling()

find_next_siblings() returns all following sibling nodes; find_next_sibling() returns the first following sibling.
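
For example (again with a throwaway snippet):

from bs4 import BeautifulSoup

html = '<ul><li id="a">A</li><li id="b">B</li><li id="c">C</li></ul>'  # hypothetical snippet
soup = BeautifulSoup(html, 'lxml')
first = soup.find('li', id='a')
print(first.find_next_sibling())    # the very next sibling: <li id="b">
print(first.find_next_siblings())   # every later sibling: <li id="b"> and <li id="c">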

find_previous_siblings() and find_previous_sibling()

find_previous_siblings() returns all preceding sibling nodes; find_previous_sibling() returns the first preceding sibling.
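
And the mirror image, walking backwards from the last item (same throwaway markup):

from bs4 import BeautifulSoup

html = '<ul><li id="a">A</li><li id="b">B</li><li id="c">C</li></ul>'  # hypothetical snippet
soup = BeautifulSoup(html, 'lxml')
last = soup.find('li', id='c')
print(last.find_previous_sibling())    # the closest preceding sibling: <li id="b">
print(last.find_previous_siblings())   # all preceding siblings, closest first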

find_all_next() and find_next()

find_all_next() returns all matching nodes that come after the current node; find_next() returns the first matching node after it.
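
Unlike the sibling methods, these walk everything that appears after the node in document order. A small sketch (markup invented for illustration):

from bs4 import BeautifulSoup

html = '<div><h4>Hello</h4><p>intro</p><ul><li>Foo</li><li>Bar</li></ul></div>'  # hypothetical snippet
soup = BeautifulSoup(html, 'lxml')
h = soup.h4
print(h.find_next('li'))        # the first <li> that appears after the <h4>
print(h.find_all_next('li'))    # every <li> that appears after the <h4>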

find_all_previous() 和 find_previous()

find_all_previous() returns all matching nodes that come before the current node; find_previous() returns the first matching node before it.
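
Symmetrically, these walk backwards through everything that precedes the node (same invented markup):

from bs4 import BeautifulSoup

html = '<div><h4>Hello</h4><p>intro</p><ul><li>Foo</li><li>Bar</li></ul></div>'  # hypothetical snippet
soup = BeautifulSoup(html, 'lxml')
last_li = soup.find_all('li')[-1]
print(last_li.find_previous('h4'))       # the nearest <h4> before the last <li>
print(last_li.find_all_previous('li'))   # every <li> before it, closest first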

CSS selectors

Pass a CSS selector string directly to select() to make a selection.

html='''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.select('.panel .panel-heading'))   # classes are selected with a leading dot
print(soup.select('ul li'))
print(soup.select('#list-2 .element'))        # ids are selected with a leading #
print(type(soup.select('ul')[0]))
html='''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for ul in soup.select('ul'):
    print(ul.select('li'))   # select() can also be called on a sub-tree

Getting attributes

html='''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for ul in soup.select('ul'):
    print(ul['id'])          # attribute access by indexing
    print(ul.attrs['id'])    # equivalent, via the attrs dict

Getting text content

html='''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for li in soup.select('li'):
    print(li.get_text())   # the text of each <li>

Summary

  • Prefer the lxml parser; fall back to html.parser when necessary
  • Attribute-style tag selection (soup.title, soup.p, ...) is fast but offers weak filtering
  • Use find() and find_all() to match a single result or multiple results
  • If you are comfortable with CSS selectors, select() is recommended
  • Remember the common ways to get attributes and text (see the short recap below)
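
To close, a quick recap of the attribute and text accessors used throughout (a toy tag, just for illustration):

from bs4 import BeautifulSoup

html = "<p class='title' name='dromouse'><b>The Dormouse's story</b></p>"  # hypothetical snippet
soup = BeautifulSoup(html, 'lxml')
p = soup.p
print(p['name'])          # attribute access by indexing
print(p.attrs['name'])    # the same attribute via the attrs dict
print(p.string)           # works here because the tag contains a single nested string
print(p.get_text())       # all text inside the tag, concatenated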
