Scraping Websites with requests and bs4 (Part 2)

Finding all HTML elements with a specific tag

from bs4 import BeautifulSoup
html_sample = '\
<html> \
 <body> \
 <h1 id="title">Hello World</h1> \
 <a class="link" href="#">This is link1</a> \
 <a class="link" href="#link2">This is link2</a> \
 </body> \
</html>'

soup = BeautifulSoup(html_sample,'html.parser') # omitting 'html.parser' produces a "no parser was explicitly specified" warning
print(soup.text)

Using select to find elements with the h1 tag

soup = BeautifulSoup(html_sample,'html.parser')
header = soup.select('h1')
print(header)
print(header[0])
print(header[0].text)
[<h1 id="title">Hello World</h1>]
<h1 id="title">Hello World</h1>
Hello World
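
As a small aside, a hedged sketch: recent bs4 versions also provide select_one, which returns the first match directly instead of a one-element list.

soup = BeautifulSoup(html_sample,'html.parser')
header = soup.select_one('h1')   # first matching tag, or None if nothing matches
print(header.text)               # Hello World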

Using select to find elements with the a tag

soup = BeautifulSoup(html_sample,'html.parser')
alink = soup.select('a')
print(alink)
for link in alink:
    print(link)
for link in alink:
    print(link.text)
[<a class="link" href="#">This is link1</a>, <a class="link" href="#link2">This is link2</a>]
<a class="link" href="#">This is link1</a>
<a class="link" href="#link2">This is link2</a>
This is link1
This is link2
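
For comparison, a minimal sketch of the same lookup with find_all, which matches by tag name rather than a CSS selector and returns the same list here.

soup = BeautifulSoup(html_sample,'html.parser')
for link in soup.find_all('a'):  # equivalent to soup.select('a') for a plain tag name
    print(link.text)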

Getting elements with a specific CSS attribute

Using select to find all elements whose id is "title" (the id must be prefixed with #)

soup = BeautifulSoup(html_sample,'html.parser')
alink = soup.select('#title')
print(alink)
[<h1 id="title">Hello World</h1>]
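
A short sketch: the id selector can also be combined with a tag name to narrow the match, just like in CSS.

soup = BeautifulSoup(html_sample,'html.parser')
print(soup.select('h1#title'))   # only <h1> elements whose id is "title"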

Using select to find all elements whose class is "link" (the class must be prefixed with .)

soup = BeautifulSoup(html_sample,'html.parser')
for link in soup.select('.link'):
    print(link)
<a class="link" href="#">This is link1</a>
<a class="link" href="#link2">This is link2</a>
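
Similarly, a sketch combining tag and class, so only a tags with class "link" are returned:

soup = BeautifulSoup(html_sample,'html.parser')
for link in soup.select('a.link'):   # <a> tags whose class is "link"
    print(link.text)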

Getting the links from all a tags

Using select to get the href link of every a tag

soup = BeautifulSoup(html_sample,'html.parser')
alink = soup.select('a')
for link in alink:
    print(link['href'])
#
#link2
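
Putting the pieces together, a hedged sketch of collecting every href from a live page fetched with requests; the URL is a placeholder assumption, and .get('href') is used so tags without an href return None instead of raising a KeyError.

import requests
from bs4 import BeautifulSoup

res = requests.get('https://example.com')        # placeholder URL, assumption for illustration
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')
for link in soup.select('a'):
    href = link.get('href')                      # None if the tag has no href attribute
    if href:
        print(href)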

Example:

a = '<a href="#" qoo=123 abc=456>i am a link</a>'
soup2 = BeautifulSoup(a,'html.parser')
print(soup2.select('a')[0]['qoo'])
print(soup2.select('a')[0]['abc'])
print(soup2.select('a')[0]['href'])
123
456
#
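
A related sketch: indexing a missing attribute with tag['name'] raises a KeyError, while tag.get() returns None, which is handy when scraped markup is inconsistent.

tag = soup2.select('a')[0]
print(tag.get('qoo'))       # 123
print(tag.get('nothing'))   # None instead of a KeyError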

Locating elements with CSS selectors

Developer tools (built into most browsers)

InfoLite (Chrome-only extension)

Download: https://chrome.google.com/webstore/detail/infolite/ipjbadabbpedegielkhgpiekdlmfpgal
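
Once the developer tools or InfoLite give you a CSS path, it can be passed straight to select(). A hedged sketch; the URL and the selector below are assumptions for illustration only.

import requests
from bs4 import BeautifulSoup

res = requests.get('https://news.example.com')     # placeholder URL
soup = BeautifulSoup(res.text, 'html.parser')
for item in soup.select('.news-item h2 a'):        # hypothetical selector copied from the tool
    print(item.text, item.get('href'))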

Converting time strings

from datetime import datetime

String to datetime: strptime

dt = datetime.strptime(timesource,'%Y年%m月%d日%H时%M分%S秒')

Datetime to string: strftime

dt.strftime('%Y-%m-%d-%H-%M-%S')

Example:

from datetime import datetime
timesource = '2018年3月20日17时25分10秒'
dt = datetime.strptime(timesource,'%Y年%m月%d日%H时%M分%S秒')
print(dt)
print(dt.strftime('%Y年%m月%d日%H时%M分%S秒'))
2018-03-20 17:25:10
2018年03月20日17时25分10秒
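
As a follow-up sketch, the parsed datetime object can be rendered in any other pattern, for example an ISO-style string for storage:

print(dt.strftime('%Y-%m-%d %H:%M:%S'))   # 2018-03-20 17:25:10
print(dt.isoformat())                      # 2018-03-20T17:25:10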
