Scraping the full text of Dream of the Red Chamber (Hongloumeng):
Using the browser's developer tools, we can see that each chapter's title and link live in the <li> tags of the table-of-contents list (each <li> wrapping an <a> tag).
Now let's extract these two pieces of data with bs4.
import requests
from bs4 import BeautifulSoup
main_url = 'https://www.shicimingju.com/book/hongloumeng.html'
response = requests.get(url=main_url)
response.encoding = 'utf-8'
page_text = response.text
soup = BeautifulSoup(page_text, 'lxml')
Having located the tags, it's best to use a hierarchical CSS selector:
a_list = soup.select('.book-mulu > ul > li')
Chapter title: loop over each of the matching <li> tags located above.
for a in a_list:
    title = a.string
    detail_url = 'https://www.shicimingju.com' + a.a['href']
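To see how the hierarchical selector and the attribute lookups behave, here is a minimal offline sketch. The class name .book-mulu matches the selector used above, but the HTML snippet and chapter names are made up for illustration, and html.parser is used so the example has no lxml dependency.

```python
from bs4 import BeautifulSoup

# A tiny stand-in for the catalog page's table of contents.
html = '''
<div class="book-mulu">
  <ul>
    <li><a href="/book/hongloumeng/1.html">第一回</a></li>
    <li><a href="/book/hongloumeng/2.html">第二回</a></li>
  </ul>
</div>
'''
soup = BeautifulSoup(html, 'html.parser')
li_list = soup.select('.book-mulu > ul > li')   # hierarchical selector
for li in li_list:
    title = li.string        # .string descends through the single <a> child
    href = li.a['href']      # relative link stored on the <a> tag
    print(title, 'https://www.shicimingju.com' + href)
```

Note that each <li> contains exactly one <a>, which is why .string on the <li> returns the chapter title directly.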
Request each detail page and parse out the chapter content:
response = requests.get(url=detail_url, headers=headers)
response.encoding = 'utf-8'
page_text_detail = response.text
After inspecting a detail page, we can see that the chapter text all sits in a <div> with class chapter_content:
soup = BeautifulSoup(page_text_detail, 'lxml')
div_tag = soup.find('div', class_="chapter_content")
content = div_tag.text
Finally, we need to save the data. The complete code is as follows:
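The find-and-extract step can be verified offline with a minimal stand-in for a chapter page. The class name chapter_content matches the selector above; the HTML and text here are invented for illustration, and html.parser stands in for lxml.

```python
from bs4 import BeautifulSoup

# Minimal stand-in for a chapter detail page.
detail_html = '<div class="chapter_content"><p>此开卷第一回也。</p></div>'
soup = BeautifulSoup(detail_html, 'html.parser')
div_tag = soup.find('div', class_='chapter_content')  # match by class attribute
content = div_tag.text                                # strips all inner tags
print(content)  # 此开卷第一回也。
```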
# Scrape the full text of Hongloumeng
import requests
from bs4 import BeautifulSoup
headers = {
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36'
}
main_url = 'https://www.shicimingju.com/book/hongloumeng.html'
response = requests.get(url=main_url, headers=headers)
response.encoding = 'utf-8'
page_text = response.text
print(page_text)
# Open the output file in append mode
f = open('hongloumeng.txt', 'a+', encoding='utf-8')
soup = BeautifulSoup(page_text, 'lxml')
a_list = soup.select('.book-mulu > ul > li')
for a in a_list:
    title = a.string
    detail_url = 'https://www.shicimingju.com' + a.a['href']
    # Request the detail page and parse out the chapter content
    response = requests.get(url=detail_url, headers=headers)
    response.encoding = 'utf-8'
    page_text_detail = response.text
    soup = BeautifulSoup(page_text_detail, 'lxml')
    div_tag = soup.find('div', class_="chapter_content")
    content = div_tag.text
    f.write(title + ':' + content + '\n')
    print(title, 'saved!')
f.close()
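One detail worth knowing: concatenating 'https://www.shicimingju.com/' with an href that starts with '/' produces a double slash. The standard library's urljoin handles relative hrefs correctly; the chapter path below is just an example value, not taken from the site.

```python
from urllib.parse import urljoin

# urljoin collapses the base's trailing slash against the href's leading
# slash, avoiding the '//' that plain string concatenation would produce.
base = 'https://www.shicimingju.com/'
detail_url = urljoin(base, '/book/hongloumeng/1.html')
print(detail_url)  # https://www.shicimingju.com/book/hongloumeng/1.html
```

When looping over all 120 chapters, it is also polite to add a short time.sleep() between requests so the site is not hammered.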