Contents
Preparation
Getting the Cookie etc.
Finding the table-of-contents tags
Finding the content tags
Code
Video walkthrough
Results
References
Each li tag contains an a or b tag. A chapter's link is the novel's URL followed by an integer that starts at 100536 and increases by one per chapter, and the chapter title can be pulled out of the tag with a simple string search.
The div with id nr1 holds the body text of the chapter.
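The page structure described above can be checked with a minimal sketch; the sample HTML below is made up but follows the same pattern (li tags wrapping a or b tags, and a div with id nr1 for the chapter body):

```python
from bs4 import BeautifulSoup

# Made-up sample HTML mimicking the page layout described above.
sample = """
<ul>
  <li><a href="https://www.kunnu.com/tianguancifu/100536.htm">1、第一章</a></li>
  <li><b>2、第二章</b></li>
</ul>
<div id="nr1">chapter text goes here</div>
"""

soup = BeautifulSoup(sample, 'html.parser')

# Each li wraps either an a or a b tag holding the chapter title.
for li in soup.find_all('li'):
    print(li.get_text())

# The chapter body lives in the div with id nr1.
print(soup.find('div', attrs={'id': 'nr1'}).text)
```

Note that `get_text()` is a cleaner way to read the title than the raw string search used in the script below, but both rely on the same layout.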
"""
--coding:utf-8--
@File: __init__.py.py
@Author:frank yu
@DateTime: 2021.01.06 9:24
@Contact: [email protected]
@Description:
"""
import random
import time

import requests
from bs4 import BeautifulSoup

tianguan = 'https://www.kunnu.com/tianguancifu/'
headers = {
    'User-Agent': 'Mozilla/5.0 ArchLinux (X11; U; Linux x86_64; en-US) AppleWebKit/534.30 (KHTML, like Gecko) '
                  'Chrome/12.0.742.100',
    'Cookie': '_ga=GA1.2.635907381.1607343598; _gid=GA1.2.910170177.1609896386; _gat_gtag_UA_16539659_3=1',
    'Connection': 'keep-alive',
    'Referer': 'https://www.kunnu.com/tianguancifu/'
}
# Fetch the table of contents
def get_contents(url, headers, start):
    res = requests.get(url, headers=headers, timeout=30)
    res.encoding = res.apparent_encoding
    soup = BeautifulSoup(res.text, 'html.parser')
    contents = soup.find_all('li')
    # the first two li tags are not chapter entries, drop them
    contents.pop(0)
    contents.pop(0)
    res = []
    for con in contents:
        link = f'{url}{start}.htm'
        begin = str(con).find('">') + 2
        # the title is closed by </a>, or by </b> when there is no a tag
        end = str(con).find('</b>') if str(con).find('</a>') == -1 else str(con).find('</a>')
        title = '第' + str(con)[begin:end].replace('、', '章')
        res.append((link, title))
        start += 1
    # for i in res:
    #     print(i)
    # print(len(res))
    return res
# Scrape the novel
def get_novel(contents, name):
    name = name + '_爬取.txt'
    with open(name, 'w', encoding='utf-8') as novel:
        for con in contents:
            link, title = con
            while True:
                try:
                    res = requests.get(link, headers=headers, timeout=60)
                    res.encoding = 'utf-8'
                    soup = BeautifulSoup(res.text, 'html.parser')
                    txt = soup.find(name='div', attrs={'id': 'nr1'})
                    novel.write('\n' + title + '\n')
                    novel.write(txt.text)
                    print(title + ' 已完成')
                    # be polite: pause 1-2 seconds between chapters
                    time.sleep(random.random() + 1)
                    break
                except Exception as e:
                    # on failure, wait a bit longer and retry the same chapter
                    print(e)
                    time.sleep(random.random() + 5)
    print('全部爬取完毕, 生成' + name)
    return name
# Text post-processing: strip the site's '鲲' watermark from each line
def deal(name):
    new_name = name.split('_')[0] + '.txt'
    with open(new_name, encoding='utf-8', mode='w') as newfile:
        with open(name, encoding='utf-8', mode='r') as file:
            for line in file:
                index = line.find('鲲')
                if index != -1:
                    # keep the text before the watermark and restore the newline
                    newfile.write(line[:index - 1] + '\n')
                    continue
                newfile.write(line)
    return new_name
if __name__ == '__main__':
    con = get_contents(tianguan, headers, 100536)
    novel = get_novel(con, '天官赐福')
After scraping, you can run some text post-processing on the result, such as the deal function above.
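A minimal in-memory sketch of the post-processing idea behind deal(): drop the site watermark from each line while keeping the surrounding text. The sample lines below are made up for illustration:

```python
# Made-up sample lines; the second one carries the kind of watermark deal() removes.
raw = [
    '第1章开始\n',
    '正文内容。 鲲弩小说 www.kunnu.com\n',
    '更多正文内容\n',
]

cleaned = []
for line in raw:
    index = line.find('鲲')
    if index != -1:
        # keep only the text before the watermark, restoring the newline
        cleaned.append(line[:index - 1] + '\n')
    else:
        cleaned.append(line)

print(''.join(cleaned))
```

The same line-by-line filter works on the scraped file, which is exactly what deal() does with its two open() handles.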
《天官赐福》, go crawl for me!!! (a simple web-scraping exercise)
I imported the txt file into a novel-reading app (Migu Reading) and it displayed without problems.
----------------------- Update 2021-06-22 -----------------------
The site has changed its layout, so the code above no longer works; you can try 笔趣阁 instead.
requests
BeautifulSoup