Hands-on crawler: scraping Qiushibaike jokes (the detail pages)

  • First scrape the detail-page links and join them into complete URLs
  • Scrape each detail page and strip out the characters you don't need
  • When the regex matches more than one result, use slicing to pick out the data you want
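The last two bullets can be shown with a self-contained snippet. The HTML fragment below is made up for illustration; it is not the live Qiushibaike markup:

```python
import re

# Made-up detail-page fragment: joke text mixed with an emoticon <img> link
sample = ('<div class="content">First joke <img src="smile.png" align="top"></div>'
          '<div class="content">Second joke</div>')

# findall returns every match; slice to keep only the one you need
matches = re.findall(r'<div class="content">(.+?)</div>', sample, re.DOTALL)
first = matches[0]

# Strip the unwanted <img ...> tag from the captured text
clean = re.sub(r'<img.+?>', '', first).strip()
print(clean)  # → First joke
```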
#!/usr/bin/python
# Filename: 实战 糗事百科(抓详情页).py
# Date    : 2020/06/15
# Author  : --king--
# Ctrl+Alt+L auto-formats the file (PyCharm)


import requests
import re
import time


# Collect the detail-page URLs from one list page
def detail_pages(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
    html = requests.get(url, headers=headers)
    text = html.text
    # The original regex was lost when this post was extracted; the pattern
    # below is a plausible reconstruction that captures each article's
    # relative link, e.g. /article/123456789
    links = re.findall(r'<a\s+href="(/article/\d+)"', text)
    urls = []
    for link in links:
        urls.append('https://www.qiushibaike.com' + link)
    return urls


# Parse each detail page and extract the joke
def parse_page(urls):
    for url in urls:
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
        html = requests.get(url, headers=headers)
        text = html.text
        # The original regex was lost during extraction; this reconstruction
        # captures the joke body. findall returns several matches here, so
        # slice off the first one.
        jokes = re.findall(r'<div\s+class="content">\s*(.+?)\s*</div>', text, re.DOTALL)
        # The captured text still contains emoticon links such as
        # <img src="xxx" align="yyy"> plus <br/> line breaks; strip them
        joke = re.sub(r'<img.+?>|<br\s*/?>', '', jokes[0])
        # jokes[0].replace('<br/>', '') would also work for fixed substrings
        print(joke)
        time.sleep(2)


def main():
    base_url = 'https://www.qiushibaike.com/text/page/{}/'
    for i in range(1, 11):
        page_url = base_url.format(i)
        detail_page_list = detail_pages(page_url)
        parse_page(detail_page_list)
        time.sleep(2)
        print(i)
        break  # remove this break to crawl all ten pages


if __name__ == '__main__':
    main()
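As a side note, the manual string concatenation in detail_pages can also be done with the standard library's urljoin, which handles leading and trailing slashes for you (the article path below is hypothetical):

```python
from urllib.parse import urljoin

base = 'https://www.qiushibaike.com'
relative = '/article/123456789'  # hypothetical relative link from a list page
print(urljoin(base, relative))  # → https://www.qiushibaike.com/article/123456789
```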
