Crawling a Zhihu Column

I. Collect the relevant information

1. Find the URL of the column to crawl

https://zhuanlan.zhihu.com/(column name)
For example:
https://zhuanlan.zhihu.com/lingkou-solution
(Figure 1-1-1)

2. Find the request URL, cookie, user-agent and other information

Find the articles request URL; the shortened address looks like this:

https://zhuanlan.zhihu.com/api/columns/lingkou-solution/articles

The full address looks like this:

https://zhuanlan.zhihu.com/api/columns/lingkou-solution/articles?include=data%5B*%5D.admin_closed_comment%2Ccomment_count%2Csuggest_edit%2Cis_title_image_full_screen%2Ccan_comment%2Cupvoted_followees%2Ccan_open_tipjar%2Ccan_tip%2Cvoteup_count%2Cvoting%2Ctopics%2Creview_info%2Cauthor.is_following%2Cis_labeled%2Clabel_info
(Figures 1-2-1 to 1-2-4)
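
The %5B, %2C and %5D sequences in the full address are just URL encoding; decoding part of the query string with the standard library makes the include parameter readable:

from urllib.parse import unquote

fragment = 'data%5B*%5D.admin_closed_comment%2Ccomment_count%2Csuggest_edit'
print(unquote(fragment))
# data[*].admin_closed_comment,comment_count,suggest_edit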

3. Analyze the API

Send a request to the full address and analyze the data it returns.
As shown in Figure 1-3-1, the keys already give us some useful information:

"paging": {
        "is_end": false, // 是否是最后一页
        "totals": 39, // 该专栏文章总数
        // 上一页地址
        "previous": "https://zhuanlan.zhihu.com/columns/lingkou-solution/articles?include=data%5B%2A%5D.admin_closed_comment%2Ccomment_count%2Csuggest_edit%2Cis_title_image_full_screen%2Ccan_comment%2Cupvoted_followees%2Ccan_open_tipjar%2Ccan_tip%2Cvoteup_count%2Cvoting%2Ctopics%2Creview_info%2Cauthor.is_following%2Cis_labeled%2Clabel_info&limit=10&offset=0",
        "is_start": true, // 是否是第一页
        // 下一页地址
        "next": "https://zhuanlan.zhihu.com/columns/lingkou-solution/articles?include=data%5B%2A%5D.admin_closed_comment%2Ccomment_count%2Csuggest_edit%2Cis_title_image_full_screen%2Ccan_comment%2Cupvoted_followees%2Ccan_open_tipjar%2Ccan_tip%2Cvoteup_count%2Cvoting%2Ctopics%2Creview_info%2Cauthor.is_following%2Cis_labeled%2Clabel_info&limit=10&offset=10"
    },
(Figure 1-3-1)

As shown in Figure 1-3-2, data is the list of articles, ten per page; from each entry we need the id and the title.


(Figure 1-3-2)
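
Before writing the crawler it is worth probing the endpoint once to confirm this structure. The sketch below is only a quick check and rests on an assumption: that the endpoint also answers with just the limit and offset parameters (dropping the long include list) and a generic browser User-Agent; if it does not, use the full address from above instead.

import requests

probe_url = ('https://zhuanlan.zhihu.com/api/columns/'
             'lingkou-solution/articles?limit=10&offset=0')
probe_headers = {'user-agent': 'Mozilla/5.0'}

resp = requests.get(probe_url, headers=probe_headers)
j = resp.json()

print(list(j.keys()))                    # expect 'paging' and 'data'
print(j['paging']['totals'], j['paging']['is_end'])
for article in j['data'][:3]:            # id and title of the first few articles
    print(article['id'], article['title'])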

II. Start crawling

1. Code preparation

You need to install wkhtmltopdf and pdfkit.
wkhtmltopdf is downloaded from its official site; on Windows you also need to configure the path to the executable:

https://wkhtmltopdf.org/downloads.html
config = pdfkit.configuration(wkhtmltopdf='path to wkhtmltopdf.exe')
pdfkit.from_url("target URL", "output file", configuration=config)
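
On macOS or Linux, if wkhtmltopdf is already on the PATH, pdfkit can usually find it without an explicit configuration; a minimal sketch:

import pdfkit

# wkhtmltopdf is assumed to be on the PATH, so no configuration object is passed
pdfkit.from_url('https://wkhtmltopdf.org/', 'wkhtmltopdf.pdf')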

pdfkit is a Python library that wraps this tool; it can be installed with pip:

pip install pdfkit

The imports used by the script:

import requests
from requests import RequestException
from bs4 import BeautifulSoup
import pdfkit
import os
import lxml
import re
import time

# Directory that is scanned later for the generated .html files
CURRENT_FILE_PATH = os.path.dirname(os.path.abspath(__file__))

2. Prepare the URL, headers, user-agent, etc.

# Articles API URL for the column (first page, fully URL-encoded)
url = 'https://zhuanlan.zhihu.com/api/columns/lingkou-solution/articles?include=data%5B*%5D.admin_closed_comment%2Ccomment_count%2Csuggest_edit%2Cis_title_image_full_screen%2Ccan_comment%2Cupvoted_followees%2Ccan_open_tipjar%2Ccan_tip%2Cvoteup_count%2Cvoting%2Ctopics%2Creview_info%2Cauthor.is_following%2Cis_labeled%2Clabel_info'
# Replace the cookie with one copied from your own browser session
cookie = '_xsrf=3bb33dbe-5749-4743-b897-e7aa515bf65a; _zap=53a6c2b5-1d4c-4a0e-81e3-8b5d56019c35; d_c0="AEChZA3T_g-PTn1jyfsKuj_apKrFA5GHFVs=|1567579015"; tgw_l7_route=66cb16bc7f45da64562a077714739c11'
user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
headers = {'cookie': cookie, 'user-agent': user_agent}
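
As a side note, the long URL above is just the base address plus a URL-encoded include parameter and the paging arguments, and requests can assemble that query string itself. This is only a sketch; it assumes the API accepts a shortened include list in the same way:

# Hypothetical alternative to the hand-encoded URL above: let requests build the query string
base_url = 'https://zhuanlan.zhihu.com/api/columns/lingkou-solution/articles'
params = {
    'include': 'data[*].comment_count,voteup_count,topics',  # shortened for illustration
    'limit': 10,   # articles per page
    'offset': 0,   # starting position of the page
}
resp = requests.get(base_url, headers=headers, params=params)
print(resp.status_code, resp.url)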

3. Start crawling and fetch all the articles

def get_zhihu_data() -> list:
    array_list = []
    global url

    while True:
        try:
            resp = requests.get(url, headers=headers)
        except RequestException as error:
            print('get data error', error)
            break
        else:
            if resp.status_code != 200:
                print('get data status_code error')
                break
            j = resp.json()
            data = j['data']
            for article in data:
                print(article.get('id'), article.get('title'))
                info = {
                    'id': article.get('id'),
                    'title': article.get('title'),
                }
                array_list.append(info)

            paging = j.get('paging')
            if paging['is_end']:
                break
            # The paging URLs come back without the /api segment, so add it back
            url = paging['next']
            url = url.replace('zhuanlan.zhihu.com', 'zhuanlan.zhihu.com/api')
        time.sleep(2)

        # Only the first page is fetched here; comment out this break to fetch every page
        break
    return array_list
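
A quick way to try the function, assuming the url and headers defined above are still valid:

articles = get_zhihu_data()
print(len(articles))      # 10 while the first-page-only break is in place
print(articles[:2])       # [{'id': ..., 'title': ...}, ...]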

4. Visit each article's page and save it as a local HTML file

def save_data_html(array_list):
    index = 1
    for item in array_list:
        url = 'https://zhuanlan.zhihu.com/p/%s' % item['id']
        # Prefix each file with a zero-padded index and strip '/' so the title is a valid file name
        name = f'{index:03}' + '-' + item['title']
        name = name.replace('/', '')
        html = requests.get(url, headers=headers).text

        soup = BeautifulSoup(html, 'lxml')
        content = soup.prettify()
        # content = soup.find(class_='Post-Main Post-NormalMain').prettify()
        content = content.replace('data-actual', '')
        # Demote <h1> headings to <h2>
        content = content.replace('h1>', 'h2>')
        # Drop inline base64 image placeholders
        content = re.sub(r'src="data:image.*?"', '', content)
        # content = '<h1>%s</h1>%s' % (name, content)
        with open('%s.html' % name, 'w', encoding='utf-8') as f:
            f.write(content)
        index += 1

III. Convert the HTML files to PDF

def cover_html_to_pdf():
    # Collect every .html file in the target directory
    file_list = os.listdir(CURRENT_FILE_PATH)
    all_html_list = []
    for path in file_list:
        file_extension = os.path.splitext(path)[1]
        if file_extension == '.html':
            all_html_list.append(path)
    all_html_list.sort()
    print(all_html_list)

    # Merge all HTML files into one PDF; on Windows pass configuration=config
    # (see the configuration example above) if wkhtmltopdf is not on the PATH
    pdfkit.from_file(all_html_list, 'zhihu.pdf')
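
A minimal sketch of how the three functions above fit together (the complete script is linked below):

if __name__ == '__main__':
    # 1. List the articles in the column
    articles = get_zhihu_data()
    # 2. Download each article and save it as an HTML file
    save_data_html(articles)
    # 3. Merge the HTML files into a single PDF
    cover_html_to_pdf()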

The converted result looks like this:


(Figure 3-1-1)

Full source code:

https://github.com/yangyu2010/Crossin-Day21/blob/master/Other/cross_zhihu.py
