Web Scraping in Practice (1) | Scraping Douban Movie Comments

Hi everyone! As a newcomer to the Internet industry, I write these posts mainly to consolidate what I'm learning. My knowledge is limited, so some mistakes are bound to slip into my posts; corrections and pointers from more experienced readers are very welcome!
Blog homepage: https://blog.csdn.net/weixin_52720197?spm=1018.2118.3001.5343

1. Analyzing the URL

Page 1:
https://movie.douban.com/subject/34779692/comments?limit=20&status=P&sort=new_score

Page 2:
https://movie.douban.com/subject/34779692/comments?start=20&limit=20&status=P&sort=new_score

Page 3:
https://movie.douban.com/subject/34779692/comments?start=40&limit=20&status=P&sort=new_score

Comparing the three URLs, the only thing that changes is the start parameter, which increases by 20 from one page to the next (0, 20, 40, ...), i.e. 20 comments per page.
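As a quick sanity check of that pattern, here is a minimal sketch that generates the per-page URLs (the page_urls helper is my own illustrative name, not part of the original post):

# A minimal sketch: generate the comment-page URLs from the observed pattern.
base = "https://movie.douban.com/subject/34779692/comments"

def page_urls(n_pages):
    # start = 0, 20, 40, ... (20 comments per page)
    for i in range(n_pages):
        yield f"{base}?start={i * 20}&limit=20&status=P&sort=new_score"

for url in page_urls(3):
    print(url)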

2. Fetching the Data

import requests
import pandas as pd
import re
import time
# HTML parsing
from bs4 import BeautifulSoup

# Request headers (a browser User-Agent helps the request look like a normal visit)
header = {'Content-Type': 'text/html; charset=utf-8',
          'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36'}

# Logged-in session cookies (copy your own from the browser's developer tools)
Cookie = {'Cookie':'ll="118373"; bid=nwUzagXYPW0; __yadk_uid=6Acgns8MqIiVX8xdeR9XdvZpeZvyU70n; _vwo_uuid_v2=DA4FF1A9A542C586225D4F1CACD8FC1A2|a655a1bb3709fcf4e62213c6e574c19e; __gads=ID=bc89a11f4804e4d2-2224f02426c60020:T=1614221468:RT=1614221468:S=ALNI_MZuqSQCNFE-Xy2UlpFwBX1rX5jbnw; _vwo_uuid_v2=DA4FF1A9A542C586225D4F1CACD8FC1A2|a655a1bb3709fcf4e62213c6e574c19e; __utma=30149280.1421230719.1614221458.1617012037.1617347693.6; __utmc=30149280; __utmz=30149280.1617347693.6.5.utmcsr=baidu|utmccn=(organic)|utmcmd=organic; __utmc=223695111; _pk_ref.100001.4cf6=%5B%22%22%2C%22%22%2C1617347695%2C%22https%3A%2F%2Fwww.douban.com%2F%22%5D; _pk_ses.100001.4cf6=*; ap_v=0,6.0; __utmb=30149280.3.10.1617347693; __utma=223695111.243723590.1614221458.1617347695.1617347921.6; __utmb=223695111.0.10.1617347921; __utmz=223695111.1617347921.6.5.utmcsr=baidu|utmccn=(organic)|utmcmd=organic; _pk_id.100001.4cf6=4453b42a3b7f578e.1614221457.5.1617348789.1617007153.'}

# Build the request URL from two fixed parts, with start in between
url_1 = "https://movie.douban.com/subject/34779692/comments?start="
'''
https://movie.douban.com/subject/34779692/comments?limit=20&status=P&sort=new_score
https://movie.douban.com/subject/34779692/comments?start=20&limit=20&status=P&sort=new_score
https://movie.douban.com/subject/34779692/comments?start=40&limit=20&status=P&sort=new_score
'''
url_2 = "&limit=20&sort=new_score&status=P"

# Loop over the pages; the loop variable i drives start = 0, 20, 40, ...
i = 0

while True:
    # Assemble the URL for page i (start = i * 20)
    url = url_1+str(i*20)+url_2
    print(url)
    try:
        # Request the page
        html = requests.get(url, headers=header, cookies=Cookie)
        # Parse the HTML with BeautifulSoup
        soup = BeautifulSoup(html.content, 'lxml')

        # Comment timestamps: <span> tags whose class is comment-time
        comment_time_list = soup.find_all('span', attrs={'class': 'comment-time'})
        # Stop condition: a page with no comments means we have run out of pages
        if len(comment_time_list) == 0:
            break
        # Commenter info blocks: <span class="comment-info">
        use_name_list = soup.find_all('span', attrs={'class': 'comment-info'})
        # Comment text: <span class="short">
        comment_list = soup.find_all('span', attrs={'class': 'short'})

        for r in range(len(comment_time_list)):
            # Not every comment carries a rating, so look the rating span up
            # inside this comment's own info block to keep the rows aligned
            rating = use_name_list[r].find('span', attrs={'class': re.compile(r"allstar(\s\w+)?")})
            data1 = [
                (comment_time_list[r].get_text(strip=True),
                 # The username sits in the <a> tag inside comment-info
                 use_name_list[r].a.string,
                 comment_list[r].string,
                 rating.get('class')[0] if rating else '',
                 rating.get('title') if rating else '')
            ]
            data2 = pd.DataFrame(data1)
            # Append this row to douban_movie.csv
            data2.to_csv('douban_movie.csv', header=False, index=False, mode='a+', encoding="utf-8-sig")
        print('page '+str(i+1)+' done')
    except Exception as e:
        # Report the error but keep going with the next page
        print("something is wrong:", e)
    i = i+1
    time.sleep(3)

[Screenshot: a sample of the scraped rows in douban_movie.csv]
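Since the script appends rows without a header, here is a minimal sketch of reading the result back with pandas; the column names are my own illustrative labels, not something the scraper writes:

import pandas as pd

# The scraper appends rows without a header, so supply column names here.
# These names are illustrative labels, not part of the original script.
columns = ['comment_time', 'username', 'comment', 'rating_class', 'rating_title']
df = pd.read_csv('douban_movie.csv', names=columns, encoding='utf-8-sig')
print(df.head())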
