Scraping Maoyan's Top 100 Movies with Regular Expressions

Maoyan also provides real-time box-office data; that can wait for another post.

Ranking rules: the classic films in the Maoyan movie library, sorted from high to low by a combination of rating and number of raters, taking the top 100; updated every day at 10 a.m. The data comes from the "Maoyan movie library".


Step 1: analyze the URL. There are 10 pages with 10 movies each. Observing the URL gives:

http://maoyan.com/board/4?offset=0 (the trailing number is the increment: it grows by 10 per page and is 0 on the first page)

# build the addresses of the 10 pages
base_url = 'http://maoyan.com/board/4?offset={}'
urls = []
for i in range(10):
    urls.append(base_url.format(10*i))
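The same list can be built with a comprehension, and a quick check confirms the offsets come out as 0, 10, ..., 90:

```python
# Build the ten board URLs; the offset runs 0, 10, ..., 90.
base_url = 'http://maoyan.com/board/4?offset={}'
urls = [base_url.format(10 * i) for i in range(10)]

print(len(urls))  # 10
print(urls[0])    # http://maoyan.com/board/4?offset=0
print(urls[-1])   # http://maoyan.com/board/4?offset=90
```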

Step 2: analyze a single page

Requests without headers were rejected as "malicious access", so a User-Agent and Cookie are needed.

# build the request headers
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36',
    'Cookie':'__mta=251008569.1536744988778.1536745503768.1536745544944.17; _lxsdk_cuid=165cd22f1a0c8-0ec18267a3a2f5-3c604504-1fa400-165cd22f1a0c8; uuid_n_v=v1; uuid=53511D10B66F11E894F593DFAB82C37F1716AA64EB0848C2BADBC75AA2E23EA6; _csrf=5f3373e2e85bd09c75c54ffbc624db254862d2ceb2dfbeff81eb9fa58e289001; __guid=17099173.3022114119644780000.1536744988414.424; _lx_utm=utm_source%3DBaidu%26utm_medium%3Dorganic; _lxsdk=53511D10B66F11E894F593DFAB82C37F1716AA64EB0848C2BADBC75AA2E23EA6; __mta=251008569.1536744988778.1536744999954.1536745003054.4; monitor_count=17; _lxsdk_s=165cd22f1a1-995-e6-849%7C%7C47'
}
def get_onepage(url):
    response = requests.get(url, headers=headers).text
    # [1:-1] drops stray matches outside the ten list entries
    index = re.findall('board-index.*?>(.*?)<', response, re.S)[1:-1]
    name = re.findall('name"><a.*?>(.*?)</a>', response, re.S)
    star = re.findall('star">(.*?)</p>', response, re.S)
    date = re.findall('releasetime">(.*?)<', response, re.S)
    # the <img> attribute order varies, so anchor on data-src instead of position
    img = re.findall('dd>.*?data-src="(.*?)"', response, re.S)
    score = re.findall('score.*?integer">(.*?)<.*?fraction">(.*?)<', response, re.S)
    # star and score need some cleanup
    stars = []
    for i in star:
        stars.append(i.split())
    scores = []
    for i, j in score:
        scores.append(i + j)
    for i in range(10):
        all_dict[index[i]] = {'index': index[i], 'name': name[i], 'star': stars[i],
                              'img': img[i], 'date': date[i]}
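The extraction patterns can be exercised offline against a small hand-written fragment mimicking one `<dd>` entry of the board page (a simplified sample, not the real markup):

```python
import re

# A hand-written fragment imitating one <dd> entry of the board page.
html = '''
<dd>
  <i class="board-index board-index-1">1</i>
  <a href="/films/1203"><img data-src="https://p1.meituan.net/movie/demo.jpg" alt="霸王别姬" class="board-img"/></a>
  <p class="name"><a href="/films/1203">霸王别姬</a></p>
  <p class="star">主演:张国荣,张丰毅,巩俐</p>
  <p class="releasetime">上映时间:1993-07-26</p>
  <p class="score"><i class="integer">9.</i><i class="fraction">6</i></p>
</dd>
'''

name = re.findall('name"><a.*?>(.*?)</a>', html, re.S)
star = re.findall('star">(.*?)</p>', html, re.S)
date = re.findall('releasetime">(.*?)<', html, re.S)
img = re.findall('dd>.*?data-src="(.*?)"', html, re.S)
score = re.findall('score.*?integer">(.*?)<.*?fraction">(.*?)<', html, re.S)

print(name)   # ['霸王别姬']
print(date)   # ['上映时间:1993-07-26']
print(score)  # [('9.', '6')]
```

The integer and fraction parts of the score come back as a tuple per movie, which is why the cleanup step joins them with `i + j`.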

In the end score is dropped and not stored. The first run succeeded, but today 龙猫 (My Neighbor Totoro) suddenly had no rating, so that page yielded only 9 scores and the indexing raised an error. The img pattern was also tricky and took several tries: testing showed that the attributes of the <img> tag change position, so their order does not match what the browser shows.
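One way to keep all ten movies even when one field list comes up short is itertools.zip_longest, which pads missing entries with None instead of raising IndexError. A sketch with made-up sample data, not the real page:

```python
from itertools import zip_longest

# Sample data standing in for the parsed lists; one score is missing,
# as happened when 龙猫 lost its rating.
index = ['1', '2', '3']
name = ['霸王别姬', '肖申克的救赎', '龙猫']
score = ['9.6', '9.5']

records = {}
for idx, n, s in zip_longest(index, name, score, fillvalue=None):
    records[idx] = {'index': idx, 'name': n, 'score': s}

print(records['3'])  # {'index': '3', 'name': '龙猫', 'score': None}
```

The movie without a rating is kept, with its score explicitly marked as missing rather than shifting the remaining fields out of alignment.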

Finally, the complete code: crawl all 10 pages, print the data, write it to a JSON file, and read it back.

import requests,re
import json

base_url = 'http://maoyan.com/board/4?offset={}'
urls = []
for i in range(10):
    urls.append(base_url.format(10*i))

# build the request headers
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36',
    'Cookie':'__mta=251008569.1536744988778.1536745503768.1536745544944.17; _lxsdk_cuid=165cd22f1a0c8-0ec18267a3a2f5-3c604504-1fa400-165cd22f1a0c8; uuid_n_v=v1; uuid=53511D10B66F11E894F593DFAB82C37F1716AA64EB0848C2BADBC75AA2E23EA6; _csrf=5f3373e2e85bd09c75c54ffbc624db254862d2ceb2dfbeff81eb9fa58e289001; __guid=17099173.3022114119644780000.1536744988414.424; _lx_utm=utm_source%3DBaidu%26utm_medium%3Dorganic; _lxsdk=53511D10B66F11E894F593DFAB82C37F1716AA64EB0848C2BADBC75AA2E23EA6; __mta=251008569.1536744988778.1536744999954.1536745003054.4; monitor_count=17; _lxsdk_s=165cd22f1a1-995-e6-849%7C%7C47'
}
def get_onepage(url):
    response = requests.get(url, headers=headers).text
    # [1:-1] drops stray matches outside the ten list entries
    index = re.findall('board-index.*?>(.*?)<', response, re.S)[1:-1]
    name = re.findall('name"><a.*?>(.*?)</a>', response, re.S)
    star = re.findall('star">(.*?)</p>', response, re.S)
    date = re.findall('releasetime">(.*?)<', response, re.S)
    # the <img> attribute order varies, so anchor on data-src instead of position
    img = re.findall('dd>.*?data-src="(.*?)"', response, re.S)
    score = re.findall('score.*?integer">(.*?)<.*?fraction">(.*?)<', response, re.S)
    # star and score need some cleanup
    stars = []
    for i in star:
        stars.append(i.split())
    scores = []
    for i, j in score:
        scores.append(i + j)
    # score is deliberately not stored: a movie without a rating leaves
    # only 9 scores on a page and would break the indexing below
    for i in range(10):
        all_dict[index[i]] = {'index': index[i], 'name': name[i], 'star': stars[i],
                              'img': img[i], 'date': date[i]}

all_dict = {}
for i in urls:
    get_onepage(i)
for i in all_dict.items():
    print(i)
with open('maoyan.json', 'w', encoding='utf8') as f:
    json.dump(all_dict, f)
with open('maoyan.json', 'r', encoding='utf8') as f:
    print(json.load(f))

Finally, opening maoyan.json directly shows the Chinese text as \uXXXX escapes. That is not the editor's fault: json.dump escapes non-ASCII characters by default (ensure_ascii=True), and json.load decodes them again, which is why reading the file back in code looks normal.
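The behavior is easy to see in isolation; passing ensure_ascii=False keeps the characters readable in the file itself:

```python
import json

data = {'name': '霸王别姬'}

# Default: non-ASCII characters are written as \uXXXX escapes.
print(json.dumps(data))                      # {"name": "\u9738\u738b\u522b\u59ec"}

# ensure_ascii=False writes the characters as-is.
print(json.dumps(data, ensure_ascii=False))  # {"name": "霸王别姬"}
```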


Reading the file back in code works fine.


An online JSON formatter also parses it without problems.

