Crawler: Scraping Douyu Room Information

This is the code from my second hands-on crawler project. Its main purpose is to practice saving data to a CSV file and printing progress status in a loop. It is quite simple and intended for learning.

import requests
import json
import time
import csv

# Write the CSV header row
def create_csv_headers():
    head = ['PlayerName', 'audience', 'game']
    with open('douyu.csv', 'a', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(head)

# Append one row of room data to the CSV file
def save_to_csv(items):
    with open('douyu.csv', 'a', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(items)

def parse_page_detail(response):
    html_json = json.loads(response.text)
    # Extract the fields we need from the returned JSON data
    infos = html_json['data']['rl']
    for info in infos:
        PlayerName = info['rn']
        audience = info['ol']
        game = info['c2name']
        items = [PlayerName, audience, game]
        save_to_csv(items)

def parse_page_list():
    header = {
        'User-Agent': 'Mozilla/5.0 ',
        'referer': 'https://www.douyu.com/g_LOL'
    }
    create_csv_headers()
    for x in range(1, 10):
        print('Downloading data for page %d...' % x)
        url = 'https://www.douyu.com/gapi/rkc/directory/2_1/{}'.format(x)
        response = requests.get(url, headers=header)
        # parse_page_detail writes the rows itself, so its return value is not needed
        parse_page_detail(response)
        time.sleep(2)
    print('Download complete!')

def main():
    parse_page_list()

if __name__ == '__main__':
    main()
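To sanity-check the extraction logic without hitting the live API, you can run the same parsing step against a hand-made payload. The sketch below mirrors the `data['rl']` structure the code above expects; the room values in the sample are invented purely for illustration.

```python
import json

# A hand-made payload mimicking the structure of the Douyu API response;
# the room data here is made up for illustration only.
sample = json.dumps({
    'data': {
        'rl': [
            {'rn': 'Room A', 'ol': 12345, 'c2name': 'LOL'},
            {'rn': 'Room B', 'ol': 678, 'c2name': 'LOL'},
        ]
    }
})

def extract_items(text):
    """Replicates the extraction step of parse_page_detail,
    returning the rows instead of writing them to CSV."""
    html_json = json.loads(text)
    items = []
    for info in html_json['data']['rl']:
        items.append([info['rn'], info['ol'], info['c2name']])
    return items

print(extract_items(sample))
# [['Room A', 12345, 'LOL'], ['Room B', 678, 'LOL']]
```

If the real API ever changes its field names, this kind of offline check fails fast and points straight at the extraction step.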

Run results:

(Screenshots of the resulting CSV viewed in Jupyter Notebook and in Excel omitted.)

Summary

The code is simple and easy to understand. One problem I ran into along the way: the CSV file shows garbled characters when opened directly in Excel. To fix this, open the file in Notepad, save it again as ANSI, and then open the re-saved file in a new Excel workbook.
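An alternative that avoids the Notepad round-trip entirely is to write the file with the 'utf-8-sig' encoding instead of 'utf-8'. The byte-order mark it prepends lets Excel detect UTF-8 automatically. A minimal sketch (the sample row here is invented for illustration):

```python
import csv

# 'utf-8-sig' prepends a BOM, which Excel uses to detect UTF-8,
# so the file opens without garbled characters.
with open('douyu.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(['PlayerName', 'audience', 'game'])
    writer.writerow(['示例房间', 10000, 'LOL'])
```

Python's own open() transparently strips the BOM when reading the file back with encoding='utf-8-sig', so downstream scripts are unaffected.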
