Chapter 5: Advanced Web Scraping (10) 2020-01-27

10. Hands-On: High-Speed Download of the Full Set of Honor of Kings (王者荣耀) HD Wallpapers (3)


Continuing from the previous example, we download the wallpapers on the first page.

Sample code:


import requests
from urllib import parse
import os
from urllib import request

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36",
    "Referer": "http://pvp.qq.com/web201605/wallpaper.shtml",
    "Cookie": "tvfe_boss_uuid=488a8e48c1abea6d; pgv_pvi=7952056320; pgv_pvid=8762704384; pac_uid=0_5df3abeb9d26a; _ga=GA1.2.307520198.1578669164; Qs_lvt_323937=1578669166; Qs_pv_323937=3014627143395298000; isHostDate=18287; isOsSysDate=18287; isOsDate=18287; PTTuserFirstTime=1579996800000; PTTosSysFirstTime=1579996800000; PTTosFirstTime=1579996800000; pgv_info=ssid=s4291900342; ts_last=pvp.qq.com/web201605/wallpaper.shtml; ts_uid=5989698568; weekloop=0-0-0-5; eas_sid=h1h5v8R0g0W4z763z3Q9p4z3d4; gpmtips_cfg=%7B%22iSendApi%22%3A0%2C%22iShowCount%22%3A0%2C%22iOnlineCount%22%3A0%2C%22iSendOneCount%22%3A0%2C%22iShowAllCount%22%3A0%2C%22iHomeCount%22%3A0%7D; 25ccfec4f8bd9940e1abeafb17ed5209=1; pvpqqcomrouteLine=wallpaper_wallpaper_wallpaper_wallpaper; PTTDate=1580048552956"
}


def extract_images(data):
    # Each record carries 8 URL-encoded image links named sProdImgNo_1 .. sProdImgNo_8.
    # Decode each one and replace the "200" size suffix with "0" to get the full-size image.
    image_urls = []
    for x in range(1, 9):
        image_url = parse.unquote(data['sProdImgNo_%d' % x]).replace("200", "0")
        image_urls.append(image_url)
    return image_urls


def main():
    # Copied from the workList request in the browser's network panel; the page issues two
    # workList requests, so if this one returns no data, try the other.
    page_url = "http://apps.game.qq.com/cgi-bin/ams/module/ishow/V1.0/query/workList_inc.cgi?activityId=2735&sVerifyCode=ABCD&sDataType=JSON&iListNum=20&totalpage=0&page=0&iOrder=0&iSortNumClose=1&iAMSActivityId=51991&_everyRead=true&iTypeId=2&iFlowId=267733&iActId=2735&iModuleId=2735&_=1580125769997"
    resp = requests.get(page_url, headers=headers)
    result = resp.json()
    datas = result['List']
    for data in datas:
        image_urls = extract_images(data)
        # The raw name contains "1:1", which is illegal in a Windows folder name and makes
        # the directory creation fail, so drop it with replace() and trim spaces with strip().
        name = parse.unquote(data['sProdName']).replace("1:1", "").strip()
        # e.g. images/猪八戒-年年有余1.jpg
        dirpath = os.path.join("images", name)
        os.makedirs(dirpath, exist_ok=True)  # also creates the "images" parent folder if it is missing
        for index, image_url in enumerate(image_urls):
            request.urlretrieve(image_url, os.path.join(dirpath, "%d.jpg" % (index + 1)))
            print("%s downloaded!" % image_url)


if __name__ == '__main__':
    main()
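
The wallpaper page issues two workList requests and it is easy to copy the wrong one, so before running the full downloader it is worth fetching the URL once and confirming that each record really carries the sProdName and sProdImgNo_1 .. sProdImgNo_8 fields the script relies on. A minimal sanity-check sketch, assuming it is called with the same page_url and headers values defined in the script above:

import requests
from urllib import parse

def inspect_first_record(page_url, headers):
    # Fetch the workList JSON and show the fields of the first wallpaper record.
    resp = requests.get(page_url, headers=headers)
    first = resp.json()['List'][0]
    print(sorted(first.keys()))                  # should list sProdName and sProdImgNo_1 .. sProdImgNo_8
    print(parse.unquote(first['sProdName']))     # decoded wallpaper name
    print(parse.unquote(first['sProdImgNo_1']))  # decoded image URL; swapping "200" for "0" gives the full size

If the printed keys do not include those fields, switch to the other workList URL before downloading anything.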



Previous article: Chapter 5: Advanced Web Scraping (9) 2020-01-26:

https://www.jianshu.com/p/cbb8f839401b

Next article: Chapter 5: Advanced Web Scraping (11) 2020-01-28:

https://www.jianshu.com/p/49e11ba54a3b



The material above was collected from the internet and is for learning and exchange only. If it infringes your rights, please message me and I will remove it. Thank you.
