Scraping the NMPA (drug administration) site

First, open the site in Chrome and use DevTools to find the page source, the request URL, and the form data.
Each company name on the list page is itself a link, so to get the company details we first fetch the list pages, then request each company's detail record by its ID.
Fetch every list page with requests and collect the IDs into a list:

import requests
import json

url = 'http://scxk.nmpa.gov.cn:81/xk/itownet/portalAction.do?method=getXkzsList'
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36"
}
id_list = []
all_data_list = []
for i in range(1, 6):  # first 5 list pages
    page = str(i)
    data = {
        "on": "true",
        "page": page,  # the current page number, not the literal string "page"
        "pageSize": "15",
        "productName": "",
        "conditionType": "1",
        "applyname": "",
        "applysn": "",
    }

    json_ids = requests.post(url=url, headers=headers, data=data).json()
    for dic in json_ids['list']:
        id_list.append(dic['ID'])

Then POST each collected ID to the detail endpoint to fetch the company information, and save it all to a local file:

post_url = 'http://scxk.nmpa.gov.cn:81/xk/itownet/portalAction.do?method=getXkzsById'
for id in id_list:
    data = {'id': id}
    detail_json = requests.post(url=post_url, headers=headers, data=data).json()
    all_data_list.append(detail_json)

# write everything once, after the loop, instead of rewriting the file per record
with open('./alldata.json', 'w', encoding='utf-8') as fp:
    json.dump(all_data_list, fp, ensure_ascii=False)
print('over!!')

The complete code is as follows:

import requests
import json

url = 'http://scxk.nmpa.gov.cn:81/xk/itownet/portalAction.do?method=getXkzsList'
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36"
}
id_list = []
all_data_list = []

# collect the company IDs from the first 5 list pages
for i in range(1, 6):
    page = str(i)
    data = {
        "on": "true",
        "page": page,
        "pageSize": "15",
        "productName": "",
        "conditionType": "1",
        "applyname": "",
        "applysn": "",
    }
    json_ids = requests.post(url=url, headers=headers, data=data).json()
    for dic in json_ids['list']:
        id_list.append(dic['ID'])

# fetch the detail record for each ID
post_url = 'http://scxk.nmpa.gov.cn:81/xk/itownet/portalAction.do?method=getXkzsById'
for id in id_list:
    data = {'id': id}
    detail_json = requests.post(url=post_url, headers=headers, data=data).json()
    all_data_list.append(detail_json)

# write all records once, after the loop finishes
with open('./alldata.json', 'w', encoding='utf-8') as fp:
    json.dump(all_data_list, fp, ensure_ascii=False)
print('over!!')
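Once alldata.json has been written, it can be processed offline without hitting the site again. Below is a minimal sketch of the dump-and-reload round trip; the two sample records and their field names (`epsName`, `businessLicenseNumber`) are made up for illustration and are not the real response schema:

```python
import json

# Stand-in records for the scraped detail JSON; keys are hypothetical.
all_data_list = [
    {"businessLicenseNumber": "9100000000000000XA", "epsName": "示例企业一"},
    {"businessLicenseNumber": "9200000000000000XB", "epsName": "示例企业二"},
]

# Persist the collected records once, after scraping finishes.
with open("alldata.json", "w", encoding="utf-8") as fp:
    json.dump(all_data_list, fp, ensure_ascii=False, indent=2)

# Reload and summarize to confirm the dump round-trips intact.
with open("alldata.json", encoding="utf-8") as fp:
    records = json.load(fp)
print(f"saved {len(records)} records")  # prints: saved 2 records
```

`ensure_ascii=False` keeps the Chinese company names readable in the output file instead of escaping them to `\uXXXX` sequences.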


