Fixing the 'NoneType' object has no attribute 'find' (or 'find_all') error when scraping with BeautifulSoup

I found a web-scraping tutorial online and followed along, but ran into a problem: 'NoneType' object has no attribute 'find'.

Problem description

I am a beginner who has only just started looking at web scraping. As far as I can tell, this error simply means the corresponding HTML element was not found: find() returned None, and None has no element methods, hence the "no attribute" message.
Once we locate the right element, we can go on parsing it with BeautifulSoup's methods as usual.
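To see why the message appears, here is a tiny standalone illustration (the HTML string below is made up for demonstration): when find() matches nothing it returns None, and calling .find() on that None raises exactly this AttributeError.

from bs4 import BeautifulSoup

soup = BeautifulSoup('<div id="7d"><ul><li>demo</li></ul></div>', 'html.parser')
missing = soup.find('div', id='no-such-id')   # no match, so find() returns None
print(type(missing))                          # <class 'NoneType'>
# missing.find('ul')                          # AttributeError: 'NoneType' object has no attribute 'find'
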
Here is where the problem occurred in my script: printing type(data) showed NoneType, which means the element was not found. So we can use another way of locating the element:

data = body.find('div', attrs={'id': '7d'})

Searching by attribute with attrs, matching id="7d" (or class="c7d"), locates the element correctly.
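
If you would like the script to fail with a clearer message (or to try a second selector) instead of crashing with an AttributeError, you can check the result of find() before chaining further calls. A minimal sketch; the fallback class_='c7d' is only an assumption based on this particular page:

data = body.find('div', attrs={'id': '7d'})
if data is None:
    # fallback: try the class attribute instead (class_='c7d' is assumed for this page)
    data = body.find('div', class_='c7d')
if data is None:
    raise ValueError('could not find the 7-day forecast <div>; did the page layout change?')
ul = data.find('ul')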

Setting where the output file is saved

We can choose the save location ourselves; the code is as follows:

def write_data(data, name):
    file_name = name
    with open(file_name, 'a', errors='ignore', newline='') as f:
        f_csv = csv.writer(f)
        f_csv.writerows(data)

if __name__ == '__main__':
    url = 'http://www.weather.com.cn/weather/101190401.shtml'
    html = get_content(url)
    result = get_data(html)
    write_data(result, r'G:\weather.csv')  # I save to the root of drive G:; the raw string keeps the backslash literal
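
If you would rather not hard-code the Windows path as a plain string, pathlib from the standard library can build it for you. A small sketch reusing write_data and result from the script above (the G: drive and the file name weather.csv are just the values used here):

from pathlib import Path

out_file = Path('G:/') / 'weather.csv'   # Path joins the parts and handles separators for you
write_data(result, str(out_file))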

Scraping results

(Screenshot of the scraped results saved in weather.csv.)

Complete code

import requests
import time
import csv
import random
import socket
import http.client
import bs4
from bs4 import BeautifulSoup

def get_content(url, data=None):
    header = {
        'Accept': 'image/webp,image/apng,image/*,*/*;q=0.8',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
    }
    timeout = random.choice(range(80, 180))  # random timeout in seconds
    while True:  # keep retrying until the request succeeds
        try:
            rep = requests.get(url, headers=header, timeout=timeout)
            rep.encoding = 'utf-8'
            break
        except socket.timeout as e:
            print('3:', e)
            time.sleep(random.choice(range(8, 15)))

        except socket.error as e:
            print('4:', e)
            time.sleep(random.choice(range(20, 60)))

        except http.client.BadStatusLine as e:
            print('5:', e)
            time.sleep(random.choice(range(30, 80)))

        except http.client.IncompleteRead as e:
            print('6:', e)
            time.sleep(random.choice(range(5, 15)))
    return rep.text

def get_data(html_text):
    final = []
    bs = BeautifulSoup(html_text, "html.parser")  # create the BeautifulSoup object
    body = bs.body
    data = body.find('div', attrs={'id': '7d'})   # locate the 7-day forecast container by its id
    # data = body.find('div', {'div': '7d'})      # alternative lookup that returns None (no div has a 'div' attribute)
    print(type(data))  # sanity check: should not be <class 'NoneType'>
    ul = data.find('ul')
    li = ul.find_all('li')
    for day in li:
        temp = []
        date = day.find('h1').string
        temp.append(date)
        inf = day.find_all('p')
        temp.append(inf[0].string)
        if inf[1].find('span') is None:
            # no <span> means no highest temperature is listed for this day
            temperature_highest = None
        else:
            temperature_highest = inf[1].find('span').string
            temperature_highest = temperature_highest.replace('℃', '')  # strip the degree sign
        temperature_lowest = inf[1].find('i').string
        temperature_lowest = temperature_lowest.replace('℃','')
        temp.append(temperature_highest)
        temp.append(temperature_lowest)
        final.append(temp)
    return final

def write_data(data, name):
    file_name = name
    with open(file_name, 'a', errors='ignore', newline='') as f:
        f_csv = csv.writer(f)
        f_csv.writerows(data)

if __name__ == '__main__':
    url = 'http://www.weather.com.cn/weather/101190401.shtml'
    html = get_content(url)
    result = get_data(html)
    write_data(result, r'G:\weather.csv')  # raw string keeps the backslash literal

I hope this helps some of you solve the problem.
