A Python Crawler Example with BeautifulSoup: Downloading MM Pictures

A Python newbie learns web scraping

I've been learning Python on and off for about two months now. After stumbling through quite a few pitfalls, I finally got this crawler working. It means a fair bit to me, so I'm archiving it here as a keepsake.

Background

Goal: scrape the MM pictures from the first 10 pages of the site
Site: http://jandan.net/ooxx (Jandan, a perfectly respectable site)
Analysis:
1. Find the URLs of all the images on the current page
2. Find the URL of the next page (see the sketch after this list)
3. Loop through 10 pages, saving each page's MM pictures
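Step 2 is the only fiddly part: the "next page" link sits right after the span that marks the current page number, so the code walks sibling nodes to reach it. Here is a minimal sketch of that navigation; the HTML fragment is an assumed reconstruction of Jandan's pager markup for illustration, not copied from the live site.

from bs4 import BeautifulSoup

# Assumed pager markup: a span marking the current page,
# followed by a link to the next page (protocol-relative href)
html = '''
<span class="current-comment-page">[3]</span>
<a href="//jandan.net/ooxx/page-2#comments">older</a>
'''

soup = BeautifulSoup(html, 'html.parser')
span = soup.find('span', class_='current-comment-page')
# next_sibling is the whitespace text node after the span;
# the sibling after that is the <a> tag we want
link = span.next_sibling.next_sibling
print('http:' + link['href'])  # http://jandan.net/ooxx/page-2#comments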

The code:

import requests
from bs4 import BeautifulSoup

# Shared request headers: a browser User-Agent so the site serves the normal page
HEADER = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36'}

def img_urls(url):
    # Collect the URLs of all images on the page
    res = requests.get(url, headers=HEADER).text
    soup = BeautifulSoup(res, 'html.parser')
    imgs = soup.find_all('img')
    # src values are protocol-relative ("//..."), so prepend the scheme
    return ['http:' + img['src'] for img in imgs]

def next_url(url):
    # Find the URL of the next page
    res = requests.get(url, headers=HEADER).text
    soup = BeautifulSoup(res, 'html.parser')
    page_tag = soup.find('span', class_='current-comment-page')
    # Skip the whitespace text node after the span to reach the <a> link
    return 'http:' + page_tag.next_sibling.next_sibling['href']
        
def save_img(url):
    # Download one image and name it after the last segment of its URL
    res = requests.get(url, headers=HEADER).content
    img_name = url.split('/')[-1]
    with open(img_name, 'wb') as f:
        f.write(res)

def main():
    url = 'http://jandan.net/ooxx'
    number = 10
    while number:
        number -= 1
        for img_url in img_urls(url):
            save_img(img_url)
        url = next_url(url)

if __name__ == '__main__':
    main()
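Note that the images land in the current working directory, named after the last path segment of their URL. A quick illustration of the naming, using a made-up image URL:

url = 'http://wx1.example.cn/mw600/abc123.jpg'  # made-up URL, for illustration only
print(url.split('/')[-1])  # abc123.jpg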


That's about my current level; I'll keep optimizing it bit by bit.
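One likely first optimization (my own sketch, assuming the same page structure and not tested against the live site): reuse a single requests.Session, fetch and parse each page only once instead of twice, and skip individual images that fail to download.

import requests
from bs4 import BeautifulSoup

HEADER = {'user-agent': 'Mozilla/5.0'}  # minimal UA string, for the sketch

def crawl(start_url='http://jandan.net/ooxx', pages=10):
    # One session reuses the TCP connection and carries the headers
    session = requests.Session()
    session.headers.update(HEADER)
    url = start_url
    for _ in range(pages):
        # Fetch and parse the page a single time
        soup = BeautifulSoup(session.get(url).text, 'html.parser')
        for img in soup.find_all('img'):
            if not img.get('src'):  # some <img> tags may lack a src
                continue
            img_url = 'http:' + img['src']
            try:
                data = session.get(img_url, timeout=10).content
            except requests.RequestException:
                continue  # skip images that fail instead of crashing
            with open(img_url.split('/')[-1], 'wb') as f:
                f.write(data)
        # Same sibling trick as before to reach the next page
        page_tag = soup.find('span', class_='current-comment-page')
        url = 'http:' + page_tag.next_sibling.next_sibling['href']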
