Python Crawler in Practice: Downloading Beauty Pictures

I've been learning Python web scraping recently and wrote a simple recursive crawler that downloads beauty pictures. Without further ado, here are the results:

[Screenshots: 捕获.JPG and 2.JPG, showing the downloaded image folder]
That's more than three thousand pictures in total :)
The Python version is 3.5. The program uses urllib.request to fetch pages and BeautifulSoup to parse the returned HTML, extracting the image links and the page links found on the main page. After downloading the images on a page, it visits each new link in turn and recurses until it reaches the maximum depth. A set holds the pages already crawled, so the same page is never visited twice. Source code:
import re
import time
import urllib.request
from threading import Semaphore
from bs4 import BeautifulSoup

screenLock = Semaphore(value=1)  # serialize console output
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11'}
main_url = 'http://www.chunmm.com'
num = 1
pages = set()  # pages already crawled, to avoid revisiting
pages.add(main_url)

def downloadimg(url, depth):
    if depth == 0:
        return
    print(depth)
    print(url)
    req = urllib.request.Request(url, headers=headers)
    html = urllib.request.urlopen(req).read().decode('utf-8')
    soup = BeautifulSoup(html, 'html.parser')
    imgurllist = soup.find_all('img', {'src': re.compile(r'http://.+\.jpg')})
    urllist = soup.find_all('a', {'href': re.compile(r'/.+?/.+?\.html')})
    local_path = 'd:/OOXXimg/'
    global num
    for item in imgurllist:
        print(item["src"])
        imgurl = item["src"]
        path = local_path + str(num) + '.jpg'
        urllib.request.urlretrieve(imgurl, path)
        num += 1
        screenLock.acquire()
        print(str(num) + ' img was downloaded\n')
        screenLock.release()

    for link in urllist:
        newurl = main_url + link["href"]
        if newurl not in pages:
            pages.add(newurl)
            downloadimg(newurl, depth - 1)
            time.sleep(1)  # be polite to the server

def main():
    downloadimg(main_url, 3)

if __name__ == '__main__':
    main()

Note: it is best to add exception handling around page access, so that a bad URL does not crash the whole program. This example recurses 3 levels deep and downloads more than 3,000 images in total.
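As a sketch of that exception handling (the helper name `fetch_html` and the choice of returning None on failure are my own, not part of the original program), a page fetch can be wrapped like this:

```python
import urllib.error
import urllib.request

headers = {'User-Agent': 'Mozilla/5.0'}

def fetch_html(url):
    """Fetch a page and return its decoded HTML, or None on failure."""
    try:
        req = urllib.request.Request(url, headers=headers)
        return urllib.request.urlopen(req).read().decode('utf-8')
    except (urllib.error.URLError, ValueError, UnicodeDecodeError):
        # Malformed URLs raise ValueError; network/HTTP failures
        # raise URLError (HTTPError is a subclass of it).
        return None

# A malformed URL no longer crashes the crawler:
print(fetch_html('not-a-valid-url'))  # → None
```

The crawler can then simply skip a page when `fetch_html` returns None and move on to the next link.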
Thanks for reading!
