Building a Bilibili Uploader Album Crawler

No guarantee is made that every desired image will be crawled.

Step one: crawl the uploader's album page and save the URLs of all image posts to a text file.

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By
import time

url = 'https://space.bilibili.com/177023891/album'
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
driver = webdriver.Chrome(options=chrome_options)
#driver = webdriver.Chrome()
driver.get(url)
try:
    for i in range(1, 9):  # this uploader's album has 8 pages
        # wait until the image-post links on the current page have loaded
        WebDriverWait(driver, 20, 0.5).until(EC.presence_of_all_elements_located((By.CLASS_NAME, 'picture')))
        pictures = driver.find_elements(By.CLASS_NAME, 'picture')
        with open('beyond.txt', 'a') as f:
            for picture in pictures:
                f.write(picture.get_attribute('href') + '\n')
        # find the pager button for the next page and click it
        pages = driver.find_elements(By.CLASS_NAME, 'panigation')  # class name as it appears on the page
        for page in pages:
            if page.text == str(i + 1):
                break
        # on the last page no button matches, so this click raises and ends the run (see note below)
        ActionChains(driver).move_to_element(page).click(page).perform()
        time.sleep(5)  # give the next page time to load
finally:
    driver.close()

When the crawler reaches the last album page there is no page-9 button, so the click lands on the wrong element and raises an error; the program is expected to terminate at that point.
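Relying on that error is fragile. A cleaner variant (just a sketch, keeping the same class names as above) breaks out of the loop explicitly when no button for the next page exists, replacing the pager-click section inside the for-i loop:

        # replaces the pager-click section at the bottom of the for-i loop
        next_button = None
        for page in driver.find_elements(By.CLASS_NAME, 'panigation'):
            if page.text == str(i + 1):
                next_button = page
                break
        if next_button is None:  # no next-page button: we are on the last page
            break
        ActionChains(driver).move_to_element(next_button).click(next_button).perform()
        time.sleep(5)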

Next, read the image-post URLs from the text file one by one, visit each page, and download its images.

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from tellwlib.py import download
import os

os.makedirs('beyond', exist_ok=True)  # make sure the output directory exists
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
with open('beyond.txt', 'r') as f:
    contents = f.readlines()
for idx, content in enumerate(contents):
    print('processing %dth link %s' % (idx + 1, content.strip()))
    driver = webdriver.Chrome(options=chrome_options)
    driver.get(content.strip())
    try:
        # wait until the image container on the post page has loaded
        WebDriverWait(driver, 20, 0.5).until(EC.presence_of_all_elements_located((By.CLASS_NAME, 'images')))
        pics = driver.find_elements(By.CSS_SELECTOR, '.images > img')
        for pic in pics:
            picurl = pic.get_attribute('src')
            # download_file is the author's own helper from tellwlib
            download.download_file(picurl, 'beyond/' + picurl.split('/')[-1])
    finally:
        driver.close()
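tellwlib is the author's personal library, so download.download_file may not be available to readers. A minimal stand-in with the same (assumed) signature, using requests:

import requests

def download_file(url, path):
    """Hypothetical stand-in for tellwlib's helper: stream url to path."""
    resp = requests.get(url, stream=True, timeout=30)
    resp.raise_for_status()
    with open(path, 'wb') as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)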

For now I haven't managed to build a singleton class (or class variable) for Chrome, so launching a fresh browser for every link is somewhat resource-hungry; I hope to improve on this in the future.
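A singleton isn't strictly necessary, though: simply moving driver creation outside the loop and reusing one instance already avoids the per-link startup cost. A sketch, under the same assumptions as the script above:

driver = webdriver.Chrome(options=chrome_options)
try:
    for idx, content in enumerate(contents):
        driver.get(content.strip())
        WebDriverWait(driver, 20, 0.5).until(EC.presence_of_all_elements_located((By.CLASS_NAME, 'images')))
        for pic in driver.find_elements(By.CSS_SELECTOR, '.images > img'):
            picurl = pic.get_attribute('src')
            download.download_file(picurl, 'beyond/' + picurl.split('/')[-1])
finally:
    driver.quit()  # quit() shuts the browser down completely, unlike close()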

Reference link:

Python+Selenium+ChromeDriver之浏览器爬虫入门
