Python web-scraping practice notes --- scraping QQ Zone posts (说说) with Selenium and saving them locally

For how to set up and use Selenium and PhantomJS, see Part 1 of this series: python爬虫实战笔记---selenium爬取QQ空间说说并存至本地(上)

Below, we formally start analyzing and scraping QQ Zone.
Site analysis:
This version takes my own QQ Zone as the starting point: Selenium drives PhantomJS, the feed is walked from top to bottom in chronological order, the text of posts (说说) published by friends is collected, and the results are grouped by friend name.
Open the QQ Zone home page and choose account/password login.
[Figure 1: the QQ Zone login page]
In Chrome DevTools, locate the source of the username and password input elements.
[Figure 2: the username/password input elements shown in Chrome DevTools]

Use Selenium to simulate the login:

from selenium import webdriver
import time

browser = webdriver.Chrome()
browser.get('https://user.qzone.qq.com')
user = ''      # fill in your QQ number
password = ''  # fill in your QQ password

# The login form lives inside an iframe, so switch into it first.
browser.switch_to.frame('login_frame')
# Click the tab that switches to account/password login.
log = browser.find_element_by_id("switcher_plogin")
log.click()
time.sleep(1)
# Fill in the account number and password, then submit.
username = browser.find_element_by_id('u')
username.send_keys(user)
ps = browser.find_element_by_id('p')
ps.send_keys(password)
btn = browser.find_element_by_id('login_button')
time.sleep(1)
btn.click()
time.sleep(2)
# Jump to our own QQ Zone home page after logging in.
browser.get('https://user.qzone.qq.com/{}'.format(user))
browser.quit()


OK, that gives us a working simulated login.
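The snippets in this post drive Chrome, while the introduction above mentions PhantomJS. With Selenium 2/3 (the versions from the PhantomJS era) the driver can be swapped in one line, assuming the phantomjs executable is on your PATH; newer Selenium releases dropped PhantomJS support, so use headless Chrome there instead. A minimal sketch:

from selenium import webdriver

# Headless PhantomJS instead of Chrome (Selenium 2/3 only; phantomjs must be on PATH).
browser = webdriver.PhantomJS()
browser.set_window_size(1366, 768)   # give the page a desktop-sized viewport
browser.get('https://user.qzone.qq.com')
print(browser.title)
browser.quit()

The rest of the login code is unchanged; only the webdriver constructor differs.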
Next, open the 说说 feed page and implement the scraper itself.
from selenium import webdriver
import time
from bs4 import BeautifulSoup

browser = webdriver.Chrome()
browser.get('https://user.qzone.qq.com')
user = '2413843731'
password = '960306feng'

# Log in exactly as above.
browser.switch_to.frame('login_frame')
log = browser.find_element_by_id("switcher_plogin")
log.click()
time.sleep(1)
username = browser.find_element_by_id('u')
username.send_keys(user)
ps = browser.find_element_by_id('p')
ps.send_keys(password)
btn = browser.find_element_by_id('login_button')
time.sleep(1)
btn.click()
time.sleep(2)
# Open our own QQ Zone home page.
browser.get('https://user.qzone.qq.com/{}'.format(user))

# Switch to the friends' feed tab.
friendAction = browser.find_element_by_id('tab_menu_friend')
friendAction.click()

time.sleep(5)
html = browser.page_source
soup = BeautifulSoup(html, "html.parser")
# Friend nicknames in the feed are <a> tags marked with data-clicklog="nick".
a = soup.find_all('a', attrs={'data-clicklog': 'nick'})
# Elements holding the post text; the 'f-info' class is an assumption -- verify it in Chrome DevTools.
b = soup.find_all('div', attrs={'class': 'f-info'})
for i in range(min(len(a), len(b))):
    text = b[i].get_text(strip=True) or '无法显示'   # fall back when the post text is empty
    print(a[i].get_text(strip=True) + '\n' + text)
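The loop above only prints whatever happens to be on the first screen. To match the goal stated at the beginning -- walk the feed top-down and save each friend's posts to a local file named after the friend -- one possible extension is sketched below. The scroll loop, the 'f-info' class name, and the qzone_feeds output directory are assumptions rather than confirmed selectors; check the actual feed markup in Chrome DevTools before relying on them, and note that zip() pairs nicknames and posts purely by position.

import os
import time
from bs4 import BeautifulSoup

def save_feeds_by_friend(browser, out_dir='qzone_feeds', scroll_times=5):
    """Scroll the feed to load more entries, then append each friend's
    post text to a file named after that friend. Selector names are
    assumptions -- verify them against the live page."""
    os.makedirs(out_dir, exist_ok=True)

    # Scroll down a few times to trigger lazy loading of older entries.
    for _ in range(scroll_times):
        browser.execute_script('window.scrollTo(0, document.body.scrollHeight);')
        time.sleep(2)

    soup = BeautifulSoup(browser.page_source, 'html.parser')
    names = soup.find_all('a', attrs={'data-clicklog': 'nick'})   # friend nickname links, as above
    posts = soup.find_all('div', attrs={'class': 'f-info'})       # assumed class for the post text

    for name_tag, post_tag in zip(names, posts):
        name = name_tag.get_text(strip=True) or 'unknown'
        text = post_tag.get_text(strip=True) or '无法显示'
        # Append each post to a per-friend text file.
        with open(os.path.join(out_dir, name + '.txt'), 'a', encoding='utf-8') as f:
            f.write(text + '\n')

Call save_feeds_by_friend(browser) right after the time.sleep(5) above, in place of the print loop, and remember to call browser.quit() when you are done.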