Fetching web page source code with python+selenium

step1. Install selenium: pip3 install selenium
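To confirm the install worked, you can print the library version from Python (a minimal check; nothing beyond the selenium package itself is assumed):

import selenium  # should import without error after pip3 install selenium
print(selenium.__version__)  # prints the installed selenium version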

step2: Install the Firefox or Chrome browser

step3: Download the Firefox driver (https://github.com/mozilla/geckodriver/releases) or the Chrome driver (https://sites.google.com/a/chromium.org/chromedriver/ or http://npm.taobao.org/mirrors/chromedriver/), and put the driver in the Python install directory (i.e. the same folder as python.exe)
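If you prefer to keep the driver somewhere else, you can either add that folder to PATH or pass the full path via executable_path, as the script below does. A quick sketch to check whether the driver is already discoverable (uses the standard-library shutil.which; the file names geckodriver/chromedriver are the usual defaults and are an assumption here):

# Check whether geckodriver / chromedriver can be found on PATH
import shutil

for name in ('geckodriver', 'chromedriver'):
    path = shutil.which(name)
    print(name, '->', path if path else 'not found on PATH')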


As an example, the following script fetches the bank data for the Zhejiang / Hangzhou region from the Guyitai (股驿台) site (http://www.guyitai.net/stock/bankdata/):

from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions
from selenium.common.exceptions import NoSuchElementException
import codecs
import time

# driver = webdriver.Chrome()  # use Chrome instead of Firefox if preferred
driver = webdriver.Firefox(executable_path=r'C:\Users\zk\AppData\Local\Programs\Python\Python36\geckodriver')
driver.get("http://www.guyitai.net/stock/bankdata/")

e1 = driver.find_element_by_xpath("//a[@value='0002']")  # select region: Zhejiang
e1.click()
e2 = driver.find_element_by_xpath("//a[@value='0002-0001']")  # select city: Hangzhou
WebDriverWait(driver, 10).until(expected_conditions.visibility_of(e2))  # wait for the city link to become visible
e2.click()

count = 0
while True:
    count += 1
    page = driver.page_source
    with codecs.open('page_' + str(count) + '.txt', 'w', encoding='utf-8') as f:  # save the page source
        f.write(page)
    try:
        clickbutton = driver.find_element_by_link_text('下一页')  # the "next page" link
        clickbutton.click()
        time.sleep(1)  # crude pause so the next page can render before its source is read
    except NoSuchElementException:  # no "next page" link left: last page reached
        break
driver.close()  # close the browser

# The saved source files can then be parsed with BeautifulSoup to extract the HTML data
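As a rough sketch of that step (assuming BeautifulSoup 4 is installed via pip3 install beautifulsoup4; the table markup of the saved pages is an assumption, so adjust the selectors to the actual HTML):

# Parse one of the saved source files and print any table rows it contains
import codecs
from bs4 import BeautifulSoup

with codecs.open('page_1.txt', 'r', encoding='utf-8') as f:
    soup = BeautifulSoup(f.read(), 'html.parser')

for row in soup.find_all('tr'):  # the <tr>/<td> layout is assumed, not verified
    cells = [cell.get_text(strip=True) for cell in row.find_all(['td', 'th'])]
    if cells:
        print(cells)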

 
  
