Scraping the 中国经济社会发展数据库 (China Economic and Social Development Database) with Selenium (2): optimizing scraping speed and fixing StaleElementReferenceException

**Please give this post a like!!!**

The scraping test code from the previous version is here:
First version of the scraping code

Some testing of the previous version turned up two main problems:

  1. Using time.sleep() as a fixed wait made the scrape unnecessarily slow, and external factors such as network speed could still cause errors to be thrown (see the explicit-wait sketch after this list).

  2. When scraping a large amount of data, a StaleElementReferenceException is thrown.
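The general fix for the first problem is Selenium's explicit waits, which block only until a condition is met instead of sleeping for a fixed interval. A minimal sketch of the pattern, assuming a `browser` WebDriver instance is already open and using the same AdvancedSearch button that the function below waits on:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Fixed wait: always sleeps the full 10 seconds, even when the page is ready sooner.
# time.sleep(10)

# Explicit wait: polls until the condition holds (here, the search button is
# clickable) and returns as soon as it does, up to a 40-second timeout.
wait = WebDriverWait(browser, 40)
button = wait.until(EC.element_to_be_clickable((By.ID, "AdvancedSearch")))
```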

The code has been optimized as follows:

import pandas as pd
from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def find_table(indicator, region, year):
    # Explicit wait of up to 40 seconds; NoSuchElementException and
    # StaleElementReferenceException are ignored while polling, so a
    # transient re-render does not abort the wait.
    wait = WebDriverWait(
        browser,
        40,
        ignored_exceptions=(NoSuchElementException, StaleElementReferenceException),
    )
    # Fill in the advanced-search form.
    browser.find_element_by_name("IndicateName").clear()
    browser.find_element_by_name("IndicateName").send_keys(indicator)
    browser.find_element_by_name("IndicateRegion").clear()
    browser.find_element_by_name("IndicateRegion").send_keys(region)
    browser.find_element_by_name("EndYear").send_keys(year)
    # Click the search button through JavaScript; the element is passed to the
    # script as arguments[0].
    element = browser.find_element_by_id("AdvancedSearch")
    browser.execute_script("arguments[0].click();", element)

    xpath = "/html/body/div[1]/div[3]/div[1]/div/div[2]/table/tbody/tr"
    # Wait until the search button is clickable again, i.e. the results have loaded.
    results = wait.until(EC.element_to_be_clickable((By.ID, "AdvancedSearch")))

    try:
        table = results.find_elements_by_xpath(xpath)
        ls = []
        for tr in table:
            tds = tr.find_elements_by_tag_name("td")
            ls.append([td.text for td in tds])
    except StaleElementReferenceException:
        # The rows went stale while being read; re-locate them and read again.
        table = results.find_elements_by_xpath(xpath)
        ls = []
        for tr in table:
            tds = tr.find_elements_by_tag_name("td")
            ls.append([td.text for td in tds])
    # Keep columns 2-6 of the results table and keep only the rows whose
    # indicator name (column 4) exactly matches the query.
    table_df = pd.DataFrame(ls).iloc[:, 2:7]
    table_df_screened = table_df[table_df[4] == indicator]
    return table_df_screened
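The two branches of the try/except above duplicate the row-reading loop. The same retry can be written once with a small helper; this is only a sketch, and collect_rows is a hypothetical name rather than part of the original script:

```python
from selenium.common.exceptions import StaleElementReferenceException


def collect_rows(results, xpath, retries=3):
    # Re-locate the rows and re-read them if the table goes stale mid-read.
    last_error = None
    for _ in range(retries):
        try:
            rows = results.find_elements_by_xpath(xpath)
            return [
                [td.text for td in tr.find_elements_by_tag_name("td")]
                for tr in rows
            ]
        except StaleElementReferenceException as exc:
            last_error = exc
    raise last_error
```

With a helper like this, find_table would build its row list in a single call, and the number of retries becomes configurable instead of being fixed at one.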

The try/except block handles the StaleElementReferenceException, and WebDriverWait replaces the fixed time.sleep() delays, so the script only waits as long as the page actually needs.
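A minimal usage sketch, assuming `browser` has already been navigated to the database's advanced-search page as in the first post; the indicator, region, and year values here are placeholders, not real query parameters:

```python
import pandas as pd

# Placeholder query parameters: substitute the real indicator/region/year lists.
indicators = ["indicator A", "indicator B"]
regions = ["region A", "region B"]
year = "2020"

frames = []
for indicator in indicators:
    for region in regions:
        frames.append(find_table(indicator, region, year))

result = pd.concat(frames, ignore_index=True)
result.to_csv("tables.csv", index=False, encoding="utf-8-sig")
```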
