A First Try at Web Scraping | Crawling Article URLs from bitauto.com

Target site: news.bitauto.com/

Since the "load more" button on the recommendation feed is inconvenient to automate,

we scrape a single channel page instead, e.g. the new-car (xinche) channel.

[Screenshot 1: the new-car channel page]

Right-click the page and choose Inspect,

then locate the target element:

[Screenshot 2: the article link element in DevTools]

/html/body/div[3]/div/div[1]/div[3]/div/div/h2/a  (the XPath Helper browser extension is recommended; it lets you copy an element's XPath directly)
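Before wiring an XPath into the crawler, it can be sanity-checked offline. The sketch below uses only the standard library; ElementTree supports a limited XPath subset, so the descendant step `//a` is written as the direct path `h2/a`, and the HTML fragment is a made-up stand-in for the real page markup.

```python
# Offline sanity check of the XPath idea (hypothetical markup,
# simplified from the page structure shown in the screenshots).
import xml.etree.ElementTree as ET

fragment = """
<body>
  <div class="article-card horizon">
    <h2><a href="http://news.bitauto.com/xinche/123456.html">demo title</a></h2>
  </div>
</body>
"""
root = ET.fromstring(fragment)
# ElementTree's XPath subset: descendant search + attribute predicate.
links = [a.get("href") for a in
         root.findall('.//div[@class="article-card horizon"]/h2/a')]
print(links)  # ['http://news.bitauto.com/xinche/123456.html']
```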

# coding: utf-8
from selenium import webdriver

f = open("url6.txt", "w", encoding="utf-8")
# Raw string avoids backslash-escape problems in the Windows path.
driver = webdriver.Chrome(r'C:\Program Files (x86)\Google\Chrome\Application\chromedriver')


def geturl(url, k):
    driver.get(url)
    # The target URL appears in several places on the page; pick whichever element is easiest to locate.
    urls = driver.find_elements_by_xpath('//div[@class="article-card horizon"]//a')
    url_list = []
    for a in urls:
        u = a.get_attribute('href')
        if u is None:  # get_attribute returns None (not the string 'None') when the attribute is missing
            continue
        url_list.append(u)
    url_list = list(set(url_list))  # de-duplicate
    # print(url_list)
    for new_url in url_list:
        if len(new_url) < 2:
            continue
        if new_url.endswith('l'):  # keep only article pages (URLs ending in .html)
            print(new_url)
            f.write(new_url + "\n")


if __name__ == '__main__':
    # url = 'http://news.bitauto.com/xinche/'
    a_list = [("xinche", 4786)]
    for t, am in a_list:
        url = "http://news.bitauto.com/" + t + "/?pageindex="
        k = len(t)
        for i in range(1, am):
            new_url = url + str(i)
            print(t, " page:", i)
            geturl(new_url, k)
    f.close()
    driver.quit()
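The de-duplicate-and-filter step inside geturl() can be pulled out as a pure function and tested without a browser. This is a sketch under my own naming (pick_article_urls is not from the script), and it uses an explicit endswith(".html") check in place of the last-character test on 'l':

```python
# Dedup hrefs and keep only article links ending in .html
# (hypothetical helper extracted from geturl's filtering logic).
def pick_article_urls(hrefs):
    unique = set(h for h in hrefs if h)  # drop None and empty strings
    return sorted(u for u in unique if u.endswith(".html"))

sample = [
    "http://news.bitauto.com/xinche/123456.html",
    "http://news.bitauto.com/xinche/123456.html",  # duplicate
    "http://news.bitauto.com/xinche/",             # channel page, skipped
    None,                                          # missing href attribute
]
print(pick_article_urls(sample))  # ['http://news.bitauto.com/xinche/123456.html']
```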
