Lessons learned from scraping web pages with Python

First, the essentials.

Once Python is set up, run the following commands in cmd:

pip install lxml

pip install beautifulsoup4

pip install html5lib

pip install requests
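Before moving on, it's worth confirming that the parsers actually installed. A minimal sanity-check sketch (the HTML fragment here is made up for illustration):

```python
from bs4 import BeautifulSoup

# A tiny fragment shaped like a 51job title cell (invented example)
html = "<p class='t1'><a>Software Engineer</a></p>"

# All three parsers should produce the same result on well-formed input
for parser in ("lxml", "html5lib", "html.parser"):
    soup = BeautifulSoup(html, parser)
    print(parser, soup.find("p", class_="t1").a.text)
```

If any of the three lines raises `FeatureNotFound`, the corresponding `pip install` did not succeed.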

Next, the Python code, which scrapes 51job (前程无忧):

import csv

import requests
from bs4 import BeautifulSoup

# Search-results URL for "软件开发工程师" on 51job; the query parameters are URL-encoded
url = "https://search.51job.com/list/030200%252C040000,000000,0000,00,9,99,%25E8%25BD%25AF%25E4%25BB%25B6%25E5%25BC%2580%25E5%258F%2591%25E5%25B7%25A5%25E7%25A8%258B%25E5%25B8%2588,2,21.html?lang=c&stype=1&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&lonlat=0%2C0&radius=-1&ord_field=0&confirmdate=9&fromType=&dibiaoid=0&address=&line=&specialarea=00&from=&welfare="

r = requests.get(url)
soup = BeautifulSoup(r.content, "lxml")

# The first "el title" div is the table's header row; the job rows are its siblings
rows = soup.find("div", {"id": "resultList"}).find("div", {"class": "el title"}).next_siblings

with open("neituiWeb2.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for item in rows:
        try:
            t1 = item.find("p", class_="t1").a.text.strip()  # job title
            t2 = item.find("span", class_="t2").text         # company
            t3 = item.find("span", class_="t3").text         # city
            t4 = item.find("span", class_="t4").text         # salary
            t5 = item.find("span", class_="t5").text         # posting date
            writer.writerow([t1, t2, t3, t4, t5])
        except (AttributeError, TypeError):
            # next_siblings also yields NavigableStrings (whitespace between
            # rows) that have no usable find(), and some rows lack a salary
            # span; skip both instead of silently swallowing every error.
            continue
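One small follow-up: the scraper appends rows with no header, so the CSV is not self-describing. A sketch of writing a header row first and reading the file back to verify it (the filename `jobs_demo.csv` and the column names are my assumptions, not from the original script):

```python
import csv

# Write a header row followed by one invented data row
with open("jobs_demo.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "company", "city", "salary", "date"])
    writer.writerow(["Software Engineer", "Some Co", "Guangzhou", "1-1.5万/月", "07-01"])

# Read it back to confirm the layout
with open("jobs_demo.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))
print(rows[0])  # the header row
```

`encoding="utf-8"` matters on Windows, where `open()` otherwise defaults to the system code page and Chinese company names can fail to round-trip.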

Finally, my takeaways: first use find to locate a single item and confirm the selectors work, then switch to find_all plus a for loop to walk through all of them.

Also, the find("table", {"class": "giftList"}) attrs-dict form can cause a lot of problems — if you don't believe it, try rewriting find("p", class_='t1') in that style and see what happens.
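Both takeaways can be shown on a toy page (the class names mirror 51job's, but the HTML below is made up):

```python
from bs4 import BeautifulSoup

html = """
<div id="resultList">
  <div class="el title">header</div>
  <div class="el"><p class="t1"><a> Job A </a></p><span class="t2">Co A</span></div>
  <div class="el"><p class="t1"><a> Job B </a></p><span class="t2">Co B</span></div>
</div>
"""
soup = BeautifulSoup(html, "lxml")

# Step 1: find() a single item to confirm the selector works
first = soup.find("div", {"id": "resultList"}).find("p", class_="t1")
print(first.a.text.strip())

# Step 2: find_all() plus a for loop to collect every row
for p in soup.find_all("p", class_="t1"):
    print(p.a.text.strip())

# The multi-class gotcha: class_="el title" must match the attribute string
# exactly, so the reversed order finds nothing
print(soup.find("div", class_="el title") is not None)  # matching order
print(soup.find("div", class_="title el") is None)      # reversed order
```

This order-sensitivity of multi-valued class matching is one concrete reason the attrs-dict / exact-string form is fragile compared with searching by a single class such as class_="t1".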

 
 
