A first try at Python scraping: requests + BeautifulSoup

For work I need to scrape Sina Weibo data. I had been doing this in Java, but the page encryption there was painful, so I switched to Python. As a warm-up, I'm using Qiushibaike to try out how scrapers are written in Python.

Tools
requests
BeautifulSoup
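
Both install with pip (lxml being the parser handed to BeautifulSoup in the code below):

pip install requests beautifulsoup4 lxml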

Tool references
Python Scraping Tools, Part 1: Usage of the Requests Library
Python Scraping Tools, Part 2: Usage of Beautiful Soup

There is also PyQuery, which is supposedly quite good, but when I tried it I found it painful to use: it gets confused as soon as a class attribute contains spaces. In Java I had always parsed pages with Jsoup, which felt natural, and BeautifulSoup is the closest match to that. Enough talk, let's get started!
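
For what it's worth, BeautifulSoup handles multi-class attributes in two ways: a class_ string containing spaces is matched against the full class attribute exactly, while select() takes a CSS selector that matches individual classes in any order. A minimal sketch (the HTML snippet is made up for illustration):

from bs4 import BeautifulSoup

# a made-up snippet mimicking Qiushibaike's multi-class markup
html = '<div class="article block untagged mb15">hi</div>'
doc = BeautifulSoup(html, "lxml")

# class_ with spaces matches the exact class attribute value
print(doc.find_all(class_="article block untagged mb15"))

# a CSS selector matches by individual classes, in any order
print(doc.select("div.article.block"))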

Page structure

[Figure 1: screenshot of the Qiushibaike page structure]

Code

import requests
from bs4 import BeautifulSoup

page = 1
rooturl = 'http://www.qiushibaike.com/hot/page/' + str(page)

# query parameters can also be passed as a dict instead of building the URL by hand:
# payload = {'key1': 'value1', 'key2': 'value2'}
# r = requests.get(rooturl, params=payload)
pageReq = requests.get(rooturl)

pageString = pageReq.text

doc = BeautifulSoup(pageString, "lxml")

# all posts live inside the #content-left container
parents = doc.find('div', id='content-left')

# recursive=False: only look at direct children of the container
for elem in parents.find_all(class_="article block untagged mb15", recursive=False):
    # anonymous posts have only one <a> in the author block, so check before indexing
    authorName = ""
    if len(elem.find(class_="author clearfix").select('a')) == 2:
        authorName = elem.find(class_="author clearfix").select('a')[1]['title']
    content = elem.find(class_="content").get_text().strip()
    # the two <i class="number"> elements hold the vote count and the comment count
    num_laugh = elem.find_all("i", class_="number")[0].get_text()
    num_comments = elem.find_all("i", class_="number")[1].get_text()
    print("author: " + authorName + "\n" + "content: " + content + "\n" + num_laugh + " " + num_comments)
    print("***************************************************")
# target = soup.select('#content-left > .article block untagged mb15')

Output

[Figure 2: sample output screenshot]
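
To take this further, two things usually matter in practice: Qiushibaike, like many sites, may reject requests that don't carry a browser-like User-Agent, and you normally want more than one page. A minimal sketch of both, assuming the page structure above holds on every page (the User-Agent string is just an illustrative browser value):

import requests
from bs4 import BeautifulSoup

# assumption: a browser-like User-Agent; the exact string is illustrative
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

for page in range(1, 4):  # pages 1-3 as an example
    url = 'http://www.qiushibaike.com/hot/page/' + str(page)
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()  # fail loudly on HTTP errors
    doc = BeautifulSoup(resp.text, "lxml")
    parents = doc.find('div', id='content-left')
    if parents is None:  # blocked, or the layout changed
        break
    for elem in parents.find_all(class_="article block untagged mb15", recursive=False):
        print(elem.find(class_="content").get_text().strip())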
