Under the guidance of a senior classmate, I taught myself the requests, lxml, and selenium packages, and wrote a simple crawler as assigned:
scraping Baidu search results.
The main helper is the XPath Helper extension for Chrome, which makes it much easier to pin down the XPath of the information you want to extract.
You also need to understand the parameters of a Baidu search request first:
lm limits results to the last N days; it defaults to 0 (no limit), but only lm=1 seems to actually work.
rn is the number of results per page, 10 by default.
pn is the paging offset: 0 for the first page, 10 for the second, and so on, in steps of rn.
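As a sketch of how these parameters fit together, requests can assemble the query string from a dict via its params argument instead of hand-building the URL; the offset arithmetic for pn is my reading of how Baidu pages, not something officially documented:

import requests

# Hedged sketch: the same request as in the script below, with the parameters spelled out
params = {
    'wd': '腾讯视频优惠',  # search keyword
    'lm': 1,               # limit to the last 1 day (the only value that seems to work)
    'rn': 10,              # results per page (the default)
    'pn': 0,               # offset of the first result: 0, 10, 20, ... for pages 1, 2, 3, ...
}
response = requests.get('https://www.baidu.com/s', params=params)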
from lxml import etree
import requests
import json

# Pretend to be a regular Chrome browser; without a User-Agent Baidu serves a stripped-down page
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"
}

response = requests.get('https://www.baidu.com/s?wd=腾讯视频优惠&lm=1', headers=headers)
html = etree.HTML(response.text, etree.HTMLParser())

# XPaths located with the XPath Helper extension:
# the title sits in an h3, the snippet in .c-abstract, the source URL in a.c-showurl
r1 = html.xpath('//h3')                       # result titles
r2 = html.xpath('//*[@class="c-abstract"]')   # result snippets
r3 = html.xpath('//a[@class="c-showurl"]')    # result source URLs

for title, abstract, url in zip(r1, r2, r3):  # zip avoids an IndexError if fewer than 10 results come back
    # string(.) concatenates all the text nodes under the element
    r11 = title.xpath('string(.)')
    r22 = abstract.xpath('string(.)')
    r33 = url.xpath('string(.)')
    # with open('test.txt', 'a', encoding='utf-8') as f:
    #     f.write(json.dumps(r11, ensure_ascii=False) + '\n')
    #     f.write(json.dumps(r22, ensure_ascii=False) + '\n')
    #     f.write(json.dumps(r33, ensure_ascii=False) + '\n')
    print(r11)
    print(r22)
    print(r33)
    print()
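The commented-out lines write each field as a separate JSON string. A variant of my own (not from the original post) that keeps the three fields together, one JSON object per result, would replace those writes in the loop body:

# Inside the loop, instead of the three separate f.write calls:
with open('test.txt', 'a', encoding='utf-8') as f:
    record = {'title': r11, 'abstract': r22, 'url': r33}  # values from the loop above
    f.write(json.dumps(record, ensure_ascii=False) + '\n')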
This only scrapes each result's title, snippet, and source URL.
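To go beyond the first page, pn can be stepped in multiples of rn; a minimal sketch under that assumption, reusing the headers and XPaths from the script above:

from lxml import etree
import requests

headers = {"User-Agent": "Mozilla/5.0"}  # use the full UA string from the script above
for page in range(3):  # fetch the first three result pages
    params = {'wd': '腾讯视频优惠', 'rn': 10, 'pn': page * 10}
    response = requests.get('https://www.baidu.com/s', params=params, headers=headers)
    html = etree.HTML(response.text, etree.HTMLParser())
    titles = [h.xpath('string(.)') for h in html.xpath('//h3')]
    print(page + 1, titles)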
Original post: https://blog.csdn.net/legendary_Dragon/article/details/81412096
The scraped results are as follows: