My Python course project these past couple of days included a web-scraping assignment that asked us to scrape Taobao, so today let's walk through scraping product listings from Taobao!
Searching Taobao for 羽绒服 (down jackets) and flipping through the first three result pages gives URLs like these:

page 1: https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20201220&ie=utf8
page 2: https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20201220&ie=utf8&bcoffset=3&ntoffset=3&p4ppushleft=1%2C48&s=44
page 3: https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20201220&ie=utf8&bcoffset=0&ntoffset=6&p4ppushleft=1%2C48&s=88
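A quick offline check with urllib confirms that only the s parameter differs between pages (URLs trimmed here to the parameters that matter):

```python
from urllib import parse

# Page 2 and page 3 of the same search, trimmed to the relevant parameters
urls = [
    'https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&s=44',
    'https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&s=88',
]
for u in urls:
    qs = parse.parse_qs(parse.urlsplit(u).query)
    # parse_qs percent-decodes, so q comes back as the Chinese keyword;
    # s grows by 44 per page
    print(qs['q'][0], qs['s'][0])  # → 羽绒服 44, then 羽绒服 88
```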
Strip away the noise and the only parameters that matter are:

https://s.taobao.com/search?
+ q=<URL-encoded keyword>
+ &s=(page number - 1) × 44
URL encoding is just urllib.parse.quote('string'). Let's start with 20 pages:

key = '手套'
key = parse.quote(key)
url = 'https://s.taobao.com/search?q={}&s={}'
page = 20
for i in range(page):
    url_page = url.format(key, i * 44)
    print(url_page)
Then, when we build a headers dict the normal way and fetch the page with get(), you'll find that, whoops, it blows up: the response content is wrong. So what now? What about the assignment? "Programming by CSDN" is not just a joke, and that's how I learned that scraping Taobao requires a "fake login": you have to capture the full headers from a logged-in browser session, because a User-Agent alone is definitely not enough. Once you have them, just pass the headers to requests.get(url, headers=headers).

So how do you get those headers? Right-click the page and open the browser's developer tools, switch to the Network tab, refresh with Ctrl+R, find the request starting with search? under All, right-click it, and choose Copy as cURL (bash). Then open https://curl.trillworks.com/, paste it in, and copy the headers straight out of the generated Python requests code on the right. Request again with those headers and you're done!
title = re.findall(r'"raw_title":"(.*?)"', response)
nick = re.findall(r'"nick":"(.*?)"', response)
item_loc = re.findall(r'"item_loc":"(.*?)"', response)
price = re.findall(r'"view_price":"(.*?)"', response)
sales = re.findall(r'"view_sales":"(.*?)"', response)
Each re.findall call returns a list of matched strings, one element per item on the page.
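For example, run against a hand-made stand-in for the page source (the real page embeds item data like this inside a script block), each findall returns one list per field:

```python
import re

# Miniature stand-in for the search-page source (sample values made up)
response = '"raw_title":"保暖针织手套","view_price":"208.00","raw_title":"羊皮手套","view_price":"99.00"'

title = re.findall(r'"raw_title":"(.*?)"', response)
price = re.findall(r'"view_price":"(.*?)"', response)
print(title)  # → ['保暖针织手套', '羊皮手套']
print(price)  # → ['208.00', '99.00']
```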
from urllib import parse
import requests
import re
import time
import csv
import os

def get_response(url):
    # Headers captured from a logged-in browser session via "Copy as cURL";
    # the cookie has to be your own, or the page won't return real results
    headers = {
        'authority': 's.taobao.com',
        'cache-control': 'max-age=0',
        'upgrade-insecure-requests': '1',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36',
        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'sec-fetch-site': 'same-origin',
        'sec-fetch-mode': 'navigate',
        'sec-fetch-user': '?1',
        'sec-fetch-dest': 'document',
        'accept-language': 'zh-CN,zh;q=0.9',
        'cookie': 'cna=jKMMGOupxlMCAWpbGwO3zyh4; tracknick=tb311115932; _cc_=URm48syIZQ%3D%3D; thw=cn; hng=CN%7Czh-CN%7CCNY%7C156; miid=935759921262504718; t=bd88fe30e6685a4312aa896a54838a7e; sgcookie=E100kQv1bRHxrwnulL8HT5z2wacaf40qkSLYMR8tOCmVIjE%2FxrR5nzhju3UySug2dFrigMAy3v%2FjkNElYj%2BDcqmgdA%3D%3D; uc3=nk2=F5RGNwnC%2FkUVLHU%3D&vt3=F8dCuf2OXoGHiuEl2D8%3D&id2=VyyUy7sStBYaoA%3D%3D&lg2=U%2BGCWk%2F75gdr5Q%3D%3D; lgc=tb311115932; uc4=nk4=0%40FY4NAq0PgYBeuIHFyHE%2F9QSZnG6juw%3D%3D&id4=0%40VXtbYhfspVba1o0MN1OuNaxcY%2BUP; enc=tJQ9f26IYMQmwsNzfEZi6fJNcflLvL6bdcU4yyus3rqfsM37Mpy1jvcSMZ%2BYSaE5vziMtC9svi%2B4JVMfCnIsWA%3D%3D; _samesite_flag_=true; cookie2=112f2a76112f88f183403c6a3c4b721f; _tb_token_=eeeb18eb59e1; tk_trace=oTRxOWSBNwn9dPyorMJE%2FoPdY8zfvmw%2Fq5v3iwJfzrr80CDMiLUbZX4jcwHeizGatsFqHolN1SmeHD692%2BvAq7YJ%2FbITqs68WMjdAhcxP7WLdArSe8thnE40E0eWE4GQTvQP9j5XSLFbjZAE7XgwagUcgW%2Fg6rXAuZaws1NrrZksnq%2BsYQUb%2FHT%2Fa1m%2Fctub0jBbjlmp8ZDJGSpGyPMgg561G3vjIRPVnkhRCyG9GgwteJUZAsyQIkeh7xtdyN%2BF50TIambWylXMZhQW7LQGZ48rHl3Q; lLtC1_=1; v=0; mt=ci=-1_0; _m_h5_tk=b0940eb947e1d7b861c7715aa847bfc7_1608386181566; _m_h5_tk_enc=6a732872976b4415231b3a5270e90d9c; xlly_s=1; alitrackid=www.taobao.com; lastalitrackid=www.taobao.com; JSESSIONID=136875559FEC7BCA3591450E7EE11104; uc1=cookie14=Uoe0ZebpXxPftA%3D%3D; tfstk=cgSFBiAIAkEUdZx7kHtrPz1rd-xdZBAkGcJ2-atXaR-zGpLhi7lJIRGJQLRYjef..; l=eBI8YSBIOXAWZRYCBOfaourza779sIRYSuPzaNbMiOCP9_fp5rvCWZJUVfT9CnGVh6SBR3-wPvUJBeYBqnY4n5U62j-la_Dmn; isg=BAsLX3b80AwyYAwAj8PO7RC0mq_1oB8iDqsYtX0I5sqhnCv-BXFHcGI-cpxyuXca',
    }
    response = requests.get(url, headers=headers).content.decode('utf-8')
    # Sample fields as they appear in the page source:
    # "raw_title":"卡蒙手套女2020秋冬季新款运动保暖护手休闲针织触屏防寒羊皮手套"
    # "view_price":"208.00"
    # "nick":"intersport旗舰店"
    # "item_loc":"江苏 连云港"
    # "view_sales":"0人付款"
    title = re.findall(r'"raw_title":"(.*?)"', response)
    nick = re.findall(r'"nick":"(.*?)"', response)[:-1]  # drop the trailing extra "nick" match
    item_loc = re.findall(r'"item_loc":"(.*?)"', response)
    price = re.findall(r'"view_price":"(.*?)"', response)
    sales = re.findall(r'"view_sales":"(.*?)"', response)
    return [title, nick, item_loc, price, sales]
def tocsv(file, filename):
    # newline='' stops csv.writer from inserting blank rows on Windows
    with open(filename, 'a+', encoding='utf-8', newline='') as f:
        f.seek(0)
        write = csv.writer(f)
        if f.read() == '':  # empty file: write the header row first
            write.writerow(('标题', '店铺', '地点', '价格', '付款人数'))
        for i in range(len(file[0])):
            write.writerow((file[0][i], file[1][i], file[2][i], file[3][i], file[4][i]))
if __name__ == '__main__':
    filename = 'taobao.csv'
    key = '手套'
    key = parse.quote(key)  # URL-encode the keyword
    url = 'https://s.taobao.com/search?q={}&s={}'
    page = 20
    if os.path.exists(filename):  # start each run from a clean file
        os.remove(filename)
    for i in range(page):
        url_page = url.format(key, i * 44)  # s = (page - 1) * 44
        print(url_page)
        res = get_response(url_page)
        time.sleep(1)  # throttle to roughly one request per second
        tocsv(res, filename=filename)
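get_response returns five parallel column lists that line up by index, and tocsv writes them out row by row. The same transpose can be done with zip; here is a minimal offline sketch using an in-memory buffer instead of a file (the sample values are made up):

```python
import csv
import io

# Parallel column lists, shaped like get_response's return value (sample data)
cols = [
    ['手套A', '手套B'],        # titles
    ['店铺1', '店铺2'],        # shops
    ['江苏 连云港', '上海'],    # locations
    ['208.00', '99.00'],       # prices
    ['0人付款', '35人付款'],    # sales
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(('标题', '店铺', '地点', '价格', '付款人数'))
for row in zip(*cols):  # zip(*cols) transposes columns into rows
    writer.writerow(row)

print(buf.getvalue().splitlines()[1])  # → 手套A,店铺1,江苏 连云港,208.00,0人付款
```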
But after too many requests, Taobao stopped serving me, so I figured I'd try selenium. I got past the login, but still couldn't get past the captcha.

And then it just looked like this!
At that point, the only options left are paying for proxy IPs or defeating the slider captcha programmatically. Proxies are expensive and the slider is hard.
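For reference, requests takes a proxy through its proxies argument. A hedged sketch (the address below is made up; a dead proxy simply raises a connection error):

```python
import requests

# Hypothetical paid-proxy endpoint -- replace with one from your provider
proxies = {
    'http': 'http://123.45.67.89:8888',
    'https': 'http://123.45.67.89:8888',
}

def fetch_via_proxy(url):
    """Fetch url through the proxy; return None if the proxy is dead."""
    try:
        return requests.get(url, proxies=proxies, timeout=5).text
    except requests.RequestException:
        return None
```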
I tested free proxy pools myself, and they don't work. Sad.

Ahem, a recent update: I found that after two days it worked again, so it's only a temporary block. No big deal; when you hit the captcha wall, just wait a few days.