Whether learning from videos or following deep-learning tutorials, the datasets were always ready-made. Building a toilet-recognition program from scratch means collecting the data myself.
Fortunately, the growth of the internet, and of e-commerce sites in particular, has accumulated a great deal of product data.
Review images for a given brand of toilet can be scraped from Tmall and JD.com with Python and used as the dataset.
A Baidu search turns up plenty of code for scraping review images; the Tmall scraper in this article is based on the code from the linked post "Python爬虫(6)——获取天猫商品评论信息" (Python Crawler (6): Getting Tmall Product Review Information).
The tricky part of fetching review images is constructing the right URL. The linked post gives the full method; a summary follows.
Here is an example URL:
https://rate.tmall.com/list_detail_rate.htm?itemId=45492997665&spuId=64652363&sellerId=667286523&order=3&currentPage=1&append=0&content=1
The prefix https://rate.tmall.com/list_detail_rate.htm? is fixed.
itemId can be read off the product detail page's URL. Below is the URL for the 11170 toilet; id=45492997665 is the itemId:
https://detail.tmall.com/item.htm?spm=a1z10.5-b-s.w4011-601288098.32.218191f88EYy6o&id=45492997665&rn=c00b3253858596ec80a7c4e9431e2848&abbucket=9
spuId is the shop's id and sellerId is the shop owner's id; both can be found in the source of the detail page above,
where the shopId field is the spuId.
currentPage is the current page number; stepping through its values retrieves the review images from every page.
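As a sanity check on the parameters above, the same query string can be assembled with `urllib.parse.urlencode` from the standard library. This is a minimal sketch using the example ids from this page; the value `append=0` is an assumption about the default, not something confirmed by the Tmall API docs.

```python
from urllib.parse import urlencode

base = 'https://rate.tmall.com/list_detail_rate.htm'
params = {
    'itemId': '45492997665',   # from the product detail page URL
    'spuId': '64652363',       # the shopId found in the page source
    'sellerId': '667286523',   # the shop owner's id
    'order': '3',
    'currentPage': '1',        # step this value to walk the pages
    'append': '0',             # assumed default value
    'content': '1',
}
url = base + '?' + urlencode(params)
print(url)
```

Building the URL from a dict avoids the easy-to-miss missing `&` when concatenating fragments by hand.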
The other parameters can be left at their defaults. The source code:
import requests
import json
import urllib.request

url_s = 'https://rate.tmall.com/list_detail_rate.htm?'
url_itemId = 'itemId=45492997665&'
url_spuId = 'spuId=64652363&'
url_sellerId = 'sellerId=667286523&'
url_order = 'order=3&'
url_append = 'append=0&'
count = 0
for pages in range(0, 99):
    url_currentPage = 'currentPage=' + str(pages + 1) + '&'
    url = url_s + url_itemId + url_spuId + url_sellerId + url_order + url_currentPage + url_append + 'content=1'
    req = requests.get(url)
    # The response is JSONP; drop the callback prefix before parsing
    jsondata = req.text[15:]
    try:
        data = json.loads(jsondata)
    except ValueError:
        continue
    # Print the page number
    print('page:', data['paginator']['page'])
    # Walk the list of reviews and download every attached image
    for i in data['rateList']:
        for url_image in i['pics']:
            # Zero-pad the file name to five digits: 00001.jpg, 00002.jpg, ...
            name = str(count + 1).zfill(5) + '.jpg'
            conn = urllib.request.urlopen('http:' + url_image)
            f = open(name, 'wb')
            f.write(conn.read())
            f.close()
            count += 1
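One fragile spot in the code above is `req.text[15:]`, which assumes the JSONP callback prefix is always exactly 15 characters. A more robust sketch extracts whatever sits between the outermost parentheses instead; the callback name `jsonp128` below is a made-up example, not the actual name Tmall returns.

```python
import json
import re

def parse_jsonp(text):
    """Extract the JSON payload from a JSONP response like 'jsonp128({...})'.

    Searching for the parenthesised object works regardless of the
    callback name's length, unlike slicing a fixed prefix."""
    match = re.search(r'\(\s*({.*})\s*\)', text, re.S)
    if match is None:
        raise ValueError('no JSON object found in response')
    return json.loads(match.group(1))

# Demo with a fabricated wrapper (callback name is hypothetical):
demo = 'jsonp128({"paginator": {"page": 1}, "rateList": []})'
data = parse_jsonp(demo)
print(data['paginator']['page'])  # → 1
```

Dropping `parse_jsonp` in place of the `req.text[15:]` slice also makes the bare `try/except` around `json.loads` less necessary.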
With the URL analysed as above, a Python program can extract itemId, sellerId, and spuId automatically. Then copying the product page's URL is all it takes to download the review images. The implementation:
import requests
import json
import urllib.request
import re

# Pull itemId, sellerId and shopId out of the product detail page's source
def geturl(url_detail):
    req = requests.get(url_detail)
    info = re.search('itemId:"[0-9]*",sellerId:"[0-9]*",shopId:"[0-9]*"', req.text)
    info = info.group(0)
    info = info.split(',')
    itemId = info[0].split(':')[1][1:-1]
    sellerId = info[1].split(':')[1][1:-1]
    shopId = info[2].split(':')[1][1:-1]
    return itemId, sellerId, shopId

# Download the review images
def getImage(url_detail):
    url_s = 'https://rate.tmall.com/list_detail_rate.htm?'
    itemId, sellerId, shopId = geturl(url_detail)
    url_itemId = 'itemId=' + itemId + '&'
    # The page source's shopId is what the rate API calls spuId
    url_spuId = 'spuId=' + shopId + '&'
    url_sellerId = 'sellerId=' + sellerId + '&'
    url_order = 'order=3&'
    url_append = 'append=0&'
    count = 0
    for pages in range(0, 99):
        url_currentPage = 'currentPage=' + str(pages + 1) + '&'
        url = url_s + url_itemId + url_spuId + url_sellerId + url_order + url_currentPage + url_append + 'content=1'
        req = requests.get(url)
        # Strip the JSONP callback prefix before parsing
        jsondata = req.text[15:]
        try:
            data = json.loads(jsondata)
        except ValueError:
            continue
        print('page:', data['paginator']['page'])
        for i in data['rateList']:
            for url_image in i['pics']:
                # Zero-pad the file name to five digits
                name = str(count + 1).zfill(5) + '.jpg'
                conn = urllib.request.urlopen('http:' + url_image)
                f = open(name, 'wb')
                f.write(conn.read())
                f.close()
                count += 1

if __name__ == "__main__":
    url_detail = 'https://detail.tmall.com/item.htm?spm=a1z10.5-b-s.w4011-14601288098.32.218191f88EYy6o&id=45492997665&rn=c00b3253858596ec80a7c4e9431e2848&abbucket=9'
    getImage(url_detail)
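Both loops above blindly request 99 pages and rely on the JSON parse failing to skip past the end. If the response's paginator object also reports the total page count, the loop can stop early; the `lastPage` key below is a guess based on the `page` field seen above, so verify it against a real response before relying on it.

```python
def pages_to_fetch(paginator, max_pages=99):
    """Decide how many pages to request from the rate API.

    Assumes the paginator dict carries a 'lastPage' total -- this key is
    an assumption, not confirmed by the responses shown above. Falls back
    to max_pages when the key is absent."""
    last = paginator.get('lastPage', max_pages)
    return min(last, max_pages)

# With a paginator that reports 7 pages, only 7 requests are needed:
print(pages_to_fetch({'page': 1, 'lastPage': 7}))  # → 7
```

Inside `getImage`, the page loop would then become `for pages in range(pages_to_fetch(first_paginator)):` after the first response has been parsed.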