Crawling JD.com Product Listings with Scrapy and Storing Them in MySQL

Background

This follows up on the previous post about the slow speed of the Bazhuayu (八爪鱼) data-collection tool: its free custom mode averages about 10 records per minute, while Scrapy gets close to 100 records per minute.

Problems

  1. I found plenty of code online, but having never touched the Scrapy framework, I pasted it straight into IDLE — and nothing ran.
    [Solution]: It turns out the code posted online only defines the spider class; there is no entry point at all. With Scrapy you have to create a project from the command line under the Python Scripts directory (any directory where Scrapy is available works), then add your crawler file to the project's spiders folder — that added file is the spider you write yourself. See the command-line sketch after this list.
  2. The JingdongItem module could not be imported.
    [Solution]: Drop the import entirely and write the item class yourself. Defining it directly inside the crawler file also avoids failures to import the auto-generated items file.
  3. How to find a URL that can be crawled.
    [Solution]:
    (1) JD.com phone listings URL: https://list.jd.com/list.html?cat=9987,653,655&page=1
    (2) In Chrome, find the cat value for each product category on its JD page, then substitute it into the URL above (a small URL-building snippet follows the command-line sketch below).
    (3) Taking computers as an example: search JD for "电脑", scroll down to the pagination controls, press F12, and locate the element behind the "next page" button — the longest cat value (three comma-separated numbers) is generally the one you want.
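The post only describes the project setup in prose, so here is a minimal sketch of the command-line steps; the project name jingdong and the spider name jd are assumptions chosen to match the code below:

scrapy startproject jingdong   # creates the project skeleton
cd jingdong
scrapy genspider jd jd.com     # creates jingdong/spiders/jd.py; replace its body with the spider below
scrapy crawl jd                # actually runs the spider

Running scrapy crawl from inside the project directory is what executes the spider class; pasting the class into IDLE does nothing because nothing ever instantiates it.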
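Step (2) then amounts to substituting the cat value into the list URL. A tiny illustrative Python snippet (the page range here is arbitrary):

# Build list-page URLs for a given JD category.
cat = "9987,653,655"  # phones; swap in the cat value found via F12
urls = ["https://list.jd.com/list.html?cat=" + cat + "&page=" + str(page)
        for page in range(1, 6)]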

Code

from __future__ import absolute_import
import scrapy
from scrapy.http import Request
#from jingdong.items import JingdongItem,IdItem  # not needed: the item class is defined inline below (see problem 2)
import re
import urllib.error
import urllib.request
import pymysql

class JingdongItem(scrapy.Item):
    # Define only the fields the spider actually assigns; scrapy.Item raises
    # KeyError for undefined fields, so the original book-crawler leftovers
    # (book_name, publisher_name, ...) are dropped and title/pricesku added.
    title = scrapy.Field()     # product titles from the list page
    pricesku = scrapy.Field()  # SKU ids, used to query the price and comment APIs

class JdSpider(scrapy.Spider):
    name = 'jd'
    allowed_domains = ['jd.com']
    #start_urls = ['http://jd.com/']
    header = {"User-Agent":"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"}
    def start_requests(self):
        # Kick off from page 1 of the phone listings; meta enables Scrapy's cookie jar.
        return [Request("https://list.jd.com/list.html?cat=9987,653,655&page=1",
                        callback=self.parse, headers=self.header, meta={"cookiejar": 1})]

    def use_proxy(self, proxy_addr, url):
        # Fetch url through an HTTP proxy with plain urllib (used for the price
        # and comment APIs) and return the response body decoded as UTF-8.
        try:
            req = urllib.request.Request(url)
            req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36")
            proxy = urllib.request.ProxyHandler({"http": proxy_addr})
            opener = urllib.request.build_opener(proxy, urllib.request.HTTPHandler)
            urllib.request.install_opener(opener)
            data = urllib.request.urlopen(req).read().decode("utf-8", "ignore")
            return data
        except urllib.error.URLError as e:
            if hasattr(e, "code"):
                print(e.code)
            if hasattr(e, "reason"):
                print(e.reason)
        except Exception as e:
            print(str(e))


    def parse(self, response):
        item = JingdongItem()
        proxy_addr = "61.135.217.7:80"  # sample public proxy; may no longer be alive
        try:
            # Titles and SKU ids for every product on the current list page.
            item["title"] = response.xpath("//div[@class='p-name']/a[@target='_blank']/em/text()").extract()
            item["pricesku"] = response.xpath("//li[@class='gl-item']/div/@data-sku").extract()

            # Schedule the remaining list pages (Scrapy's dupefilter drops repeats,
            # so re-yielding these on every page is wasteful but harmless).
            for j in range(2, 166):
                url = "https://list.jd.com/list.html?cat=9987,653,655&page=" + str(j)
                yield Request(url, callback=self.parse, headers=self.header)

            # The price API response contains "p":"<price>"; the comment API
            # response contains "CommentCountStr":"<count>".
            pricepat = '"p":"(.*?)"'
            personpat = '"CommentCountStr":"(.*?)",'
            conn = pymysql.connect(host="127.0.0.1", user="root", passwd="your-password",
                                   db="your-database", charset="utf8")
            cursor = conn.cursor()
            for i in range(0, len(item["pricesku"])):
                # Look up price and comment count for each SKU id.
                priceurl = "https://p.3.cn/prices/mgets?&ext=11000000&pin=&type=1&area=1_72_4137_0&skuIds=" + item["pricesku"][i]
                personurl = "https://club.jd.com/comment/productCommentSummaries.action?referenceIds=" + item["pricesku"][i]
                pricedata = self.use_proxy(proxy_addr, priceurl)
                price = re.compile(pricepat).findall(pricedata)
                persondata = self.use_proxy(proxy_addr, personurl)
                person = re.compile(personpat).findall(persondata)

                title = item["title"][i]
                print(title)
                price1 = float(price[0])
                person1 = person[0]
                sql = "insert into phone(title,price,person) values(%s,%s,%s);"
                cursor.execute(sql, (title, price1, person1))
                conn.commit()

            conn.close()
            # The original ended with "return item", which inside a generator
            # silently discards the item; yield it so Scrapy actually collects it.
            yield item
        except Exception as e:
            print(str(e))
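The insert statement above assumes a phone table already exists in the target database. The schema never appears in the post, so the following one-time setup is only a sketch with assumed column names matching the insert and assumed types:

import pymysql

# Hypothetical one-time setup: create the table the spider writes to.
conn = pymysql.connect(host="127.0.0.1", user="root",
                       passwd="your-password", db="your-database", charset="utf8")
cursor = conn.cursor()
cursor.execute("""
    CREATE TABLE IF NOT EXISTS phone (
        id INT AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(255),   -- product title
        price DECIMAL(10,2),  -- price from the p.3.cn API
        person VARCHAR(50)    -- comment count string, e.g. "10万+"
    ) DEFAULT CHARSET=utf8
""")
conn.commit()
conn.close()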

