Hands-on Project: Scraping Lianjia Housing Data with the Scrapy Framework

Just a small project I put together in my spare time. It crawls only the first 100 pages of listings; the scraped data looks like the figure below:

[Figure 1: a sample of the scraped data]

Only the spider file is shown here:

# -*- coding: utf-8 -*-
import scrapy

class LianjiaSpider(scrapy.Spider):
    name = 'lianjia'
    allowed_domains = ['lianjia.com']
    # initial URL: page 1 of the Beijing second-hand housing listings
    start_urls = ['https://bj.lianjia.com/ershoufang/pg1']

    def parse(self, response):
        # match every listing <li> on the page
        li_cards = response.xpath('//li[@class="clear LOGCLICKDATA"]')
        # iterate over each <li> and extract the fields we want
        for i in li_cards:
            item = {}
            item['title'] = i.xpath('./div/div/a/text()').extract_first()
            item['houseInfo'] = i.xpath('./div/div[2]/div/a/text()').extract_first()
            item['room'] = i.xpath('./div/div[2]/div/text()[1]').extract_first()
            # strip the trailing "平米" (square meters) unit; default to ''
            # so a missing node does not raise AttributeError
            item['m2'] = i.xpath('./div/div[2]/div/text()[2]').extract_first(default='').rstrip('平米')
            item['decorate'] = i.xpath('./div/div[2]/div/text()[4]').extract_first()
            item['elevator'] = i.xpath('./div/div[2]/div/text()[5]').extract_first()
            item['storey'] = i.xpath('./div/div[3]/div/text()[1]').extract_first()
            item['price'] = i.xpath('./div/div[4]/div[2]/div[1]/span/text()').extract_first()
            yield item

        # queue pages 2-100; Scrapy's built-in duplicate filter drops the
        # repeated requests this loop yields again on every later page
        next_page_url = 'https://bj.lianjia.com/ershoufang/pg{}'
        for i in range(2, 101):
            yield scrapy.Request(next_page_url.format(i), callback=self.parse)
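
Lianjia does throttle aggressive crawlers, so a few entries in settings.py are worth checking before running the spider. The original post doesn't show its settings, so the values below are only an illustrative sketch (the user agent string and the delay are my own assumptions):

# settings.py -- illustrative anti-blocking settings; values are assumptions,
# not taken from the original project
ROBOTSTXT_OBEY = False   # don't let robots.txt block the listing pages
DOWNLOAD_DELAY = 1       # wait one second between requests to stay polite
USER_AGENT = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
              'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36')

With that in place, the crawl starts with the usual command, e.g. scrapy crawl lianjia -o lianjia.csv if you also want a CSV copy alongside the database.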

# Table-creation statement (MySQL)
# create table lianjia(
# id int primary key auto_increment,
# title varchar(40) ,
# houseInfo varchar(40),
# room varchar(20),
# m2 float,
# decorate varchar(20),
# elevator varchar(20),
# storey varchar(15),
# price int
# )default charset=utf8mb4;
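
The post only shows the spider, so for completeness here is a minimal sketch of a pipeline that writes each item into the table above. It assumes pymysql is installed; the connection parameters (host, user, password, database name) and the class name are placeholders, not the original project's code:

# pipelines.py -- minimal MySQL storage sketch; pymysql assumed, and the
# connection parameters below are placeholders
import pymysql

class LianjiaPipeline:
    def open_spider(self, spider):
        # open one connection per crawl; adjust credentials for your setup
        self.conn = pymysql.connect(host='localhost', user='root',
                                    password='root', db='test',
                                    charset='utf8mb4')
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        # parameterized insert into the lianjia table created above
        sql = ('insert into lianjia(title, houseInfo, room, m2, decorate, '
               'elevator, storey, price) '
               'values (%s, %s, %s, %s, %s, %s, %s, %s)')
        self.cursor.execute(sql, (item['title'], item['houseInfo'],
                                  item['room'], item['m2'], item['decorate'],
                                  item['elevator'], item['storey'],
                                  item['price']))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()

Enable it through ITEM_PIPELINES in settings.py, e.g. {'yourproject.pipelines.LianjiaPipeline': 300}, where 'yourproject' is whatever your Scrapy project is called.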


This code is for learning and exchange purposes only; please do not use it commercially.

