Learning Python scraping with Scrapy: crawling Lianjia rental listings and saving them to a file

Crawl the first page of Lianjia's rental listings and extract three fields per listing: title, price, and URL.

items.py

import scrapy


class ScrapytestItem(scrapy.Item):
    # fields extracted for each rental listing
    title = scrapy.Field()
    price = scrapy.Field()
    url = scrapy.Field()
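A Scrapy Item behaves like a dict, and the pipeline below will serialize it with `json.dumps(dict(item))`. A dependency-free sketch of that round trip, with a plain dict standing in for `ScrapytestItem` and made-up sample values:

```python
import json

# plain dict standing in for ScrapytestItem (scrapy not needed for this sketch);
# the values are hypothetical sample data
item = {'title': '整租·某小区 2室1厅', 'price': '5000',
        'url': 'https://bj.lianjia.com/zufang/x.html'}

# ensure_ascii=False keeps the Chinese characters readable in the output file
line = json.dumps(item, ensure_ascii=False)
print(line)
```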

pipelines.py

import json

class ScrapytestPipeline(object):
    # open the output file once, when the spider starts
    def open_spider(self, spider):
        self.file = open('58_chuzu.txt', 'w', encoding='utf-8')
        print('File opened')
    
    # write each item as one JSON line
    def process_item(self, item, spider):
        line = '{}\n'.format(json.dumps(dict(item), ensure_ascii=False))
        self.file.write(line)
        return item
    
    # close the file when the spider finishes
    def close_spider(self, spider):
        self.file.close()
        print('File closed')
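The pipeline only runs if it is registered in the project's `settings.py`. A minimal sketch, assuming the project package is named `scrapytest` (inferred from the `ScrapytestItem`/`ScrapytestPipeline` class names):

```python
# settings.py (sketch) -- register the pipeline so Scrapy actually calls it;
# the number (0-1000) sets the order when several pipelines are enabled
ITEM_PIPELINES = {
    'scrapytest.pipelines.ScrapytestPipeline': 300,
}
```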

spiders/spider_city_58.py

import scrapy
from ..items import ScrapytestItem


class SpiderCity58Spider(scrapy.Spider):
    name = 'spider_city_58'  # the spider's unique name, required by Scrapy
    allowed_domains = ['lianjia.com']
    start_urls = ['https://bj.lianjia.com/zufang/']

    def parse(self, response):
        info_list = response.xpath('//*[@id="content"]/div[1]/div[1]/div')
        for i in info_list:
            item = ScrapytestItem()
            item['title'] = i.xpath('normalize-space(./div/p[1]/a/text())').extract_first()
            item['price'] = i.xpath('./div/span/em/text()').extract_first()
            url = i.xpath('./div/p[1]/a/@href').extract_first()
            item['url'] = response.urljoin(url)
            yield item

A few problems I ran into while writing this:

I originally planned to scrape 58.com, but I couldn't get around its anti-scraping measures yet, so I switched to Lianjia (the output filename 58_chuzu.txt is a leftover from that plan);

Lianjia's listing hrefs are relative addresses, so response.urljoin is used to join them with the page URL into absolute ones;
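`response.urljoin` delegates to the standard library's `urllib.parse.urljoin`; a quick check of what the join produces (the sample href is hypothetical):

```python
from urllib.parse import urljoin

base = 'https://bj.lianjia.com/zufang/'
# a hypothetical relative href as it might appear in a listing's <a> tag
href = '/zufang/BJ0000000001.html'
print(urljoin(base, href))  # -> https://bj.lianjia.com/zufang/BJ0000000001.html
```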

normalize-space() is used to strip the leading and trailing whitespace from the titles.
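XPath's normalize-space() strips whitespace at both ends and collapses internal runs to a single space; the same effect in plain Python is `' '.join(s.split())`, shown here as a small sanity check:

```python
def normalize_space(text):
    # mimic XPath normalize-space(): strip the ends, collapse internal whitespace
    return ' '.join(text.split())

print(normalize_space('  整租·某小区 2室1厅  '))  # -> '整租·某小区 2室1厅'
```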
