Scrapy warning: "URLWarning: allowed_domains accepts only domains, not URLs."

Symptom

The source code is as follows:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from ..items import PositionItem  # import path assumed from the usual Scrapy project layout


class HrSpider4Spider(CrawlSpider):
    """CrawlSpider subclass"""
    name = 'hr_spider4'
    allowed_domains = ['https://hr.tencent.com']  # note: this is a full URL, not a bare domain
    start_urls = ["https://hr.tencent.com/position.php?&start=0"]

    rules = (
        # No callback here, so follow defaults to True and crawling continues through list pages
        Rule(LinkExtractor(allow=r'position\.php\?&start=\d+')),
        Rule(LinkExtractor(allow=r'position_detail\.php\?id=\d+'), callback="parse_item", follow=False),
    )

    def parse_item(self, response):
        item = PositionItem()
        item['position_duty'] = response.xpath("//ul[@class='squareli']")[0].xpath(".//li/text()").extract()
        # The original assigned 'position_duty' twice, overwriting the first value;
        # 'position_require' is an assumed name for the second field
        item['position_require'] = response.xpath("//ul[@class='squareli']")[1].xpath(".//li/text()").extract()
        yield item

Running this spider produces the warning:
URLWarning: allowed_domains accepts only domains, not URLs.
The cause is clear: allowed_domains expects domain names, not full URL addresses.
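To see why a full URL can never match, here is a small sketch using Scrapy's url_is_from_any_domain helper, whose matching logic resembles the offsite filtering applied to allowed_domains: the request's host is compared against each entry, so an entry containing a scheme never matches anything.

from scrapy.utils.url import url_is_from_any_domain

url = "https://hr.tencent.com/position.php?&start=0"

# An entry that is a full URL never matches the request's host...
print(url_is_from_any_domain(url, ["https://hr.tencent.com"]))  # False

# ...while a bare domain does.
print(url_is_from_any_domain(url, ["hr.tencent.com"]))  # True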

Solution

Change the allowed_domains line in the spider to

allowed_domains = ['hr.tencent.com'] 

That is, keep only the domain name and drop the scheme (and any path).
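As a side note, allowed_domains entries also match subdomains, so the parent domain would work here as well; a quick check with the same helper, using the spider's start URL:

from scrapy.utils.url import url_is_from_any_domain

# 'tencent.com' also covers the 'hr.' subdomain
print(url_is_from_any_domain("https://hr.tencent.com/position.php?&start=0", ["tencent.com"]))  # True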
