Learning Python generators

I studied generators through Liao Xuefeng's Python tutorial, using the following code:

def odd():
    print('step 1')
    yield 1
    print('step 2')
    yield 3
    print('step 3')
    yield 5

if __name__ == "__main__":
    o = odd()
    for index in o:
        print(index)

The output is as follows:

step 1
1
step 2
3
step 3
5

A generator function's body runs only when next() is called on it: execution proceeds until it hits a yield statement, which hands the yielded value back to the caller, and the next call to next() resumes right after that yield. A for loop is simply repeated calls to next(); once you understand this, the code above is easy to follow.
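
To make the execution order concrete, here is the same odd() generator driven by hand with next() calls (assuming odd() is defined as above); this is exactly what the for loop does for you:

o = odd()        # creating the generator runs none of odd()'s code yet
print(next(o))   # runs to the first yield: prints 'step 1', then 1
print(next(o))   # resumes after the first yield: prints 'step 2', then 3
print(next(o))   # resumes again: prints 'step 3', then 5
# one more next(o) would raise StopIteration, which is the signal
# a for loop uses to stop iterating

You will often run into yield in the same role in Scrapy spiders, for example: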

def start_requests(self):
    self.log('------' + __name__ + ' start requests ------')
    if self.task_running is False:
        return
    # pull the list of app ids to crawl from the message queue
    apps = appinfo_mq.query_star_ids(self.market, self.country, self.start_id,
                                     self.start_index, self.keyword_count - self.start_index)
    header = CommentsSpider.headers
    # apps = ['548984223']  # file manager (debugging)
    if apps is not None:
        log_file = open(self.log_path, 'a')
        for app in apps:
            app = app.replace('id', '')
            log_file.write(str(app) + '---')
            self.page_index[str(app)] = 1
            self.is_first[str(app)] = True
            new_url = CommentsSpider.url.format(app, 1)
            # yield one Request per app id; the caller drives this generator
            yield Request(new_url, headers=header, meta={'app_id': app})
        log_file.close()
    else:
        # sentinel value so the caller can tell there were no keywords
        yield None

It is called as follows:

for req in self.start_requests():
    if req is not None:
        self.crawler.engine.crawl(req, spider=self)
        self.no_keyword = False
    else:
        self.task_running = False
        self.no_keyword = True
        timer.check_keyword_recover(self.request_action)
        break

Our start_requests() builds a generator: the loop pulls one Request() out of it at a time, and each request is handed to our engine, self.crawler.engine, to be crawled. Request() is Scrapy's built-in wrapper for a network request. Inside the spider we put all of the requests into a generator, and later drive that generator to handle the requests flexibly.
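
As a self-contained sketch of this pattern (the spider name and URLs below are made up for illustration; only scrapy.Spider and scrapy.Request are real Scrapy APIs), a stripped-down spider whose start_requests() is a generator could look like this:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'

    def start_requests(self):
        # requests are yielded lazily; Scrapy's engine pulls them from
        # this generator one at a time, just like the loop above
        for app_id in ['100', '200', '300']:
            url = 'https://example.com/app/{}/comments'.format(app_id)
            yield scrapy.Request(url, meta={'app_id': app_id})

    def parse(self, response):
        self.log('got comments page for app ' + response.meta['app_id'])

Because start_requests() is a generator, no request is built until the engine actually asks for the next one, which keeps memory usage flat even for very long id lists.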
