Scrapy: passing arguments to a spider dynamically

scrapy crawl baidu -a taskname="台北" -a bound='{"left": 116.29203277476964, "right": 116.318, "top": 39.77001007727141, "bottom": 39.74890812939301}' -a seed="136.2,36.44"
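When a spider is started with scrapy crawl, every -a name=value option is forwarded to the spider's __init__ as a keyword argument. Command-line values always arrive as strings, so structured data such as the bound rectangle has to be passed as a JSON string (and the seed as a comma-separated string) and parsed inside the spider, as the example below does.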

import json

import scrapy


class QiubaiSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['www.baidu.com']
    start_urls = ['https://www.baidu.com/']

    # Class-level defaults; overridden by the -a arguments at crawl time.
    task_name = "ooo"
    bound = {}
    seed = []

    def __init__(self, taskname=None, bound=None, seed=None, *args, **kwargs):
        super(QiubaiSpider, self).__init__(*args, **kwargs)
        # self.start_urls = ['http://www.example.com/categories/%s' % category]
        self.task_name = taskname
        # bound arrives as a JSON string, seed as a comma-separated string,
        # so both are converted into Python objects here.
        self.bound = json.loads(bound)
        self.seed = [float(i) for i in seed.split(',')]

    def parse(self, response):
        pass
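The same arguments can also be supplied from Python code instead of the command line: CrawlerProcess.crawl() forwards its keyword arguments to the spider's __init__ just like -a does. Below is a minimal sketch; the import path myproject.spiders.baidu is an assumption standing in for wherever QiubaiSpider actually lives in your project.

import json

from scrapy.crawler import CrawlerProcess

from myproject.spiders.baidu import QiubaiSpider  # hypothetical module path

bound = {
    "left": 116.29203277476964,
    "right": 116.318,
    "top": 39.77001007727141,
    "bottom": 39.74890812939301,
}

process = CrawlerProcess()
# Keyword arguments here reach QiubaiSpider.__init__ exactly like -a options do,
# so bound is serialized to a JSON string and seed to a comma-separated string.
process.crawl(
    QiubaiSpider,
    taskname="台北",
    bound=json.dumps(bound),
    seed="136.2,36.44",
)
process.start()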
