A huge Scrapy pitfall: comments

2020-04-24 04:56:57 [scrapy.core.scraper] ERROR: Spider error processing  (referer: http://jibing.wenyw.com/pinyin-a.shtml)
Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\scrapy\core\downloader\middleware.py", line 42, in process_request
    defer.returnValue((yield download_func(request=request, spider=spider)))
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\twisted\internet\defer.py", line 1362, in returnValue
    raise _DefGen_Return(val)
twisted.internet.defer._DefGen_Return: <200 http://jibing.wenyw.com/aixiaozheng/>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\scrapy\utils\defer.py", line 55, in mustbe_deferred
    result = f(*args, **kw)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\scrapy\core\spidermw.py", line 60, in process_spider_input
    return scrape_func(response, request, spider)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\scrapy\core\scraper.py", line 148, in call_spider
    warn_on_generator_with_return_value(spider, callback)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\scrapy\utils\misc.py", line 202, in warn_on_generator_with_return_value
    if is_generator_with_return_value(callable):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\scrapy\utils\misc.py", line 187, in is_generator_with_return_value
    tree = ast.parse(dedent(inspect.getsource(callable)))
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\ast.py", line 35, in parse
    return compile(source, filename, mode, PyCF_ONLY_AST)
  File "", line 1
    def parser_disease(self, response):
    ^

The error is shown above; the code is below.

# -----------------------------------------------------------------------------------------
        details = response.xpath('//ul[@class="submenu01c"]/li/a/@href').extract()
        SITE_URL = 'http://jibing.wenyw.com'
        details_url = []
        for detail in details:
            details_url.append(SITE_URL + detail)
        print(details_url)
        response = urllib.request.urlopen(details_url[0])  # send the request and get the response object
        html = response.read()  # call read() to get the response body
        root = etree.HTML(html)
        cont = (''.join(root.xpath('//div[@class="detailc"]//p/text()')))
        print(cont)
        names = ['description', 'description2']
        item[names[0]] = ''.join(cont)
        print(item)
        time.sleep(1)

The result: after deleting the `# -------...` comment line, it just worked????? Damn, that cost me four hours.

Why it works: the traceback shows Scrapy's `warn_on_generator_with_return_value` calling `ast.parse(dedent(inspect.getsource(callable)))` on the callback. `textwrap.dedent` only strips whitespace that is common to every line, and that separator comment sits at column 0 with no leading whitespace at all, so the `def parser_disease(self, response):` line keeps its class-level indentation and `ast.parse` fails on line 1, exactly as the log shows.
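If you don't want to delete the comment, indenting it to the method-body level also avoids the problem (newer Scrapy releases reportedly guard this check more defensively, so upgrading is another option; check the changelog). A minimal sketch showing that an indented separator comment survives the same `dedent` + `ast.parse` round trip:

```python
import ast
import textwrap

# Same method source, but the separator comment is indented to the
# body level, so every line shares the common leading whitespace.
source = (
    "    def parse(self, response):\n"
    "        # ---------------------------------------------------------\n"
    "        return response\n"
)

dedented = textwrap.dedent(source)
assert dedented.startswith("def parse")  # margin stripped as intended

ast.parse(dedented)  # parses cleanly, so Scrapy's check passes
print("ok")
```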
