"Cannot operate on a closed database" error when running the code on p. 161 of Practical Web Scraping for Data Science

Problem description

I have recently been reading a book on web scraping1 and typed the code on page 161 into my editor exactly as printed to build a small crawler spider, but when I ran it I got the following error:

Error closing cursor
Traceback (most recent call last):
  File "E:\StudyCard\BigData\WebScrape\PWSfDScode.pwsenv\lib\site-packages\sqlalchemy\engine\result.py", line 1324, in fetchone
    row = self._fetchone_impl()
  File "E:\StudyCard\BigData\WebScrape\PWSfDScode.pwsenv\lib\site-packages\sqlalchemy\engine\result.py", line 1204, in _fetchone_impl
    return self.cursor.fetchone()
sqlite3.ProgrammingError: Cannot operate on a closed database.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\StudyCard\BigData\WebScrape\PWSfDScode.pwsenv\lib\site-packages\sqlalchemy\engine\base.py", line 1339, in _safe_close_cursor
    cursor.close()
sqlite3.ProgrammingError: Cannot operate on a closed database.

Traceback (most recent call last):
  File "E:\StudyCard\BigData\WebScrape\PWSfDScode.pwsenv\lib\site-packages\sqlalchemy\engine\result.py", line 1324, in fetchone
    row = self._fetchone_impl()
  File "E:\StudyCard\BigData\WebScrape\PWSfDScode.pwsenv\lib\site-packages\sqlalchemy\engine\result.py", line 1204, in _fetchone_impl
    return self.cursor.fetchone()
sqlite3.ProgrammingError: Cannot operate on a closed database.

I remember running into this same error the last time I read this book. It appears to be caused by how the records library manages connections: Database.query() opens a connection for the query, returns a lazily evaluated RecordCollection, and closes that connection as soon as query() returns, so by the time the rows are actually fetched the underlying SQLite connection is already closed.
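To make the failure mode concrete, here is a minimal sketch that triggers the same error. The table and column names are made up for illustration, and the behaviour described in the comments is my reading of the records library, not something the book states:

import records

db = records.Database('sqlite:///crawler_database.db')
db.query('CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, html TEXT)')

# Database.query() opens a connection, builds a lazily evaluated
# RecordCollection, and closes the connection again before returning.
rows = db.query('SELECT * FROM pages')

# The rows are only fetched here, after the connection has been closed:
# sqlite3.ProgrammingError: Cannot operate on a closed database.
for row in rows:
    print(row.url)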

Solution

Add one line right after the code that creates the database object:

import requests
import records
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from sqlalchemy.exc import IntegrityError

db = records.Database('sqlite:///crawler_database.db')
db = db.get_connection() # newly added: keep a single connection open

With this change, the code runs normally; a short usage sketch follows below.
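For reference, here is a minimal usage sketch of the fixed setup. The pages table, its columns, and the URL are hypothetical examples, not the book's exact crawler code:

import records
from sqlalchemy.exc import IntegrityError

db = records.Database('sqlite:///crawler_database.db')
db = db.get_connection()  # hold one connection open for the whole run

db.query('CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, html TEXT)')

url = 'http://example.com/page'  # hypothetical URL
try:
    db.query('INSERT INTO pages (url, html) VALUES (:url, :html)',
             url=url, html='<html>...</html>')
except IntegrityError:
    pass  # this URL has already been stored

# Lazy query results can now be read safely because the connection stays open.
row = db.query('SELECT * FROM pages WHERE url = :url', url=url).first()
print(row.html if row else 'not found')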


  1. Seppe vanden Broucke, Bart Baesens. Practical Web Scraping for Data Science. Apress, 2018.
