My Scrapy spider isn't scraping any data, please help
After running, the terminal shows:
D:\BaiduNetdiskDownload\jobui>C:/Users/admin/AppData/Local/Programs/Python/Python36-32/python.exe d:/BaiduNetdiskDownload/jobui/main.py
2020-02-07 22:29:33 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: jobui)
2020-02-07 22:29:33 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 17:26:49) [MSC v.1900 32 bit (Intel)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d 10 Sep 2019), cryptography 2.8, Platform Windows-7-6.1.7601-SP1
2020-02-07 22:29:33 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'jobui', 'DOWNLOAD_DELAY': 3, 'NEWSPIDER_MODULE': 'jobui.spiders', 'SPIDER_MODULES': ['jobui.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'}
2020-02-07 22:29:33 [scrapy.extensions.telnet] INFO: Telnet Password: 1ab83f8133d075be
2020-02-07 22:29:33 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2020-02-07 22:29:34 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-02-07 22:29:34 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-02-07 22:29:35 [scrapy.middleware] INFO: Enabled item pipelines:
['jobui.pipelines.JobuiPipeline']
2020-02-07 22:29:35 [scrapy.core.engine] INFO: Spider opened
2020-02-07 22:29:35 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-02-07 22:29:35 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-02-07 22:29:35 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.jobui.com/rank/company/> (referer: None)
2020-02-07 22:29:35 [scrapy.core.engine] INFO: Closing spider (finished)
2020-02-07 22:29:36 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 300,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 6414,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'elapsed_time_seconds': 0.740043,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 2, 7, 14, 29, 36, 65906),
'log_count/DEBUG': 1,
'log_count/INFO': 10,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2020, 2, 7, 14, 29, 35, 325863)}
2020-02-07 22:29:36 [scrapy.core.engine] INFO: Spider closed (finished)
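Judging from the stats above, exactly one request (the start URL) was crawled with a 200 response, no follow-up requests were scheduled, and zero items were scraped. That usually means `parse()` found no `ul` elements with class `textlist flsty cfix` in the HTML the site actually returned. A quick way to check this, running the same parsing code interactively in Scrapy's shell:

```python
# in a terminal: scrapy shell "https://www.jobui.com/rank/company/"
>>> import bs4
>>> soup = bs4.BeautifulSoup(response.text, 'html.parser')
>>> len(soup.find_all('ul', class_="textlist flsty cfix"))  # 0 means the selector matches nothing
>>> response.text[:500]  # inspect the served HTML, e.g. for an anti-crawler page
```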
My main spider code:

import scrapy
import bs4
from ..items import JobuiItem


class JobuiSpider(scrapy.Spider):
    name = 'jobs'
    # entries must be bare domain names, without the 'https://' scheme
    allowed_domains = ['jobui.com']
    start_urls = ['https://www.jobui.com/rank/company/']

    def parse(self, response):
        bs = bs4.BeautifulSoup(response.text, 'html.parser')
        ul_list = bs.find_all('ul', class_="textlist flsty cfix")
        for ul in ul_list:
            a_list = ul.find_all('a')
            for a in a_list:
                # href looks like '/company/<id>/', so this builds
                # 'https://www.jobui.com/company/<id>/jobs'
                company_id = a['href']
                url = 'https://www.jobui.com{id}jobs'
                real_url = url.format(id=company_id)
                yield scrapy.Request(real_url, callback=self.parse_job)

    def parse_job(self, response):
        bs = bs4.BeautifulSoup(response.text, 'html.parser')
        company = bs.find(id="companyH1").text
        datas = bs.find_all('div', class_="c-job-list")
        for data in datas:
            item = JobuiItem()
            item['company'] = company
            item['position'] = data.find('a').find('h3').text
            spantexts = data.find_all('span')
            item['address'] = spantexts[0].text
            item['detail'] = spantexts[1].text
            yield item
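As an aside, Scrapy's built-in selectors can do the link extraction without BeautifulSoup. A minimal sketch of an equivalent `parse()`, assuming the same class names and URL layout as above:

```python
def parse(self, response):
    # every company link inside the ranking lists
    for href in response.css('ul.textlist.flsty.cfix a::attr(href)').getall():
        yield scrapy.Request(
            'https://www.jobui.com{id}jobs'.format(id=href),
            callback=self.parse_job,
        )
```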
settings.py:

BOT_NAME = 'jobui'

SPIDER_MODULES = ['jobui.spiders']
NEWSPIDER_MODULE = 'jobui.spiders'

# USER_AGENT must be a single string; assigning several comma-separated
# strings creates a tuple, which is not a valid User-Agent header value
USER_AGENT = 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)'

ROBOTSTXT_OBEY = False
#CONCURRENT_REQUESTS = 32
DOWNLOAD_DELAY = 3
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
#COOKIES_ENABLED = False
#TELNETCONSOLE_ENABLED = False
#DEFAULT_REQUEST_HEADERS = {
#}
#SPIDER_MIDDLEWARES = {
#}
#DOWNLOADER_MIDDLEWARES = {
#}
#EXTENSIONS = {
#}
ITEM_PIPELINES = {
    'jobui.pipelines.JobuiPipeline': 300,
}
#AUTOTHROTTLE_ENABLED = True
#AUTOTHROTTLE_START_DELAY = 5
#AUTOTHROTTLE_MAX_DELAY = 60
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
#AUTOTHROTTLE_DEBUG = False
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
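The remaining user-agent strings from the original settings suggest the goal was rotation. Scrapy has no built-in setting that accepts a list of user agents; one common approach is a small custom downloader middleware. A minimal sketch, where `RandomUserAgentMiddleware` and `USER_AGENT_LIST` are names invented here, not Scrapy built-ins:

```python
# jobui/middlewares.py (hypothetical addition)
import random


class RandomUserAgentMiddleware:
    """Set a random User-Agent from USER_AGENT_LIST on every request."""

    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        # USER_AGENT_LIST is a custom setting defined below, not built-in
        return cls(crawler.settings.getlist('USER_AGENT_LIST'))

    def process_request(self, request, spider):
        if self.user_agents:
            request.headers['User-Agent'] = random.choice(self.user_agents)
```

Then register it in settings.py, moving the extra strings into the custom list:

```python
# settings.py (hypothetical additions)
USER_AGENT_LIST = [
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
]
DOWNLOADER_MIDDLEWARES = {
    'jobui.middlewares.RandomUserAgentMiddleware': 543,
}
```

With this registered, the single USER_AGENT above acts as a fallback for any request the middleware leaves untouched.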