===============================================================
Scrapy crawler framework
===============================================================
1.scrapy-project: itcast (the spider returns items with return instead of yield, i.e. the pipeline is not used)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  1.Create the project ---- scrapy startproject itcast
| itcast/
| ├── scrapy.cfg
| └── itcast
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── chuanzhi.py
|
|  2.Define the target items ---- vim items.py
| vim items.py
| import scrapy
|
|            class ItcastItem(scrapy.Item):    # item model class: declare the fields to be scraped
| name = scrapy.Field()
| level = scrapy.Field()
| info = scrapy.Field()
|
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider chuanzhi "itcast.cn"    # the spider name must not be the same as the project name; both the spider name and the allowed domain must be given
|    (2)Edit the spider--- vim chuanzhi.py
|        vim chuanzhi.py
|            import scrapy
|            from itcast.items import ItcastItem
|
| class ChuanzhiSpider(scrapy.Spider):
| name = "chuanzhi"
| allowed_domains = ["itcast.cn"]
| start_urls=["http://www.itcast.cn/",]
|
|            def parse(self, response):
|                items = []
|                for each in response.xpath("//div[@class='li_txt']"):
|                    item = ItcastItem()    # instantiate the ItcastItem class defined in items.py --- note the "from itcast.items import ItcastItem" at the top of the spider
|                    item['name'] = each.xpath("h3/text()").extract()[0]    # extract() returns a list of unicode strings
|                    item['level'] = each.xpath("h4/text()").extract()[0]
|                    item['info'] = each.xpath("p/text()").extract()[0]
|                    items.append(item)
|                return items    # return hands nothing to the pipeline; yielding each item inside the for loop would send it to the pipeline instead
|
|  4.Run the spider--- scrapy crawl chuanzhi    # mind the spider name; the -o option saves the scraped items to a file in the given format
|
|    The four simplest ways scrapy can save the output:
|    scrapy crawl chuanzhi -o teachers.json         # save as JSON (non-ASCII characters are escaped by default)
|    scrapy crawl chuanzhi -o teachers.jsonlines    # save as JSON lines, one item per line
|    scrapy crawl chuanzhi -o teachers.csv          # save as comma-separated CSV, can be opened with Excel
|    scrapy crawl chuanzhi -o teachers.xml          # save as XML
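|
|    Note: the exports above escape non-ASCII text by default. If the installed Scrapy is 1.2 or newer, the export encoding can be forced to UTF-8 in settings.py; a minimal sketch (the setting shown is the stock Scrapy option, nothing project-specific):
|        vim settings.py
|            FEED_EXPORT_ENCODING = 'utf-8'    # make -o exports write readable UTF-8 instead of \uXXXX escapes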
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
2.scrapy-project: itcast (the spider uses yield, i.e. the pipeline is enabled)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  1.Create the project ---- scrapy startproject itcast
| itcast/
| ├── scrapy.cfg
| └── itcast
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── chuanzhi.py
|
|  2.Define the target items --- vim items.py
| vim items.py
| import scrapy
|
|            class ItcastItem(scrapy.Item):
| name = scrapy.Field()
| level = scrapy.Field()
| info = scrapy.Field()
|
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider chuanzhi "itcast.cn"
|    (2)Edit the spider--- vim chuanzhi.py
| vim chuanzhi.py
| import scrapy
| from itcast.items import ItcastItem
|
| class ChuanzhiSpider(scrapy.Spider):
| name = "chuanzhi"
| allowed_domains = ["itcast.cn"]
| start_urls = ["http://www.itcast.cn/",]
|
|            def parse(self, response):
|                for each in response.xpath("//div[@class='li_txt']"):
|                    item = ItcastItem()
|                    item['name'] = each.xpath('h3/text()').extract()[0]
|                    item['level'] = each.xpath('h4/text()').extract()[0]
|                    item['info'] = each.xpath('p/text()').extract()[0]
|                    yield item    # yield hands each item to the pipeline as soon as the loop produces it
|
|  4.Write the item pipeline--- vim pipelines.py
| vim pipelines.py
| import json
|
|        class ItcastJsonPipeline(object):               # a pipeline class must be defined to handle the items the spider yields; it must implement process_item()
|            def __init__(self):                         # overriding __init__() is optional
|                self.filename = 'teachers.json'
|            def open_spider(self, spider):              # optional; takes the spider argument and is called when the spider is opened
|                self.f = open(self.filename, "wb")
|            def process_item(self, item, spider):       # required; receives every yielded item plus the spider, and should return the item
|                content = json.dumps(dict(item), ensure_ascii=False) + ",\n"
|                self.f.write(content.encode('utf-8'))
|                return item
|            def close_spider(self, spider):             # optional; takes the spider argument and is called when the spider is closed
|                self.f.close()
|
|  5.Enable the pipeline component--- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"itcast.pipelines.ItcastJsonPipeline":300}
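|
|        The value 300 is a priority between 0 and 1000: lower numbers run first, and several pipelines can be chained as long as each process_item() returns the item. A sketch with a second, purely hypothetical pipeline class:
|            ITEM_PIPELINES = {
|                "itcast.pipelines.ItcastJsonPipeline": 300,    # runs first (lower number = earlier)
|                "itcast.pipelines.ItcastCsvPipeline": 400,     # hypothetical second pipeline, would run afterwards
|            }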
|
|  6.Run the spider--- scrapy crawl chuanzhi    # writes a teachers.json file in the directory where the crawl is run
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
3.scrapy-project: tencent (Tencent recruitment site, scrapy.Spider version)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  1.Create the project ---- scrapy startproject tencent
| tencent/
| ├── scrapy.cfg
| └── tencent
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── tt.py
|
|  2.Define the target items --- vim items.py
| vim items.py
| import scrapy
|
| class TencentItem(scrapy.Item):
| name=scrapy.Field()
| detail_link = scrapy.Field()
| position_info = scrapy.Field()
| people_number = scrapy.Field()
| work_location = scrapy.Field()
| publish_time = scrapy.Field()
|
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider tt "tencent.com"
|    (2)Edit the spider--- vim tt.py
| vim tt.py
| import scrapy
| import re
| from tencent.items import TencentItem
|
| class TtSpider(scrapy.Spider):
| name = "tt"
| allowed_domains = ["tencent.com"]
| start_urls = ["http://hr.tencent.com/position.php?&start=0#a"]
|
|            def parse(self, response):
|                for each in response.xpath('//*[@class="even"]'):
|                    item = TencentItem()
|                    item['name'] = each.xpath('./td[1]/a/text()').extract()[0].encode('utf-8')
|                    item['detail_link'] = each.xpath('./td[1]/a/@href').extract()[0].encode('utf-8')
|                    item['position_info'] = each.xpath('./td[2]/a/text()').extract()[0].encode('utf-8')
|                    item['people_number'] = each.xpath('./td[3]/a/text()').extract()[0].encode('utf-8')
|                    item['work_location'] = each.xpath('./td[4]/a/text()').extract()[0].encode('utf-8')
|                    item['publish_time'] = each.xpath('./td[5]/a/text()').extract()[0].encode('utf-8')
|                    yield item                                              # hand each item of this loop to the pipeline
|                current_page = re.search('\d+', response.url).group()       # first number found in the current URL, i.e. the current offset
|                next_page = int(current_page) + 10                          # next page offset = current offset + 10
|                next_url = re.sub('\d+', str(next_page), response.url)      # replace the offset in the current URL to build the next page URL
|                yield scrapy.Request(next_url, callback=self.parse)         # queue the next page request and let parse() handle its response
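|
|            An alternative to rebuilding the next-page URL with re: Scrapy 1.4+ offers response.follow(), which resolves relative URLs automatically. A sketch only --- the "next page" XPath below is an assumption, not taken from the real page:
|                next_href = response.xpath('//a[@id="next"]/@href').extract_first()    # hypothetical selector for the "next page" link
|                if next_href:
|                    yield response.follow(next_href, callback=self.parse)              # relative or absolute URLs both work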
|
|  4.Write the item pipeline--- vim pipelines.py
| vim pipelines.py
| import json
|
| class TencentJsonPipeline(object):
| def __init__(self):
| self.filename = "tencent.json"
| def open_spider(self,spider):
| self.f = open(self.filename,"wb")
|            def process_item(self, item, spider):
|                content = json.dumps(dict(item), ensure_ascii=False) + ",\n"
|                self.f.write(content.encode('utf-8'))
|                return item
| def close_spider(self,spider):
| self.f.close()
|
|  5.Enable the pipeline component--- vim settings.py
| vim settings.py
| ITEM_PIPELINES={"tencent.pipelines.TencentJsonPipeline":300}
|
|  6.Run the spider--- scrapy crawl tt    # writes a tencent.json file in the current directory
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
4.scrapy-project: tencent (Tencent recruitment site, CrawlSpider version)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  1.Create the project ---- scrapy startproject tencent
| tencent/
| ├── scrapy.cfg
| └── tencent
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── tt.py
|  2.Define the target items --- vim items.py
| vim items.py
| import scrapy
|
| class TencentItem(scrapy.Item):
| name=scrapy.Field()
| detail_link = scrapy.Field()
| position_info = scrapy.Field()
| people_number = scrapy.Field()
| work_location = scrapy.Field()
| publish_time = scrapy.Field()
|
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider -t crawl tt "tencent.com"    # -t selects the CrawlSpider template
|    (2)Edit the spider--- vim tt.py
|        vim tt.py
|            import scrapy
|            from scrapy.spiders import CrawlSpider, Rule          # the CrawlSpider version imports CrawlSpider and Rule
|            from scrapy.linkextractors import LinkExtractor       # LinkExtractor is needed to extract the links
|            from tencent.items import TencentItem                 # plus the project's own Item class
|            class TtSpider(CrawlSpider):
|                name = "tt"
|                allowed_domains = ["tencent.com"]
|                start_urls = ["http://hr.tencent.com/position.php?&start=0#a"]
|                page_link = LinkExtractor(allow=('start=\d+'))    # LinkExtractor collects every link that matches "start=<number>"
|                rules = [
|                    Rule(page_link, callback='parse_tencent', follow=True)    # Rule sends each matched link to the request queue, parse_tencent() handles the response, follow=True keeps extracting links from the new pages
|                ]                                                             # several Rule() entries may be listed, each matching different links and using a different callback
|                def parse_tencent(self, response):
|                    for each in response.xpath('//tr[@class="even"]|//tr[@class="odd"]'):
|                        item = TencentItem()
|                        item['name'] = each.xpath('./td[1]/a/text()').extract()[0].encode('utf-8')
|                        item['detail_link'] = each.xpath('./td[1]/a/@href').extract()[0].encode('utf-8')
|                        item['position_info'] = each.xpath('./td[2]/a/text()').extract()[0].encode('utf-8')
|                        item['people_number'] = each.xpath('./td[3]/a/text()').extract()[0].encode('utf-8')
|                        item['work_location'] = each.xpath('./td[4]/a/text()').extract()[0].encode('utf-8')
|                        item['publish_time'] = each.xpath('./td[5]/a/text()').extract()[0].encode('utf-8')
|                        yield item
|            # With CrawlSpider there is no need to extract/build the next-page URL, send the new request or name a callback by hand: LinkExtractor() and Rule() together take care of extracting the URLs, sending the requests and following them.
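|
|            A quick way to see which links a LinkExtractor rule will actually pick up is the scrapy shell; a sketch:
|                scrapy shell "http://hr.tencent.com/position.php?&start=0#a"
|                >>> from scrapy.linkextractors import LinkExtractor
|                >>> LinkExtractor(allow=('start=\d+')).extract_links(response)    # prints the Link objects the Rule would follow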
|  4.Write the item pipeline--- vim pipelines.py
| vim pipelines.py
| import json
|
| class TencentJsonPipeline(object):
| def __init__(self):
| self.filename = 'tencent.json'
| def open_spider(self,spider):
| self.f = open(self.filename,"w")
| def process_item(self,item,spider):
| content = json.dumps(dict(item),ensure_ascii=False) + ',\n'
|                    self.f.write(content.encode('utf-8'))
|                    return item
|                def close_spider(self, spider):
|                    self.f.close()
|
| 5.启动上述pipeline--- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"tencent.pipelines.TencentJsonPipeline":300}
|
|  6.Run the spider--- scrapy crawl tt
|
|  # Note the differences between scrapy.Spider and CrawlSpider shown above!
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
5.scrapy-project: dongguan (Dongguan Sunshine Hotline (阳光问政), CrawlSpider version with multiple Rules)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  1.Create the project --- scrapy startproject dongguan
| dongguan/
| ├── scrapy.cfg
| └── dongguan
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── sun.py
|
|  2.Define the target items --- vim items.py
| vim items.py
| import scrapy
|
| class DongguanItem(scrapy.Item):
| title = scrapy.Field()
| content = scrapy.Field()
| url = scrapy.Field()
| number = scrapy.Field()
|
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider -t crawl sun "wz.sun0769.com"
|    (2)Edit the spider--- vim sun.py
| vim sun.py
| import scrapy
|            from scrapy.spiders import CrawlSpider, Rule
|            from scrapy.linkextractors import LinkExtractor
| from dongguan.items import DongguanItem
| class SunSpider(CrawlSpider):
| name = "sun"
| allowed_domains = ["wz.sun0769.com"]
| start_urls = ["http://wz.sun0769.com/index.php/question/questionType?type=4&page=0"]
|                rules = [    # note: no callback and no follow --- follow defaults to True (keep following); callback given but no follow --- follow defaults to False (do not follow)
|                    Rule(LinkExtractor(allow=r'type=4&page=\d+'), follow=True),                           # first Rule: match every listing page, keep following, no callback
|                    Rule(LinkExtractor(allow=r'/html/question/\d+/\d+.shtml'), callback='parse_item')     # second Rule: match every post page, handle the response with parse_item(), do not follow
| ]
| def parse_item(self,response):
| item=DongguanItem()
| item['title'] = response.xpath('//div[contains(@class,"pagecenter p3")]//strong/text()').extract()[0]
|                    item['number'] = item['title'].split(' ')[-1].split(':')[-1]    # pull the post number out of the title
| item['content'] = response.xpath('//div[@class="c1 text14_2"]/text()').extract()[0]
| item['url'] = response.url
| yield item
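|
|                The split-based extraction of the post number breaks if the title spacing varies; a regex is a bit more tolerant. A sketch, assuming the number is the last run of digits in the title:
|                    nums = re.findall(r'\d+', item['title'])    # requires "import re" at the top of the spider
|                    item['number'] = nums[-1] if nums else ''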
|
|  4.Write the item pipeline--- vim pipelines.py
| vim pipelines.py
| import json
| class DongguanJsonPipeline(object):
| def __init__(self):
| self.f = open("dongguan.json","w")
|            def process_item(self, item, spider):
|                text = json.dumps(dict(item), ensure_ascii=False) + ',\n'
|                self.f.write(text.encode('utf-8'))
|                return item
|            def close_spider(self, spider):
|                self.f.close()
|
|  5.Enable the pipeline component--- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"dongguan.pipelines.DongguanJsonPipeline":300}
|
|  6.Run the spider--- scrapy crawl sun
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
6.scrapy-project: dongguan (Dongguan Sunshine Hotline, CrawlSpider version with anti-crawler handling)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  1.Create the project --- scrapy startproject dongguan
| dongguan/
| ├── scrapy.cfg
| └── dongguan
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── new_dongguan.py
|
|  2.Define the target items --- vim items.py
| vim items.py
| import scrapy
|
| class DongguanItem(scrapy.Item):
| title = scrapy.Field()
| content = scrapy.Field()
| url = scrapy.Field()
| number = scrapy.Field()
|
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider -t crawl new_dongguan "wz.sun0769.com"
|    (2)Edit the spider--- vim new_dongguan.py
| vim new_dongguan.py
| import scrapy
|            from scrapy.spiders import CrawlSpider, Rule
|            from scrapy.linkextractors import LinkExtractor
| from dongguan.items import DongguanItem
| class New_dongguanSpider(CrawlSpider):
| name = "new_dongguan"
| allowed_domains = ["wz.sun0769.com"]
| start_urls = ["http://wz.sun0769.com/index.php/question/questionType?type=4&page=0"]
|                page_link = LinkExtractor(allow=("type=4"))                          # extract the listing page URLs
|                content_link = LinkExtractor(allow=r'/html/question/\d+/\d+.shtml')  # extract the post URLs
|                rules = [
|                    Rule(page_link, process_links='deal_links'),    # first Rule: match the listing pages and hand the extracted link list to deal_links() via process_links
|                    Rule(content_link, callback='parse_item')       # second Rule: match each post page and handle the response with parse_item() (callback given, no follow, so follow defaults to False)
|                ]
|                def deal_links(self, links):
|                    for each in links:
|                        each.url = each.url.replace("?", "&").replace("Type&", "Type?")
|                    return links    # rewrite every URL (the site scrambles them as an anti-crawler trick) and return the fixed link list
| def parse_item(self,response):
| item=DongguanItem()
| item['title'] = response.xpath('//div[contains(@class,"pagecenter p3")]//strong/text()').extract()[0]
|                    item['number'] = item['title'].split(' ')[-1].split(':')[-1]    # pull the post number out of the title
|                    # item['content'] = response.xpath('//div[@class="c1 text14_2"]/text()').extract()[0]  only works for posts without pictures; the optimized version below handles both cases
|                    content = response.xpath('//div[@class="contentext"]/text()').extract()    # text content when the post contains pictures
|                    if len(content) == 0:    # empty means the post has no pictures, so match the text with the other rule
|                        content = response.xpath('//div[@class="c1 text14_2"]/text()').extract()    # text content when the post has no pictures
|                        item['content'] = "".join(content).strip()    # join the text fragments and strip trailing whitespace
|                    else:
|                        item['content'] = "".join(content).strip()    # join the text fragments and strip trailing whitespace
| item['url'] = response.url
| yield item
|
|  4.Write the item pipeline--- vim pipelines.py
| vim pipelines.py
| import json
| import codecs
|        class New_dongguanJsonPipeline(object):
|            def __init__(self):
|                self.f = codecs.open("new_dongguan.json", "w", encoding="utf-8")
|            def process_item(self, item, spider):
|                text = json.dumps(dict(item), ensure_ascii=False) + ',\n'
|                self.f.write(text)
|                return item
|            def close_spider(self, spider):
| self.f.close()
|
|  5.Enable the pipeline component--- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"dongguan.pipelines.New_dongguanJsonPipeline":300}
|
|  6.Run the spider--- scrapy crawl new_dongguan
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
7.scrapy-project: dongguan (Dongguan Sunshine Hotline, CrawlSpider version rewritten as a plain Spider)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  1.Create the project --- scrapy startproject dongguan
| dongguan/
| ├── scrapy.cfg
| └── dongguan
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── xixi.py
|
|  2.Define the target items --- vim items.py
| vim items.py
| import scrapy
|
| class DongguanItem(scrapy.Item):
| title = scrapy.Field()
| content = scrapy.Field()
| url = scrapy.Field()
| number = scrapy.Field()
|
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider xixi "wz.sun0769.com"
|    (2)Edit the spider--- vim xixi.py
| vim xixi.py
| import scrapy
| from dongguan.items import DongguanItem
|
| class XixiSpider(scrapy.Spider):
| name = "xixi"
| allowed_domains = ["wz.sun0769.com"]
| url = "http://wz.sun0769.com/index.php/question/questionType?type=4&page="
| offset = 0
| start_urls = [url + str(offset)]
|
|            def parse(self, response):
|                tiezi_link_list = response.xpath('//div[@class="greyframe"]/table//td/a[@class="news14"]/@href').extract()
|                for tiezi_link in tiezi_link_list:    # extract every post link, queue a request for it via yield scrapy.Request(), and let the parse_item() callback handle the response
|                    yield scrapy.Request(tiezi_link, callback=self.parse_item)
|                if self.offset <= 71160:
|                    self.offset += 30    # advance the offset by 30 to build the next listing page URL, queue it, and let parse() handle its response
|                    yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
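|                    # Note (sketch): if the extracted post links were relative paths rather than absolute URLs,
|                    # response.urljoin() (Scrapy 1.0+) could resolve them against the current page before queueing, e.g.:
|                    #     yield scrapy.Request(response.urljoin(tiezi_link), callback=self.parse_item)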
|
| def parse_item(self,response):
| item=DongguanItem()
| item['title'] = response.xpath('//div[contains(@class,"pagecenter p3")]//strong/text()').extract()[0]
|                item['number'] = item['title'].split(' ')[-1].split(':')[-1]    # pull the post number out of the title
|                # item['content'] = response.xpath('//div[@class="c1 text14_2"]/text()').extract()[0]  only works for posts without pictures; the optimized version below handles both cases
|                content = response.xpath('//div[@class="contentext"]/text()').extract()    # text content when the post contains pictures
|                if len(content) == 0:    # empty means the post has no pictures, so match the text with the other rule
|                    content = response.xpath('//div[@class="c1 text14_2"]/text()').extract()    # text content when the post has no pictures
|                    item['content'] = "".join(content).strip()    # join the text fragments and strip trailing whitespace
|                else:
|                    item['content'] = "".join(content).strip()    # join the text fragments and strip trailing whitespace
| item['url'] = response.url
| yield item
|
|  4.Write the item pipeline--- vim pipelines.py
| vim pipelines.py
| import json
| import codecs
| class XixiJsonPipeline(object):
| def __init__(self):
| self.f = codecs.open("Xixi.json","w",encoding="utf-8")
|            def process_item(self, item, spider):
|                text = json.dumps(dict(item), ensure_ascii=False) + ',\n'
|                self.f.write(text)
|                return item
| def close_spider(self,spider):
| self.f.close()
|
|  5.Enable the pipeline component--- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"dongguan.pipelines.XixiJsonPipeline":300}
|
|  6.Run the spider--- scrapy crawl xixi
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
8.scrapy-project: renren (three ways to log in to renren.com with scrapy ---- use yield scrapy.FormRequest(url, formdata, callback) to send a POST request carrying the login data)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
|  Method 1: the clumsiest approach --- capture every cookie of an already logged-in session with Fiddler and POST them all; success rate is essentially 100%
|      yield scrapy.FormRequest(url, cookies=cookies_captured_with_Fiddler, callback=...)
|
|  Method 2: for sites that only need the POST form data, this is enough
|      yield scrapy.FormRequest(url, formdata=only_the_form_fields_to_post, callback=...)
|
|  Method 3: the canonical scrapy login --- first request the login page, pull the required parameters out of it (such as _xsrf), then POST them together with the username/password (the other related fields are carried over by default) to log in
|      yield scrapy.FormRequest.from_response(response, formdata=form_fields_plus_required_parameters, callback=...)
|
|  1.Create the project ---- scrapy startproject renren
| renren/
| ├── scrapy.cfg
| └── renren
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── renren1/renren2/renren3.py
|
|  2.Define the target items --- vim items.py    (skipped here: the spiders below save the data directly)
|
|  3.Write the spider
|    ****************************************************************************************************************
|    Method 1: the clumsiest approach --- capture every cookie of an already logged-in session with Fiddler and POST them all; success rate is essentially 100%
|        yield scrapy.FormRequest(url, cookies=cookies_captured_with_Fiddler, callback=...)
|    (1)Generate the spider--- scrapy genspider renren1 "renren.com"
|    (2)Edit the spider--- vim renren1.py
| vim renren1.py
| import scrapy
|
| class Renren1Spider(scrapy.Spider):
| name = "renren1"
| allowed_domains = ["renren.com"]
|                access_urls = (    # note: these are not real start_urls, but friends' profile pages that can only be visited after a successful login!!
| "http://www.renren.com/54323456/profile",
| "http://www.renren.com/54334456/profile",
| "http://www.renren.com/54366456/profile"
| )
|                cookies = {    # all the cookies of a successful login captured with Fiddler; copy them here and send them along with the requests
| "anonymid" : "ixrna3fysufnwv",
| "_r01_" : "1",
| "ap" : "327550029",
| "JSESSIONID" : "abciwg61A_RvtaRS3GjOv",
| "depovince" : "GW",
| "springskin" : "set",
| "jebe_key" : "f6fb270b-d06d-42e6-8b53-e67c3156aa7e%7Cc13c37f53bca9e1e7132d4b58ce00fa3%7C1484060607478%7C1%7C1486198628950",
| "t" : "691808127750a83d33704a565d8340ae9",
| "societyguester" : "691808127750a83d33704a565d8340ae9",
| "id" : "327550029",
| "xnsid" : "f42b25cf",
| "loginfrom" : "syshome"
| }
|
|                def start_requests(self):    # to send the cookie-carrying requests as soon as the spider starts, override start_requests(); start_urls is then no longer used
|                    for url in self.access_urls:    # visit the profile pages that require a login, attaching the captured cookies; parse_page() handles the responses
|                        yield scrapy.FormRequest(url, cookies=self.cookies, callback=self.parse_page)
|
| def parse_page(self,response):
| print "======" + str(response.url) + "======"
| with open("renren1.html","w") as f:
| f.write(response.body)
|
| ****************************************************************************************************************
|    Method 2: for sites that only need the POST form data, this is enough
|        yield scrapy.FormRequest(url, formdata=only_the_form_fields_to_post, callback=...)
|    (1)Generate the spider--- scrapy genspider renren2 "renren.com"
|    (2)Edit the spider--- vim renren2.py
| vim renren2.py
| import scrapy
|
| class Renren2Spider(scrapy.Spider):
| name = "renren2"
| allowed_domains = ["renren.com"]
|
|                def start_requests(self):    # override start_requests() so the POST is sent as soon as the spider starts; start_urls is no longer used
|                    url = "http://www.renren.com/PLogin.do"    # nothing else is needed here apart from the form fields to POST (username and password)
|                    yield scrapy.FormRequest(url=url, formdata={"email": "[email protected]", "password": "alarmachine"}, callback=self.parse_page)
|
| def parse_page(self,response):
| with open("renren2.html","w") as f:
| f.write(response.body)
|
| ****************************************************************************************************************
|    Method 3: the canonical scrapy login --- first request the login page, pull the required parameters out of it (such as _xsrf), then POST them together with the username/password (the other related fields are carried over by default) to log in
|        yield scrapy.FormRequest.from_response(response, formdata={form_fields_plus_required_parameters}, callback=...)
|
|    (1)Generate the spider--- scrapy genspider renren3 "renren.com"
|    (2)Edit the spider--- vim renren3.py
|        vim renren3.py
| import scrapy
|
| class Renren3Spider(scrapy.Spider):
| name = "renren3"
| allowed_domains=["renren.com"]
| start_urls = ["http://www.renren.com/PLogin.do"]
|
|                def parse(self, response):
|                    _xsrf = response.xpath('//div[@class="...."].....')    # pull the required parameters (e.g. _xsrf) out of the response; response here is "http://www.renren.com/PLogin.do"
|                    yield scrapy.FormRequest.from_response(response, formdata={"email": "[email protected]", "password": "123456", "_xsrf": _xsrf, .....}, callback=self.parse_page)
|                    # the start_url fetches the login page first; here the username/password/required parameters are posted together with the login page's own form data, and parse_page() handles the page returned after a successful login
|
| def parse_page(self,response):
| print "===== 1 =====" + str(response.url)
|                    url = "http://www.renren.com/4234553/profile"    # now that we are logged in, visit a friend's profile page with the session attached; parse_new_page() handles the response
| yield scrapy.Request(url,callback=self.parse_new_page)
|
| def parse_new_page(self,response):
| print "===== 2 =====" + str(response.url)
| with open("renren3.html","w") as f:
| f.write(response.body)
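|
|    None of the three methods checks that the login actually worked before carrying on. A minimal sketch of such a check inside the callback --- the marker string is hypothetical; pick something that only appears on pages of a logged-in session:
|        def parse_page(self, response):
|            marker = "hypothetical-logged-in-marker"    # hypothetical: e.g. the account's display name
|            if marker in response.body:
|                print "login ok: " + str(response.url)
|            else:
|                print "login may have failed: " + str(response.url)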
|
|
|  4.Write the item pipeline (skipped)
|  5.Enable the pipeline component (skipped)
|  6.Run the spider--- scrapy crawl renren1/renren2/renren3    # produces renren1.html / renren2.html / renren3.html in the current directory
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
9.scrapy-project: zhihu (logging in to zhihu.com with scrapy ---- CrawlSpider plus the canonical login method (yield scrapy.FormRequest.from_response(response, formdata, callback) to send a POST request carrying the login data))
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
|  1.Create the project --- scrapy startproject zhihu
| zhihu/
| ├── scrapy.cfg
| └── zhihu
| ├── __init__.py
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| └── spiders
| ├── __init__.py
| └── zh.py
|
|  2.Define the target items --- vim items.py
| vim items.py
| import scrapy
|
| class ZhihuItem(scrapy.Item):
| url = scrapy.Field()
| title = scrapy.Field()
| description = scrapy.Field()
| answer = scrapy.Field()
| name = scrapy.Field()
|
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider -t crawl zh "www.zhihu.com"
|    (2)Edit the spider--- vim zh.py
|        vim zh.py
|            import scrapy
|            from scrapy.selector import Selector
|            from scrapy.spiders import CrawlSpider, Rule
|            from scrapy.linkextractors import LinkExtractor
|            from zhihu.items import ZhihuItem
|
| class ZhSpider(CrawlSpider):
| name = "zh"
| allowed_domains = ["www.zhihu.com"]
| start_urls = ["http://www.zhihu.com"]
| rules = [ Rule(LinkExtractor(allow=('/question/\d+#.*?',)),callback='parse_page',follow=True),
| Rule(LinkExtractor(allow=('/question/\d+',)),callback='parse_page',follow=True),
| ]
| headers = {
| "Accept": "*/*",
| "Accept-Encoding": "gzip,deflate",
| "Accept-Language": "en-US,en;q=0.8,zh-TW;q=0.6,zh;q=0.4",
| "Connection": "keep-alive",
| "Content-Type":" application/x-www-form-urlencoded; charset=UTF-8",
| "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36",
| "Referer": "http://www.zhihu.com/"
| }
|                def start_requests(self):    # override start_requests() so the first request carries the cookiejar meta; post_login() handles the response
|                    return [scrapy.Request("http://www.zhihu.com/login", meta={"cookiejar": 1}, callback=self.post_login)]
|
| def post_login(self,response):
| print "-------preparing login---------"
| xsrf = Selector(response).xpath('//input[@name="_xsrf"]/@value').extract()[0]
|                    return [ scrapy.FormRequest.from_response( response,    # response here is the page at "http://www.zhihu.com/login"
|                                  meta = {'cookiejar': response.meta['cookiejar']},
|                                  headers = self.headers,    # note the custom headers
|                                  formdata = {
|                                      '_xsrf': xsrf,
|                                      'email': '[email protected]',    # fill in the account / password / required parameters to post
|                                      'password': '123456'
|                                  },
|                                  callback = self.after_login,    # the page returned after a successful login is handled by after_login()
|                                  dont_filter = True
|                             ) ]
| def after_login(self,response):
| for url in self.start_urls:
| yield self.make_requests_from_url(url)
|                # after logging in, re-request the start URLs to fetch the zhihu front page; the Rules then extract the question URLs, whose responses are handled by parse_page()
| def parse_page(self,response):
| problem = Selector(response)
| item = ZhihuItem()
| item['url'] = response.url
| item['title'] = problem.xpath('//h2[@class="zm-item-title zm-editable-content"]/text()').extract()
| item['description'] = problem.xpath('//div[@class="zm-editable-content"]/text()').extract()
| item['answer'] = problem.xpath('//div[@class="zm-editable-content clearfix"]/text()').extract()
| item['name'] = problem.xpath('//span[@class="name"]/text()').extract()
| yield item
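|
|            Note on FormRequest.from_response(): it locates the first <form> in the given response and pre-fills the request with that form's existing <input> values (hidden fields included), so parameters like _xsrf are usually picked up automatically and formdata only needs to override the fields a user would type. A reduced sketch of the call used above:
|                yield scrapy.FormRequest.from_response(
|                    response,
|                    formdata={"email": "[email protected]", "password": "123456"},    # hidden fields such as _xsrf are copied from the form itself
|                    callback=self.after_login
|                )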
|
|  4.Write the item pipeline--- vim pipelines.py
| vim pipelines.py
| import json
| import codecs
|
| class ZhihuJsonPipeline(object):
| def __init__(self):
|                self.f = codecs.open("zhihu.json", "w", encoding='utf-8')
| def process_item(self,item,spider):
| text = json.dumps(dict(item),ensure_ascii=False) + ",\n"
| self.f.write(text)
| return item
| def close_spider(self,spider):
| self.f.close()
|
|  5.Enable the pipeline component--- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"zhihu.pipelines.ZhihuJsonPipeline":300}
| DOWNLOAD_DELAY = 0.25
|
|  6.Run the spider--- scrapy crawl zh    # on success a zhihu.json file is written in the working directory
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
10.scrapy-project: douban (scraping the douban movie top250 into MongoDB ---- scrapy.Spider)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
|  1.Create the project --- scrapy startproject douban
| douban/
| ├── scrapy.cfg
| └── douban
| ├── __init__.py
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| └── spiders
| ├── __init__.py
| └── db.py
|
|  2.Define the target items --- vim items.py
| vim items.py
| import scrapy
|
| class DoubanItem(scrapy.Item):
| title = scrapy.Field()
| info = scrapy.Field()
| stars = scrapy.Field()
| introduce = scrapy.Field()
|
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider db "movie.douban.com"
|    (2)Edit the spider--- vim db.py
| vim db.py
| import scrapy
| from douban.items import DoubanItem
|
|            class DbSpider(scrapy.Spider):
|                name = 'db'
|                allowed_domains = ["movie.douban.com"]
|                url = "http://movie.douban.com/top250?start="
|                offset = 0
|                start_urls = [url + str(offset)]
|
|                def parse(self, response):
|                    movies = response.xpath('//div[@class="info"]')
|                    for movie in movies:
|                        item = DoubanItem()    # create a fresh item per movie instead of reusing one instance
|                        item['title'] = movie.xpath('.//span[@class="title"][1]/text()').extract()[0]
|                        item['info'] = movie.xpath('.//div[@class="bd"]/p/text()').extract()[0]
|                        item['stars'] = movie.xpath('.//div[@class="star"]/span[@class="rating_num"]/text()').extract()[0]
|                        introduce = movie.xpath('.//p[@class="quote"]/span/text()').extract()
|                        if len(introduce) != 0:
|                            item['introduce'] = introduce[0]    # entries without a quote are skipped
|                        yield item
|                    if self.offset < 225:
|                        self.offset += 25
|                        yield scrapy.Request(self.url + str(self.offset), callback=self.parse)    # build the next page URL, queue it, and let parse() handle its response
|
|  4.Write the item pipeline--- vim pipelines.py
| vim pipelines.py
| import pymongo
|            from scrapy.conf import settings    # import the settings module (deprecated in newer Scrapy; see the from_crawler() sketch below)
|
|            class DoubanMongoPipeline(object):
|                def __init__(self):
|                    mongo_host = settings['MONGODB_HOST']          # MongoDB host, read from settings
|                    mongo_port = settings['MONGODB_PORT']          # MongoDB port, read from settings
|                    db_name = settings['MONGODB_NAME']             # database name, read from settings
|                    sheet_name = settings['MONGODB_SHEETNAME']     # collection name, read from settings
|                    mongocli = pymongo.MongoClient(host=mongo_host, port=mongo_port)    # create the MongoDB client
|                    mydb = mongocli[db_name]                       # select the database
|                    self.sheet = mydb[sheet_name]                  # select the collection inside that database
|
|                def process_item(self, item, spider):
|                    self.sheet.insert(dict(item))                  # insert the item into the collection as a dict
| return item
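|
|            scrapy.conf is deprecated in later Scrapy releases; the supported way for a pipeline to read settings is the from_crawler() classmethod. A sketch of the same pipeline written that way (same hand-written setting names as below):
|                import pymongo
|
|                class DoubanMongoPipeline(object):
|                    @classmethod
|                    def from_crawler(cls, crawler):
|                        return cls(crawler.settings)    # Scrapy calls this with the running crawler; hand its settings to __init__
|
|                    def __init__(self, settings):
|                        client = pymongo.MongoClient(host=settings['MONGODB_HOST'], port=settings['MONGODB_PORT'])
|                        self.sheet = client[settings['MONGODB_NAME']][settings['MONGODB_SHEETNAME']]
|
|                    def process_item(self, item, spider):
|                        self.sheet.insert(dict(item))
|                        return item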
|
|  5.Enable the pipeline component and set the MongoDB parameters--- vim settings.py
|    vim settings.py
|        ITEM_PIPELINES = {"douban.pipelines.DoubanMongoPipeline": 300}
|        MONGODB_HOST = "127.0.0.1"
|        MONGODB_PORT = 27017                   # hand-written MongoDB host / port / database / collection (the port must be an int for pymongo)
|        MONGODB_NAME = "Douban"
|        MONGODB_SHEETNAME = "doubanmovies"
|        USER_AGENT = "Mozilla/5.0...."         # set a default User-Agent
|        ROBOTSTXT_OBEY = False                 # do not obey robots.txt
|
|  6.Run the spider--- scrapy crawl db
|
|  7.Appendix: common MongoDB commands
|    mongod                  # start mongodb
|    mongo                   # start the client and connect to mongodb
|    db                      # show the current database
|    show dbs                # list all databases
|    use xxx                 # switch to database xxx
|    show collections        # list the collections of the current database
|    db.yyy.find()           # show the documents in collection yyy
|    db.dropDatabase()       # drop the current database
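|
|    To spot-check the crawl result from the mongo shell (database/collection names as configured above):
|    use Douban
|    db.doubanmovies.count()             # number of inserted movies
|    db.doubanmovies.find().limit(3)     # look at a few documents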
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
11.scrapy-project: douban (scraping the douban movie top250 into MongoDB, with custom proxy and User-Agent middlewares enabled ---- scrapy.Spider)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  1.Create the project --- scrapy startproject douban    (same as above)
|  2.Define the target items --- vim items.py             (same as above)
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider db "movie.douban.com"    (same as above)
|    (2)Edit the spider--- vim db.py                                     (same as above)
|  4.Write the item pipeline--- vim pipelines.py                         (same as above)
|
|  5.Write the middlewares--- vim middlewares.py
|    vim middlewares.py
|        import random                                           # random choice and base64 encoding are needed
|        import base64
|        from douban.settings import USER_AGENT_LIST, PROXIES    # pull the User-Agent list and PROXIES in from settings.py
|
|        class Random_User_Agent(object):                        # a middleware that picks a random User-Agent
|            def process_request(self, request, spider):         # a downloader middleware must implement process_request() to be able to modify outgoing requests
|                user_agent = random.choice(USER_AGENT_LIST)     # pick one User-Agent at random from the list
|                request.headers.setdefault("User-Agent", user_agent)    # make it the request's default User-Agent header
|
|        class Random_Proxy(object):                             # a middleware that picks a random proxy
|            def process_request(self, request, spider):         # again, process_request() is where the request can be modified
|                proxy = random.choice(PROXIES)                  # pick one proxy at random from the PROXIES list
|                if not proxy['user_passwd']:                    # no username/password means a public proxy: just set request.meta['proxy']
|                    request.meta['proxy'] = "http://" + proxy['ip_port']
|                else:                                           # a username/password means a private proxy: base64-encode the credentials, send them as Basic auth, then set request.meta['proxy']
|                    base64ed_passwd = base64.b64encode(proxy['user_passwd'])               # base64-encode the "user:password" string
|                    request.headers['Proxy-Authorization'] = 'Basic ' + base64ed_passwd    # Basic authentication header for the proxy (note the space after 'Basic')
|                    request.meta['proxy'] = "http://" + proxy['ip_port']                   # finally point the request at the proxy
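|
|        For reference, the Proxy-Authorization header a private proxy expects is "Basic " followed by base64("user:password") --- note the space after "Basic". A standalone Python 2 sketch:
|            import base64
|            user_passwd = "mr_mao_hacker:sffqry9r"              # a "user:password" entry from the PROXIES list
|            auth = 'Basic ' + base64.b64encode(user_passwd)     # in Python 3 the string must be encoded to bytes first
|            # request.headers['Proxy-Authorization'] = auth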
|
|  6.Enable the pipeline component and the middlewares, set the MongoDB and middleware parameters--- vim settings.py
|    vim settings.py
|        #-----this part enables the pipeline and sets the MongoDB parameters (same as above)-----------------------------------------------------------------
|        ITEM_PIPELINES = {"douban.pipelines.DoubanMongoPipeline": 300}
|        MONGODB_HOST = "127.0.0.1"
|        MONGODB_PORT = 27017
|        MONGODB_NAME = "Douban"
|        MONGODB_SHEETNAME = "doubanmovies"
|        USER_AGENT = "Mozilla/5.0...."        # default User-Agent
|        ROBOTSTXT_OBEY = False                # do not obey robots.txt
|        #-----this part enables the middlewares and sets their parameters------------------------------------------------------------------
|        DOWNLOADER_MIDDLEWARES = { "douban.middlewares.Random_User_Agent": 100,
|                                   "douban.middlewares.Random_Proxy": 200,    # register the two downloader middlewares; Random_User_Agent has the lower number, i.e. higher priority, so the User-Agent is set first and the proxy is applied afterwards (priorities follow the request flow)
|                                 }
|
|        USER_AGENT_LIST = [ "Mozilla/5.0........",    # hand-written list of User-Agents to choose from (named separately so it does not clash with the USER_AGENT default above)
|                            "Mozilla/5.0........",
|                            "Mozilla/5.0........"
|                          ]
|
|        PROXIES = [ {"ip_port": "200.200.200.201:8080", "user_passwd": "mr_mao_hacker:sffqry9r"},
|                    {"ip_port": "200.200.200.202:8008", "user_passwd": ""},    # hand-written list of proxies (private and public) --- public proxies have an empty "user_passwd"
|                    {"ip_port": "200.200.200.203:8088", "user_passwd": ""},
|                  ]
|        DOWNLOAD_DELAY = 3          # wait 3 seconds between downloads
|        COOKIES_ENABLED = False     # unless cookies are really needed, disable them so the site cannot use them to block the crawler
|
|
|  7.Run the spider--- scrapy crawl db    (same as above)
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
12.scrapy-project: sina (crawling the whole sina news category site with scrapy ---- plain scrapy.Spider)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
|  1.Create the project --- scrapy startproject sina
| sina/
| ├── scrapy.cfg
| └── sina
| ├── __init__.py
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| └── spiders
| ├── __init__.py
| └── xinlang.py
|
|  2.Define the target items --- vim items.py
|    vim items.py
|        import scrapy
|
|        class SinaItem(scrapy.Item):
|            parent_title = scrapy.Field()       # title of the top-level category
|            parent_url = scrapy.Field()         # URL of the top-level category
|            sub_title = scrapy.Field()          # title of the sub-category
|            sub_url = scrapy.Field()            # URL of the sub-category
|            sub_filename = scrapy.Field()       # directory where the sub-category is stored
|            son_url = scrapy.Field()            # article links inside the sub-category
|            article_title = scrapy.Field()      # article title
|            article_content = scrapy.Field()    # article content
|
|  3.Write the spider
|    (1)Generate the spider--- scrapy genspider xinlang "sina.com.cn"
|    (2)Edit the spider--- vim xinlang.py
| vim xinlang.py
| import scrapy
| import os
| from sina.items import SinaItem
| import sys
|
| reload(sys)
| sys.setdefaultencoding('utf-8')
|
| class XinlangSpider(scrapy.Spider):
| name = 'xinlang'
| allowed_domains = ['sina.com.cn']
| start_urls = ['http://news.sina.com.cn/guide/']
|
|            def parse(self, response):
|                items = []
|                parent_title_list = response.xpath('//div[@id=\"tab01\"]/div/h3/a/text()').extract()
|                parent_url_list = response.xpath('//div[@id=\"tab01\"]/div/h3/a/@href').extract()
|                sub_title_list = response.xpath('//div[@id=\"tab01\"]/div/ul/li/a/text()').extract()
|                sub_url_list = response.xpath('//div[@id=\"tab01\"]/div/ul/li/a/@href').extract()
|
|                for i in range(0, len(parent_title_list)):          # create the storage directory for every top-level category (only if it does not exist yet)
|                    parent_filename = "./Data/" + parent_title_list[i]
|                    if (not os.path.exists(parent_filename)):
|                        os.makedirs(parent_filename)
|
|                    for j in range(0, len(sub_url_list)):           # nested loop over the sub-categories: instantiate SinaItem() and store the top-level title and URL
|                        item = SinaItem()
|                        item['parent_title'] = parent_title_list[i]
|                        item['parent_url'] = parent_url_list[i]
|                        if_belong = sub_url_list[j].startswith(item['parent_url'])    # does the sub-category URL start with the top-level URL, i.e. does it belong to this category?
|                        if (if_belong):                             # if it belongs, create the sub-category directory unless it already exists
|                            sub_filename = parent_filename + "/" + sub_title_list[j]
|                            if (not os.path.exists(sub_filename)):
|                                os.makedirs(sub_filename)
|                            item['sub_title'] = sub_title_list[j]   # store the sub-category title / URL / directory and append the partially filled item to items
|                            item['sub_url'] = sub_url_list[j]
|                            item['sub_filename'] = sub_filename
|                            items.append(item)
|                for item in items:    # take each sub-category URL, attach the item via meta, queue the request and let second_parse() handle the response
|                    yield scrapy.Request(url=item['sub_url'], meta={'meta_1': item}, callback=self.second_parse)
|
|            def second_parse(self, response):
|                meta_1 = response.meta['meta_1']    # meta_1 is the item passed along from parse()
|                son_url_list = response.xpath('//a/@href').extract()    # all article links found on the sub-category page
|                items = []
|                for i in range(0, len(son_url_list)):    # keep only the links that are articles (end with .shtml) and belong to this top-level category (start with the parent URL)
|                    if_belong = son_url_list[i].endswith('.shtml') and son_url_list[i].startswith(meta_1['parent_url'])
|                    if (if_belong):
|                        item = SinaItem()
|                        item['parent_title'] = meta_1['parent_title']
|                        item['parent_url'] = meta_1['parent_url']
|                        item['sub_title'] = meta_1['sub_title']
|                        item['sub_url'] = meta_1['sub_url']
|                        item['sub_filename'] = meta_1['sub_filename']
|                        item['son_url'] = son_url_list[i]
|                        items.append(item)
|                for item in items:    # take each article URL, attach the (now more complete) item via meta, queue the request and let third_parse() handle the response
|                    yield scrapy.Request(url=item['son_url'], meta={'meta_2': item}, callback=self.third_parse)
|
|            def third_parse(self, response):
|                item = response.meta['meta_2']    # item is the partially filled item passed along from second_parse()
|                article_content = ""              # extract the article title and content from the article page and store them in the item
|                article_title_list = response.xpath('//h1[@id=\"main_title\"]/text()').extract()
|                article_content_list = response.xpath('//div[@id=\"artibody\"]/p/text()').extract()
|                for content_part in article_content_list:
|                    article_content += content_part    # concatenate the paragraphs into the full article text
|                item['article_title'] = article_title_list[0]
|                item['article_content'] = article_content
|                yield item    # hand the now complete item to the pipeline
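|
|            Passing the partially filled item along in Request.meta, as above, is the usual way to carry data between callbacks; Scrapy 1.7+ also offers cb_kwargs, which delivers the value as a plain keyword argument. A sketch:
|                yield scrapy.Request(url=item['sub_url'], cb_kwargs={'item': item}, callback=self.second_parse)
|
|                def second_parse(self, response, item):    # item arrives as a normal argument instead of via response.meta
|                    ...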
|
|  4.Write the item pipeline--- vim pipelines.py
| vim pipelines.py
| import sys
| reload(sys)
| sys.setdefaultencoding('utf-8')
|
|        class SinaSavePipeline(object):
|            def process_item(self, item, spider):
|                son_url = item['son_url']
|                filename = son_url[7:-6].replace('/', '_')    # build the file name from the article URL (strip "http://" and the trailing ".shtml", replace '/' with '_') and add a '.txt' suffix
|                filename = filename + ".txt"
|
|                f = open(item['sub_filename'] + '/' + filename, 'w')    # save the article content under its sub-category directory
|                f.write(item['article_content'])
|                f.close()
|                return item
|
|  5.Enable the pipeline component--- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"sina.pipelines.SinaSavePipeline":300}
|
|  6.Run the spider--- scrapy crawl xinlang
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------