Python Web Crawler Notes [2]

Full source code: https://github.com/tony5225/wuba

Goal: crawl the item information under every category of the Ganji second-hand goods page (http://bj.ganji.com/wu/) and store it in MongoDB.

Basic approach:

  • The task naturally breaks down into three steps.
  • First, crawl all the category links from the start page.
  • Next, go into each category and collect the links of every item listed under it.
  • Finally, visit each item page, scrape the target information, and store it in the database.

Getting the category links:

  • To avoid triggering anti-scraping measures, requests go through proxies, with a proxy picked at random before each request.
# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests
import random

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.82 Safari/537.36',
           'Connection': 'keep-alive'}
# Public proxies go stale quickly; replace these with working ones before running.
proxy_list = ['http://190.147.220.37:8080', 'http://194.79.146.210:8080',
              'http://213.6.3.35:8080', 'http://223.27.170.219:10000',
              'http://31.208.7.22:8888', 'http://136.243.122.90:3128']

start_url = 'http://bj.ganji.com/wu/'
host_url = 'http://bj.ganji.com'
urls = []

def get_links(url):
    # Pick a fresh proxy at random for each request to reduce the chance of being blocked.
    proxies = {'http': random.choice(proxy_list)}
    wb_data = requests.get(url, headers=headers, proxies=proxies)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    # The category links live in the <dt>/<dd> entries of the category menu.
    links = soup.select('dt > a') + soup.select('dd > a')
    for link in links:
        page_url = host_url + link.get('href')
        print(page_url)
        urls.append(page_url)
    return urls  # return after collecting every link, not inside the loop

linkurl = get_links(start_url)
  

Getting the item links and item information

Since this part is very similar to step (1), I won't repeat it here; the full code is at:
https://github.com/tony5225/wuba/blob/master/get_parsing.py
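
For reference, here is a minimal sketch of what get_parsing.py roughly contains, reconstructed from the names the main script imports below (get_links_from, get_item_info, url_list, item_info) and the collection names url_listganji / item_infoganji queried at the end. The CSS selectors, the o{page} URL pattern, and the stored fields are assumptions of mine, so see the repo for the real implementation.

# -*- coding: utf-8 -*-
# Sketch of get_parsing.py -- selectors, URL pattern and stored fields are assumptions.
from bs4 import BeautifulSoup
import requests
import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
ganji = client['ganji']
url_list = ganji['url_listganji']      # stage 1: item links collected per category
item_info = ganji['item_infoganji']    # stage 2: parsed item details

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64)'}

def get_links_from(channel, page):
    # e.g. http://bj.ganji.com/jiaju/o3/ would be page 3 of a category (assumed pattern)
    list_url = '{}o{}/'.format(channel, page)
    wb_data = requests.get(list_url, headers=headers)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    for link in soup.select('td.t a.t'):          # selector is illustrative
        item_url = link.get('href').split('?')[0]
        url_list.insert_one({'url': item_url})

def get_item_info(url):
    wb_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    prices = soup.select('i.f22.fc-orange')       # selector is illustrative
    item_info.insert_one({
        'url': url,
        'title': soup.title.text.strip() if soup.title else None,
        'price': prices[0].text.strip() if prices else None,
    })

With these two functions and the two collections in place, the main script only has to decide which URLs still need to be scraped.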

Main function

  • To speed things up, the scraping runs in multiple processes.
  • To keep duplicate records out of the database, we take the set difference of the URLs, which also makes it easy to resume crawling after an interruption.
# -*- coding: utf-8 -*-
from multiprocessing import Pool
from get_parsing import get_item_info, get_links_from, url_list, item_info

# URLs already collected (stage 1) minus URLs whose details are already stored (stage 2):
# the set difference lets the crawl resume where it left off after an interruption.
db_urls = [item['url'] for item in url_list.find()]
index_urls = [item['url'] for item in item_info.find()]
x = set(db_urls)
y = set(index_urls)
rest_of_urls = x - y

if __name__ == '__main__':
    pool = Pool()   # one worker process per CPU core by default
    pool.map(get_item_info, rest_of_urls)
    pool.close()
    pool.join()
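
Note that the script above only runs the second stage (get_item_info on the URLs that are not yet in item_info). The first stage, which fills url_list, is a similar pool.map over the category links; a rough sketch, assuming the get_links_from(channel, page) signature from the sketch above and an arbitrary page range:

# -*- coding: utf-8 -*-
# Stage-1 sketch: populate url_list before running the stage-2 script above.
from multiprocessing import Pool
from get_parsing import get_links_from

# In practice this would be the category links returned by get_links(start_url);
# one hard-coded category is shown here as an example.
channel_list = ['http://bj.ganji.com/jiaju/']

def get_all_links_from(channel):
    for page in range(1, 101):   # 100 list pages per category is an arbitrary cap
        get_links_from(channel, page)

if __name__ == '__main__':
    pool = Pool()
    pool.map(get_all_links_from, channel_list)
    pool.close()
    pool.join()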

Counting the records stored in MongoDB:

# -*- coding: utf-8 -*-
import pymongo

conn = pymongo.MongoClient(host='127.0.0.1', port=27017)
db = conn.ganji
print(db.collection_names())      # list the collections in the ganji database
print(db.url_listganji.count())   # number of item links collected
print(db.item_infoganji.count())  # number of item detail records scraped
