China Land Market Website (landChina) Crawler (Proxied, Distributed Version)

main01

This program uses Selenium to drive headless Chrome (via ChromeDriver), fetches the urls of transaction records from http://www.landchina.com/default.aspx?tabid=263&ComName=default, and stores them in Redis.

The transaction records can be filtered by date and by region:

def llq_main(start, end):
    print(start, end)
    time.sleep(2)
    # fill in the date range
    driver.find_element_by_id('TAB_queryDateItem_270_1').clear()
    driver.find_element_by_id('TAB_queryDateItem_270_1').send_keys(start)
    driver.find_element_by_id('TAB_queryDateItem_270_2').clear()
    driver.find_element_by_id('TAB_queryDateItem_270_2').send_keys(end)
    # select the administrative region
    driver.find_element_by_id('TAB_QueryConditionItem256').click()
    driver.execute_script("document.getElementById('TAB_queryTblEnumItem_256_v').setAttribute('type', 'text');")
    driver.find_element_by_id('TAB_queryTblEnumItem_256_v').clear()
    driver.find_element_by_id('TAB_queryTblEnumItem_256_v').send_keys('3205')  # e.g. 3701 is Jinan; 37 is Shandong
    driver.find_element_by_id('TAB_QueryButtonControl').click()  # submit the query
    page_zh(i, l)

if __name__ == '__main__':
    llq_main('2005-01-01', '2006-7-30')

Here, for example, 3205 is the region code, and the two dates on the last line are the start and end of the query window.
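To cover a longer period, the entry point can simply be called once per window. A minimal sketch follows; the windows are arbitrary examples, and since llq_main toggles the region checkbox each time it runs, back-to-back queries in a single browser session may need extra care:

if __name__ == '__main__':
    # example windows only -- adjust to the period you actually want to cover
    windows = [('2005-01-01', '2005-12-31'),
               ('2006-01-01', '2006-12-31')]
    for start, end in windows:
        llq_main(start, end)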

Setting up the proxy is simple: I just reused an SSR account I had bought earlier, whose local client listens on port 1080 of localhost:

proxy = '127.0.0.1:1080'
proxies = {
    'http': 'socks5://' + proxy,
    'https': 'socks5://' + proxy
}
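Since requests will later go through this same SOCKS5 port, it is worth a quick check that the proxy actually forwards traffic before crawling. Here is a minimal sketch reusing the proxies dict above (socks5:// support in requests needs PySocks, e.g. pip install requests[socks]; httpbin.org is used only as a throwaway echo service):

import requests

resp = requests.get('http://httpbin.org/ip', proxies=proxies, timeout=10)
print(resp.text)  # the reported origin should be the proxy's exit ip, not your own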

For Selenium to go through the proxy, the following option is required:

options.add_argument('--proxy-server=http://' + proxy)
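If the local client on that port only speaks SOCKS5 rather than HTTP, Chrome can also be pointed at it with the socks5:// scheme instead (adjust to whatever your local proxy actually exposes):

options.add_argument('--proxy-server=socks5://' + proxy)  # SOCKS5 scheme instead of HTTP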

The complete code is as follows:

# coding=utf-8
import time
import re
import redis
from bs4 import BeautifulSoup
from selenium import webdriver

proxy = '127.0.0.1:1080'
proxies = {
    'http': 'socks5://' + proxy,
    'https': 'socks5://' + proxy
}

r = redis.Redis(host='127.0.0.1', port=6379, db=0)  # host: fill in your own redis server address
options = webdriver.ChromeOptions()
options.add_argument('--proxy-server=http://' + proxy)
options.set_headless()
driver = webdriver.Chrome(chrome_options=options)  # start the headless Chrome browser
driver.get('http://www.landchina.com/default.aspx?tabid=263&ComName=default')  # open the query page
i = 1
l = 0
date_list = []
time.sleep(8)
driver.find_element_by_id('TAB_QueryConditionItem270').click()



def page_zh(i, l):
    # get the total number of pages for this time window
    zys = driver.find_elements_by_css_selector(".pager")
    if zys != []:
        pager_text = zys[1].text
        reg = re.findall(r'\d+', pager_text)
        pages = int(reg[0])
        print("Total pages: " + reg[0])
        tds = driver.find_elements_by_css_selector(".pager>input")
        # clear the page-number input and jump to page i
        tds[0].clear()
        tds[0].send_keys(i)
        print("Page " + tds[0].get_attribute("value"))
        tds[1].click()
    else:
        pages = 1

    time.sleep(4)
    # grab the inner html of the results table
    html = driver.find_element_by_id('TAB_contentTable').get_attribute('innerHTML')
    soup = BeautifulSoup(html, 'lxml')  # parse the html
    href_ = soup.select('.queryCellBordy a')
    for line in href_:
        link = "http://www.landchina.com/" + line['href']
        print(link)
        # push the detail-page url into redis
        r.sadd('mylist', link)

    if i < pages:
        i = i + 1
        page_zh(i, l)
    else:
        print("Crawl for this window finished!")


# close the browser (selenium)
# driver.quit()

def llq_main(start, end):
    print(start, end)
    time.sleep(2)
    # fill in the date range
    driver.find_element_by_id('TAB_queryDateItem_270_1').clear()
    driver.find_element_by_id('TAB_queryDateItem_270_1').send_keys(start)
    driver.find_element_by_id('TAB_queryDateItem_270_2').clear()
    driver.find_element_by_id('TAB_queryDateItem_270_2').send_keys(end)
    # select the administrative region
    driver.find_element_by_id('TAB_QueryConditionItem256').click()
    driver.execute_script("document.getElementById('TAB_queryTblEnumItem_256_v').setAttribute('type', 'text');")
    driver.find_element_by_id('TAB_queryTblEnumItem_256_v').clear()
    driver.find_element_by_id('TAB_queryTblEnumItem_256_v').send_keys('3205')  # e.g. 3701 is Jinan; 37 is Shandong
    driver.find_element_by_id('TAB_QueryButtonControl').click()  # submit the query
    page_zh(i, l)


if __name__ == '__main__':
    llq_main('2005-01-01', '2006-7-30')

main02

The idea behind this program is simple:
0. Selenium opens a page of the target site through the proxy IP and builds a usable cookie:

def getCookie():
    options = webdriver.ChromeOptions()
    options.add_argument('--proxy-server=http://' + proxy)
    options.set_headless()
    driver = webdriver.Chrome(chrome_options=options)  # start the headless Chrome browser
    driver.get('http://www.landchina.com/default.aspx?tabid=263&ComName=default')  # open the query page
    time.sleep(5)
    cookie = driver.get_cookies()
    str1 = list(cookie)
    cookieStr = ''
    for i in range(0, 6):
        cookieStr = cookieStr + str1[i]['name'] + '=' + str1[i]['value'] + ';'
    print(cookieStr)
    driver.quit()
    return cookieStr

1. Take a url from redis:
r.spop() pops a random member and removes it from the set atomically, so several workers can share the same queue without grabbing the same url; the popped url is then handed to parse() for page parsing.

def checkRedis(sleepCounter, headers):  # read urls from redis
    while 1:
        if r.scard('mylist') != 0:
            url = r.spop('mylist')
            # print(url)
            time.sleep(2)
            parse(url, headers)
        elif sleepCounter < 100:
            print('waiting...' + str(sleepCounter))
            sleepCounter += 1
            time.sleep(1)
        else:
            print('quit')
            break

2. Using the proxy IP and the cookie built at the start, construct a suitable set of headers, send the request, and parse the page:

def createHeaders(cookie):
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Host': 'www.landchina.com',
        'Origin': 'http://www.landchina.com',
        'Upgrade-Insecure-Requests': '1',
        'Cookie': cookie,
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'
    }
    return headers

I have not included the actual page-parsing code here; if you are interested, write it yourself.

def parse(url, headers):
    page = requests.get(url, headers=headers, proxies=proxies)
    doc = pq(page.text)
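If you want a starting point, here is a rough sketch of one way the parsing could be completed with pyquery; the selector below is a generic placeholder rather than the page's real structure, so it would have to be replaced after inspecting the detail page in a browser:

def parse(url, headers):
    page = requests.get(url, headers=headers, proxies=proxies)
    doc = pq(page.text)
    # placeholder: walk every table row and collect label/value cell pairs;
    # the real selectors must be worked out from the actual detail page
    fields = {'url': url}
    for row in doc('tr').items():
        cells = row('td')
        if len(cells) >= 2:
            label = cells.eq(0).text().strip()
            value = cells.eq(1).text().strip()
            if label:
                fields[label] = value
    return fields  # hand the dict to whatever storage step follows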

Finally, let me introduce a package I find very useful: retry. As the name suggests, it retries a function when it raises an error, applied as a decorator.
Here I put the whole crawl into a single function and decorated it with retry (on error, wait 2 seconds and retry, at most 5 attempts):

@retry(tries=5, delay=2)
def doTheJob():
    cookie = getCookie()
    headers = createHeaders(cookie)
    checkRedis(0, headers=headers)

doTheJob()
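Because the whole flow lives inside doTheJob(), every retry re-runs it from the top, so a fresh cookie is fetched before the worker goes back to the redis queue; only after the attempts are exhausted does the exception propagate and that worker stop, while any urls still sitting in the redis set remain available to the other workers.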

The complete code is as follows:

# coding=utf-8
import requests
from pyquery import PyQuery as pq
from redis import StrictRedis
from selenium import webdriver
import time
from pymongo import MongoClient
from retry import retry

r = StrictRedis(host='127.0.0.1', port=6379, db=0)
client = MongoClient()
db = client['landchina_qd']
collection = db['landchina_qd']

proxy = '127.0.0.1:1080'
proxies = {
    'http': 'socks5://' + proxy,
    'https': 'socks5://' + proxy
}


def saveToMongo(data):
    if collection.insert_many(data):
        print('Saved to mongo.')


def checkRedis(sleepCounter, headers):  # read urls from redis and parse the pages
                                        # if redis is empty for the moment, wait; quit once the wait exceeds about 100 seconds
    while 1:
        if r.scard('mylist') != 0:
            url = r.spop('mylist')
            # print(url)
            time.sleep(2)
            parse(url, headers)
        elif sleepCounter < 100:
            print('waiting...' + str(sleepCounter))
            sleepCounter += 1
            time.sleep(1)
        else:
            print('quit')
            break


def getCookie():
    options = webdriver.ChromeOptions()
    options.add_argument('--proxy-server=http://' + proxy)
    options.set_headless()
    driver = webdriver.Chrome(chrome_options=options)  # start the headless Chrome browser
    driver.get('http://www.landchina.com/default.aspx?tabid=263&ComName=default')  # open the query page
    time.sleep(5)
    cookie = driver.get_cookies()
    str1 = list(cookie)
    strnew = ''
    for i in range(0, 6):
        strnew = strnew + str1[i]['name'] + '=' + str1[i]['value'] + ';'

    print(strnew)
    driver.quit()
    return strnew


def createHeaders(cookie):
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Host': 'www.landchina.com',
        'Origin': 'http://www.landchina.com',
        'Upgrade-Insecure-Requests': '1',
        'Cookie': cookie,
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'
    }
    return headers


def parse(url, headers):
    page = requests.get(url, headers=headers, proxies=proxies)
    doc = pq(page.text)
    # the page-parsing part is left out; write it yourself if interested
    data = [{
        'district': district,
        'name': name,
        'location': location,
        'size': size,
        'usage': usage,
        'price': price,
        'time': time,
        'url': url
    }]
    saveToMongo(data)


@retry(tries=5, delay=2)
def doTheJob():
    cookie = getCookie()
    headers = createHeaders(cookie)
    checkRedis(0, headers=headers)


doTheJob()

In the end the crawler runs very stably and is blazingly fast.

I ran 20 copies of it at the same time, which works out to roughly 30 records crawled per minute.
main01: the program that fetches the transaction urls and stores them in redis.

main02: processes main02, 4 and 5 are all running normally; they are all copies of the same program.


Distributed

Calling it a distributed crawler is a bit of a stretch, since there is only one computer after all...
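If starting 20 terminals by hand gets tedious, here is a sketch of launching several workers from one script. It assumes the worker module is named main02.py and that the doTheJob() call at its bottom is moved under an if __name__ == '__main__' guard, so that importing it does not immediately start a crawl:

from multiprocessing import Process

from main02 import doTheJob  # hypothetical module name

if __name__ == '__main__':
    workers = [Process(target=doTheJob) for _ in range(5)]  # pick your own worker count
    for w in workers:
        w.start()
    for w in workers:
        w.join()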
