Vert.x Notes: 6. Clustered Deployment

Vert.x supports clustered deployment. Out of the box it wraps a framework called Hazelcast as the cluster manager; judging from the development roadmap on the official GitHub, 3.1 may introduce the more widely used ZooKeeper as an alternative cluster coordination framework.

The demo project again uses the Dubbo service demo code from Chapter 5. Note that the HazelcastClusterManager used below comes from the io.vertx:vertx-hazelcast dependency, which has to be on the classpath.

Modify the launcher class:

package com.heartlifes.vertx.demo.dubbo;

import io.vertx.core.AsyncResult;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.spi.cluster.ClusterManager;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.hazelcast.config.Config;
import com.hazelcast.config.GroupConfig;

public class SpringMain {

    private static Vertx vertx = Vertx.vertx();// fallback: if cluster initialization fails, this default (non-clustered) Vertx instance is used
    private static ApplicationContext ctx = null;

    public static void main(String[] args) {
        // build the Spring context from the XML configuration file
        ctx = new ClassPathXmlApplicationContext("dubbo-consumer.xml");
        // Hazelcast configuration object
        Config cfg = new Config();
        // group configuration: prevents this node, on a multicast network, from joining clusters running on other developers' machines
        GroupConfig group = new GroupConfig();
        group.setName("p-dev");
        group.setPassword("p-dev");
        cfg.setGroupConfig(group);
        // declare the cluster manager
        ClusterManager mgr = new HazelcastClusterManager(cfg);
        VertxOptions options = new VertxOptions().setClusterManager(mgr);
        // create the clustered Vertx instance
        Vertx.clusteredVertx(options, SpringMain::resultHandler);

    }

    private static void resultHandler(AsyncResult<Vertx> res) {
        // on success, switch to the clustered Vertx instance
        if (res.succeeded()) {
            vertx = res.result();
            // note: deploy the verticles only inside this async callback, after the clustered Vertx instance has been obtained
            // everything inside Vert.x is invoked asynchronously, so deploying before the callback fires will ultimately cause clustering to fail
            deploy(vertx);
        } else {
            System.out.println("cluster failed, using default vertx");
            deploy(vertx);
        }
    }

    private static void deploy(Vertx vertx) {
        vertx.deployVerticle(new SpringVerticle(ctx));
        vertx.deployVerticle(new ServerVerticle());
    }

}
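
The group name and password above only keep foreign nodes out of the cluster; node discovery itself still goes through multicast (note the "Creating MulticastJoiner" line in the log below). If multicast is not available on your network, Hazelcast can be switched to a fixed TCP/IP member list instead. The following is only a sketch of that variant; the member addresses are placeholders and not part of the original demo.

        // sketch: replace multicast discovery with an explicit TCP/IP member list (Hazelcast 3.x API)
        // requires an extra import: com.hazelcast.config.JoinConfig
        Config cfg = new Config();
        GroupConfig group = new GroupConfig();
        group.setName("p-dev");
        group.setPassword("p-dev");
        cfg.setGroupConfig(group);

        JoinConfig join = cfg.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);   // turn off multicast discovery
        join.getTcpIpConfig().setEnabled(true)         // join via a fixed member list instead
                .addMember("192.168.1.119")            // placeholder addresses: list your own nodes here
                .addMember("192.168.1.120");

        ClusterManager mgr = new HazelcastClusterManager(cfg);
        Vertx.clusteredVertx(new VertxOptions().setClusterManager(mgr), SpringMain::resultHandler);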

Start several instances of the main program and you will see log output similar to the following:

Aug 04, 2015 2:05:09 PM com.hazelcast.instance.DefaultAddressPicker
INFO: [LOCAL] [p-dev] [3.5] Prefer IPv4 stack is true.
Aug 04, 2015 2:05:09 PM com.hazelcast.instance.DefaultAddressPicker
INFO: [LOCAL] [p-dev] [3.5] Picked Address[192.168.1.119]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
Aug 04, 2015 2:05:09 PM com.hazelcast.spi.OperationService
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Backpressure is disabled
Aug 04, 2015 2:05:09 PM com.hazelcast.spi.impl.operationexecutor.classic.ClassicOperationExecutor
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Starting with 2 generic operation threads and 4 partition operation threads.
Aug 04, 2015 2:05:10 PM com.hazelcast.system
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Hazelcast 3.5 (20150617 - 4270dc6) starting at Address[192.168.1.119]:5701
Aug 04, 2015 2:05:10 PM com.hazelcast.system
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Copyright (c) 2008-2015, Hazelcast, Inc. All Rights Reserved.
Aug 04, 2015 2:05:10 PM com.hazelcast.instance.Node
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Creating MulticastJoiner
Aug 04, 2015 2:05:10 PM com.hazelcast.core.LifecycleService
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Address[192.168.1.119]:5701 is STARTING
Aug 04, 2015 2:05:14 PM com.hazelcast.cluster.impl.MulticastJoiner
INFO: [192.168.1.119]:5701 [p-dev] [3.5] 


Members [1] {
    Member [192.168.1.119]:5701 this
}

Aug 04, 2015 2:05:14 PM com.hazelcast.core.LifecycleService
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Address[192.168.1.119]:5701 is STARTED
Aug 04, 2015 2:05:15 PM com.hazelcast.partition.InternalPartitionService
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Initializing cluster partition table first arrangement...
Aug 04, 2015 2:05:22 PM com.hazelcast.nio.tcp.SocketAcceptor
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Accepting socket connection from /192.168.1.119:59906
Aug 04, 2015 2:05:22 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Established socket connection between /192.168.1.119:5701
Aug 04, 2015 2:05:28 PM com.hazelcast.cluster.ClusterService
INFO: [192.168.1.119]:5701 [p-dev] [3.5] 

Members [2] {
    Member [192.168.1.119]:5701 this
    Member [192.168.1.119]:5702
}

Aug 04, 2015 2:05:29 PM com.hazelcast.partition.InternalPartitionService
INFO: [192.168.1.119]:5701 [p-dev] [3.5] Re-partitioning cluster data... Migration queue size: 135
Aug 04, 2015 2:05:30 PM com.hazelcast.partition.InternalPartitionService
INFO: [192.168.1.119]:5701 [p-dev] [3.5] All migration tasks have been completed, queues are empty.
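
The "Members [2]" block above shows the second JVM joining the group. A quick way to check that the clustered instances really share one event bus is to deploy a small test verticle in every JVM. The verticle below is only an illustration (the class name and the address "ping.address" are made up, not part of the Chapter 5 demo); once the cluster forms, each node should print the messages published by the others.

package com.heartlifes.vertx.demo.dubbo;

import io.vertx.core.AbstractVerticle;

// illustrative verticle: verifies that the clustered event bus spans all JVMs in the group
public class PingVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // receive messages published from any node of the cluster
        vertx.eventBus().consumer("ping.address",
                msg -> System.out.println("received: " + msg.body()));

        // publish a message every second; with a clustered Vertx it also reaches consumers in other JVMs
        vertx.setPeriodic(1000,
                id -> vertx.eventBus().publish("ping.address", "hello from " + hashCode()));
    }
}

Deploy it alongside the other verticles (vertx.deployVerticle(new PingVerticle()) inside deploy()); as soon as the member list reaches two, each process starts printing the greetings sent by the other.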
