[Consul] Consul Environment Deployment


Consul is service software for service discovery and configuration sharing that is distributed, highly available, and supports multiple data centers. It is developed by HashiCorp in Go and is open source under the Mozilla Public License 2.0. Consul supports health checks and exposes its API over HTTP and DNS, including a key/value store.
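As a quick illustration of the HTTP key/value API mentioned above, the following two commands store and read back a value. This example is an addition, not part of the original deployment steps; it assumes an agent is already running on the default client address 127.0.0.1:8500, and the key web/config is arbitrary:

# write a value under a key (the API returns true on success)
curl -X PUT -d 'hello' http://127.0.0.1:8500/v1/kv/web/config

# read it back; ?raw strips the JSON/base64 envelope
curl http://127.0.0.1:8500/v1/kv/web/config?raw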

 

1   How to Obtain Consul

At the time of writing, the latest release is v0.6.4.

Source code:

https://github.com/hashicorp/consul

Pre-built binaries:

https://www.consul.io/downloads.html

Mailing list:

https://groups.google.com/group/consul-tool/

Official site:

https://www.consul.io/

Official demo:

http://demo.consul.io/ui/

 

2   Setting Up the Consul Environment

2.1 Network Plan

Hostname    IP                Role    Data Center
node0       192.168.192.120   Server  DataCenter1
node1       192.168.192.121   Server  DataCenter1
node2       192.168.192.122   Server  DataCenter1
node3       192.168.192.123   Client  DataCenter1

 

2.2 Software Environment

CentOS: 7.2.1511

Consul: v0.6.4

 

2.3 Building the Cluster

1. Install the consul executable

   Download the Linux x64 build from the download page listed above (a sample download command is shown after the install steps), then copy the binary into place:

[ceph@node0 consul]$ sudo cp consul /usr/bin/

[ceph@node0 consul]$ sudo chmod 755 /usr/bin/consul
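For reference, obtaining the binary itself (the download step summarized above) usually looks like the following; the release URL follows the standard HashiCorp pattern for v0.6.4 and is an assumption rather than something taken from the original:

# download and unzip the Linux x64 release; the archive contains a single binary named consul
wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip
unzip consul_0.6.4_linux_amd64.zip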

Verify that the installation succeeded:

[ceph@node0 consul]$ consul version
Consul v0.6.4
Consul Protocol: 3 (Understands back to: 1)
[ceph@node0 consul]$

 

View the built-in help:

[ceph@node0 consul]$ consul --help
usage: consul [--version] [--help] <command> [<args>]

Available commands are:
    agent          Runs a Consul agent
    configtest     Validate config file
    event          Fire a new event
    exec           Executes a command on Consul nodes
    force-leave    Forces a member of the cluster to enter the "left" state
    info           Provides debugging information for operators
    join           Tell Consul agent to join cluster
    keygen         Generates a new encryption key
    keyring        Manages gossip layer encryption keys
    leave          Gracefully leaves the Consul cluster and shuts down
    lock           Execute a command holding a lock
    maint          Controls node or service maintenance mode
    members        Lists the members of a Consul cluster
    monitor        Stream logs from a Consul agent
    reload         Triggers the agent to reload configuration files
    rtt            Estimates network round trip time between nodes
    version        Prints the Consul version
    watch          Watch for changes in Consul

 

Consul can be set up in one of two ways: bootstrap mode or non-bootstrap mode. The following walkthrough uses bootstrap mode.

 

2.3.1  Bootstrap Mode

 

1. Start the first agent

On the first node, start the agent in server mode, specifying the expected number of server nodes, the node name (which must be unique within the datacenter), and the address to bind to.

The command is as follows:

[ceph@node0 consul]$ consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=node0 -bind=192.168.192.120 -dc=dc1
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'node0'
        Datacenter: 'dc1'
            Server: true (bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 192.168.192.120 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>

==> Log data will now stream in as it occurs:

    2016/07/05 20:52:06 [INFO] serf: EventMemberJoin: node0 192.168.192.120
    2016/07/05 20:52:06 [INFO] serf: EventMemberJoin: node0.dc1 192.168.192.120
    2016/07/05 20:52:06 [INFO] raft: Node at 192.168.192.120:8300 [Follower] entering Follower state
    2016/07/05 20:52:06 [INFO] consul: adding LAN server node0 (Addr: 192.168.192.120:8300) (DC: dc1)
    2016/07/05 20:52:06 [INFO] consul: adding WAN server node0.dc1 (Addr: 192.168.192.120:8300) (DC: dc1)
    2016/07/05 20:52:06 [ERR] agent: failed to sync remote state: No cluster leader
    2016/07/05 20:52:08 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.

The election is aborted because the datacenter does not yet have a leader: with -bootstrap-expect=3, node0 will not start an election until it knows about three servers.

Deploy server agents on the other two machines in the same way.

Node node1:

[ceph@node1 consul]$ consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=node1 -bind=192.168.192.121 -dc=dc1

Node node2:

[ceph@node2 consul]$ consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=node2 -bind=192.168.192.122 -dc=dc1
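In this walkthrough each agent stays attached to its terminal. A minimal sketch for keeping an agent running in the background, assuming nohup and a log file path that are not part of the original procedure:

# run the node1 server agent in the background and log to a file
nohup consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul \
      -node=node1 -bind=192.168.192.121 -dc=dc1 > /tmp/consul-agent.log 2>&1 &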

 

At this point, none of the three nodes knows that the other servers exist. Taking node0 as an example:

[ceph@node0 consul]$ consul members
Node   Address               Status  Type    Build  Protocol  DC
node0  192.168.192.120:8301  alive   server  0.6.4  2         dc1
[ceph@node0 consul]$

View the Consul cluster information:

[ceph@node0 consul]$ consul info
agent:
    check_monitors = 0
    check_ttls = 0
    checks = 0
    services = 1
build:
    prerelease =
    revision = 26a0ef8c
    version = 0.6.4
consul:
    bootstrap = false
    known_datacenters = 1
    leader = false
    server = true
...

The current node is a follower.

2. Trigger a leader election

   A Consul cluster typically needs 3 to 5 servers, so on node0 join node1 and node2 to the cluster.

[ceph@node0 consul]$ consul join 192.168.192.121
Successfully joined cluster by contacting 1 nodes.
[ceph@node0 consul]$ consul join 192.168.192.122
Successfully joined cluster by contacting 1 nodes.
[ceph@node0 consul]$
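Joining by hand works, but the same result can be achieved by pointing the other agents at node0 when they start. A sketch using the -join flag, shown only as an alternative to the manual consul join above:

# start node1 so that it contacts node0 automatically on startup
consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul \
      -node=node1 -bind=192.168.192.121 -dc=dc1 -join=192.168.192.120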

Observe the Consul logs on the three nodes:

Node0:

    2016/07/05 21:10:55 [INFO] agent: (LAN) joining: [192.168.192.122]
    2016/07/05 21:10:55 [INFO] serf: EventMemberJoin: node2 192.168.192.122
    2016/07/05 21:10:55 [INFO] agent: (LAN) joined: 1 Err: <nil>
    2016/07/05 21:10:55 [INFO] consul: adding LAN server node2 (Addr: 192.168.192.122:8300) (DC: dc1)
    2016/07/05 21:10:55 [INFO] consul: Attempting bootstrap with nodes: [192.168.192.120:8300 192.168.192.121:8300 192.168.192.122:8300]
    2016/07/05 21:10:55 [INFO] consul: New leader elected: node2
    2016/07/05 21:10:56 [INFO] agent: Synced service 'consul'

Node1:

    2016/07/05 21:10:55 [INFO] serf: EventMemberJoin: node2 192.168.192.122
    2016/07/05 21:10:55 [INFO] consul: adding LAN server node2 (Addr: 192.168.192.122:8300) (DC: dc1)
    2016/07/05 21:10:55 [INFO] consul: Attempting bootstrap with nodes: [192.168.192.121:8300 192.168.192.120:8300 192.168.192.122:8300]
    2016/07/05 21:10:56 [INFO] consul: New leader elected: node2
    2016/07/05 21:10:57 [INFO] agent: Synced service 'consul'

Node2:

    2016/07/05 21:10:55 [INFO] serf: EventMemberJoin: node0 192.168.192.120
    2016/07/05 21:10:55 [INFO] serf: EventMemberJoin: node1 192.168.192.121
    2016/07/05 21:10:55 [INFO] consul: adding LAN server node0 (Addr: 192.168.192.120:8300) (DC: dc1)
    2016/07/05 21:10:55 [INFO] consul: Attempting bootstrap with nodes: [192.168.192.122:8300 192.168.192.120:8300 192.168.192.121:8300]
    2016/07/05 21:10:55 [INFO] consul: adding LAN server node1 (Addr: 192.168.192.121:8300) (DC: dc1)
    2016/07/05 21:10:55 [WARN] raft: Heartbeat timeout reached, starting election
    2016/07/05 21:10:55 [INFO] raft: Node at 192.168.192.122:8300 [Candidate] entering Candidate state
    2016/07/05 21:10:55 [INFO] raft: Election won. Tally: 2
    2016/07/05 21:10:55 [INFO] raft: Node at 192.168.192.122:8300 [Leader] entering Leader state
    2016/07/05 21:10:55 [INFO] consul: cluster leadership acquired
    2016/07/05 21:10:55 [INFO] consul: New leader elected: node2
    2016/07/05 21:10:55 [INFO] raft: pipelining replication to peer 192.168.192.121:8300
    2016/07/05 21:10:55 [INFO] raft: pipelining replication to peer 192.168.192.120:8300
    2016/07/05 21:10:55 [INFO] consul: member 'node2' joined, marking health alive
    2016/07/05 21:10:55 [INFO] consul: member 'node0' joined, marking health alive
    2016/07/05 21:10:55 [INFO] consul: member 'node1' joined, marking health alive
    2016/07/05 21:10:58 [INFO] agent: Synced service 'consul'

As the logs show, node2 has been elected leader.

Check the members on node0:

[ceph@node0 consul]$ consul members
Node   Address               Status  Type    Build  Protocol  DC
node0  192.168.192.120:8301  alive   server  0.6.4  2         dc1
node1  192.168.192.121:8301  alive   server  0.6.4  2         dc1
node2  192.168.192.122:8301  alive   server  0.6.4  2         dc1
[ceph@node0 consul]$

Check the info output:

[ceph@node0 consul]$ consul info
agent:
    check_monitors = 0
    check_ttls = 0
    checks = 0
    services = 1
build:
    prerelease =
    revision = 26a0ef8c
    version = 0.6.4
consul:
    bootstrap = false
    known_datacenters = 1
    leader = false
    server = true
...

View the Consul information on node2:

[ceph@node2 consul]$ consul info
agent:
    check_monitors = 0
    check_ttls = 0
    checks = 0
    services = 1
build:
    prerelease =
    revision = 26a0ef8c
    version = 0.6.4
consul:
    bootstrap = false
    known_datacenters = 1
    leader = true
    server = true
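The Raft status shown by consul info can also be queried over the HTTP API from any agent (assuming the default client address 127.0.0.1:8500; these commands are added here for illustration):

# address of the current leader, e.g. "192.168.192.122:8300"
curl http://127.0.0.1:8500/v1/status/leader

# addresses of all Raft peers (the three servers)
curl http://127.0.0.1:8500/v1/status/peers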

 

3. Start the agent on node3 in client mode

[ceph@node3 consul]$ consul agent -data-dir=/tmp/consul -node=node3 -bind=192.168.192.123 -dc=dc1
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'node3'
        Datacenter: 'dc1'
            Server: false (bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 192.168.192.123 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>

 

==> Log data will now stream in as it occurs:

 

    2016/07/05 21:21:02 [INFO] serf: EventMemberJoin: node3 192.168.192.123
    2016/07/05 21:21:02 [ERR] agent: failed to sync remote state: No known Consul servers

 

Add node3 from node0:

[ceph@node0 consul]$ consul join 192.168.192.123
Successfully joined cluster by contacting 1 nodes.
[ceph@node0 consul]$ consul members
Node   Address               Status  Type    Build  Protocol  DC
node0  192.168.192.120:8301  alive   server  0.6.4  2         dc1
node1  192.168.192.121:8301  alive   server  0.6.4  2         dc1
node2  192.168.192.122:8301  alive   server  0.6.4  2         dc1
node3  192.168.192.123:8301  alive   client  0.6.4  2         dc1
[ceph@node0 consul]$
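The same membership can also be read from the catalog over the HTTP API on any agent, including the node3 client (default address 127.0.0.1:8500 assumed):

# list every node registered in the catalog
curl http://127.0.0.1:8500/v1/catalog/nodes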

The log on node3 is as follows:

    2016/07/05 21:21:57 [INFO] serf: EventMemberJoin: node0 192.168.192.120
    2016/07/05 21:21:57 [INFO] serf: EventMemberJoin: node2 192.168.192.122
    2016/07/05 21:21:57 [INFO] serf: EventMemberJoin: node1 192.168.192.121
    2016/07/05 21:21:57 [INFO] consul: adding server node0 (Addr: 192.168.192.120:8300) (DC: dc1)
    2016/07/05 21:21:57 [INFO] consul: adding server node2 (Addr: 192.168.192.122:8300) (DC: dc1)
    2016/07/05 21:21:57 [INFO] consul: adding server node1 (Addr: 192.168.192.121:8300) (DC: dc1)
    2016/07/05 21:21:57 [INFO] consul: New leader elected: node2
    2016/07/05 21:21:57 [INFO] agent: Synced node info
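With node3 joined, the DNS interface mentioned in the introduction can be exercised from the client as well. A small check, assuming dig is installed (the consul service is registered automatically by the server agents):

# query the agent's built-in DNS server on port 8600 for the consul service
dig @127.0.0.1 -p 8600 consul.service.consul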

 

4. Shut down node3 and then node2:

The logs on node0 and node1 are as follows:

Node0:

    2016/07/05 21:24:00 [INFO] serf: EventMemberLeave: node2 192.168.192.122
    2016/07/05 21:24:00 [INFO] consul: removing LAN server node2 (Addr: 192.168.192.122:8300) (DC: dc1)
    2016/07/05 21:24:00 [WARN] raft: Heartbeat timeout reached, starting election
    2016/07/05 21:24:00 [INFO] raft: Node at 192.168.192.120:8300 [Candidate] entering Candidate state
    2016/07/05 21:24:01 [INFO] raft: Duplicate RequestVote for same term: 2
    2016/07/05 21:24:02 [WARN] raft: Election timeout reached, restarting election
    2016/07/05 21:24:02 [INFO] raft: Node at 192.168.192.120:8300 [Candidate] entering Candidate state
    2016/07/05 21:24:02 [INFO] raft: Election won. Tally: 2
    2016/07/05 21:24:02 [INFO] raft: Node at 192.168.192.120:8300 [Leader] entering Leader state
    2016/07/05 21:24:02 [INFO] consul: cluster leadership acquired
    2016/07/05 21:24:02 [INFO] consul: New leader elected: node0
    2016/07/05 21:24:02 [INFO] raft: pipelining replication to peer 192.168.192.121:8300
    2016/07/05 21:24:02 [INFO] consul: member 'node2' left, deregistering
    2016/07/05 21:24:03 [INFO] agent.rpc: Accepted client: 127.0.0.1:35701

Node1:

    2016/07/05 21:24:00 [INFO] consul: removing LAN server node2 (Addr: 192.168.192.122:8300) (DC: dc1)
    2016/07/05 21:24:00 [WARN] raft: Rejecting vote request from 192.168.192.120:8300 since we have a leader: 192.168.192.122:8300
    2016/07/05 21:24:01 [WARN] raft: Heartbeat timeout reached, starting election
    2016/07/05 21:24:01 [INFO] raft: Node at 192.168.192.121:8300 [Candidate] entering Candidate state
    2016/07/05 21:24:02 [INFO] raft: Node at 192.168.192.121:8300 [Follower] entering Follower state
    2016/07/05 21:24:02 [INFO] consul: New leader elected: node0
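The EventMemberLeave entries above correspond to graceful departures. The same effect can be triggered explicitly on the node being taken down; this is noted here for completeness and was not part of the original run:

# gracefully leave the cluster and shut the local agent down
consul leave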

 


