I. Notes:
1) Environment: four virtual machines (Ubuntu 14.04)
2) The four VMs should be installed separately rather than cloned from one another
3) The IPs of the four VMs are 192.168.110.132, 192.168.110.136, 192.168.110.137, and 192.168.110.138
II. Procedure
1. Build four virtual machines with VMware
2. Install Docker on each of the four VMs
Note: for this step, follow the official documentation: https://docs.docker.com/engine/installation/linux/ubuntulinux/
3. Apply the following configuration on each of the four hosts:
Edit the /etc/default/docker file and append the appropriate line below to the end of the file:
1) On the 132 machine, add:
DOCKER_OPTS="--label com.example.storage=managerpri --cluster-store=consul://192.168.110.132:8500 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
2) On the 136 machine, add:
DOCKER_OPTS="--label com.example.storage=managerbak --cluster-store=consul://192.168.110.132:8500 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
3) On the 137 machine, add:
DOCKER_OPTS="--label com.example.storage=ngnix-php --cluster-store=consul://192.168.110.132:8500 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
4) On the 138 machine, add:
DOCKER_OPTS="--label com.example.storage=mysql --cluster-store=consul://192.168.110.132:8500 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
Note:
--label com.example.storage=## tags each Docker daemon, so that when Compose schedules a service you can pin it to a node with a "constraint:com.example.storage==##" entry. After saving the file, restart Docker (sudo service docker restart) so the new options take effect.
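As a minimal sketch of that mechanism (the service name `web` and image are hypothetical, for illustration only; the label value is the one given to the 137 daemon above), a Compose service is pinned to a labeled daemon like this:

```yaml
# Minimal sketch: schedule this service only on the daemon labeled ngnix-php
services:
  web:                # hypothetical service, for illustration only
    image: nginx
    environment:
      - "constraint:com.example.storage==ngnix-php"
```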
4. Deploy the following to 192.168.110.132, 192.168.110.136, 192.168.110.137, and 192.168.110.138 respectively: manager (primary) plus consul, manager (backup), swarm node, swarm node
1) On the 132 VM, create the consul container:
docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
2) On the 132 VM, create the primary swarm manager container:
docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise 192.168.110.132:4000 consul://192.168.110.132:8500
3) On the 136 VM, create the backup swarm manager container:
docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise 192.168.110.136:4000 consul://192.168.110.132:8500
4) On the 137 machine, create a swarm node container:
docker run -d swarm join --advertise=192.168.110.137:2375 consul://192.168.110.132:8500
5) On the 138 machine, create a swarm node container:
docker run -d swarm join --advertise=192.168.110.138:2375 consul://192.168.110.132:8500
5. Verify that the cluster was set up successfully
Run the following command on the 132 machine: $ docker -H :4000 info
If the setup succeeded, you will see output like the following:
Containers: 23
Running: 3
Paused: 0
Stopped: 20
Images: 10
Server Version: swarm/1.2.4
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
dockertest3: 192.168.110.137:2375
└ ID: TCBC:FWLT:FVQX:3AGB:ZPIY:NASL:JRLW:4VE7:UBQI:V2M6:2QT7:QHVF
└ Status: Healthy
└ Containers: 16 (1 Running, 0 Paused, 15 Stopped)
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.034 GiB
└ Labels: com.example.storage=ngnix-php, kernelversion=4.2.0-27-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
└ UpdatedAt: 2016-08-08T11:10:17Z
└ ServerVersion: 1.12.0
dockertest4: 192.168.110.138:2375
└ ID: 5F4D:TSPS:45B5:SVM5:MP2G:OG7M:K4Y4:URS7:NSE2:3HUI:C7P5:D5OF
└ Status: Healthy
└ Containers: 7 (2 Running, 0 Paused, 5 Stopped)
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.034 GiB
└ Labels: com.example.storage=mysql, kernelversion=4.2.0-27-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
└ UpdatedAt: 2016-08-08T11:10:38Z
└ ServerVersion: 1.12.0
Plugins:
Volume:
Network:
Swarm:
NodeID:
Is Manager: false
Node Address:
Security Options:
Kernel Version: 4.2.0-27-generic
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 4.068 GiB
Name: c03ecbd3f190
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
WARNING: No kernel memory limit support
[Note] This step is important: without it, all commands such as docker run and docker info operate on a single node only, not on the swarm cluster. Make the setting permanent by adding it to /etc/profile:
vim /etc/profile
export DOCKER_HOST=:4000
Then run source /etc/profile (or open a new shell) for it to take effect.
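As a quick sanity check of the export itself (a sketch that runs without Docker; `:4000` is the shorthand used above, which the docker client resolves against the local host):

```shell
# Apply the swarm-manager endpoint to the current shell, as /etc/profile does.
export DOCKER_HOST=:4000
echo "docker client will target '$DOCKER_HOST'"
# Plain `docker info` / `docker ps` now go to the manager on port 4000;
# `unset DOCKER_HOST` points the client back at the local daemon socket.
```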
6. Experiment
1) On the 132 VM, create the directory /home/xuguokun/elkcompose
2) Inside /home/xuguokun/elkcompose, create a logstash folder and an elasticsearch folder
3) In /home/xuguokun/elkcompose/elasticsearch, create two files, Dockerfile.elasticsearch and elasticsearch.yml, with the contents below
Dockerfile.elasticsearch contains:
FROM elasticsearch:latest
elasticsearch.yml contains:
node.name: elasticsearch
4) In /home/xuguokun/elkcompose/logstash, create the files central.conf and Dockerfile.logstash with the following contents:
Dockerfile.logstash contains:
FROM logstash:latest
COPY ./central.conf /conf/
central.conf contains:
input {
  file {
    path => "/var/data/test.txt"
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
5) In /home/xuguokun/elkcompose/, create a docker-compose.yml file with the following content (note that the constraint keys must match the com.example.storage labels configured on the daemons in step 3):
version: '2'
services:
  elasticsearch:
    image: xuguokun/elasticsearch
    build:
      context: ./elasticsearch
      dockerfile: Dockerfile.elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    command: elasticsearch
    networks:
      - frontend
    depends_on:
      - logstash
    environment:
      - "constraint:com.example.storage==ngnix-php"
      - ES_CLUSTERNAME=elasticsearch
  logstash:
    image: xuguokun/logstash
    build:
      context: ./logstash
      dockerfile: Dockerfile.logstash
    ports:
      - "25826:25826"
      - "25826:25826/udp"
    volumes:
      #- logstash-data:/var/data
      - /var/data:/var/data
    command: logstash -f /conf/central.conf
    networks:
      - frontend
    environment:
      - "constraint:com.example.storage==mysql"
#volumes:
#  logstash-data: {}
networks:
  frontend:
Note: on the 132 machine you also need to create the /var/data directory and, inside it, a test.txt file whose content is 1,2,3,4
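The note above can be exercised as a snippet; it uses a local stand-in directory so it runs without root, whereas the real deployment writes to /var/data on host 132 (the path the compose file bind-mounts and central.conf tails):

```shell
# Prepare the sample input that logstash's file input will tail.
data_dir=./data                 # stand-in for /var/data on the 132 host
mkdir -p "$data_dir"
printf '1,2,3,4\n' > "$data_dir/test.txt"
cat "$data_dir/test.txt"
```

Once `docker-compose up` is running, lines appended to /var/data/test.txt should appear on logstash's stdout and be indexed in elasticsearch (reachable via the published port 9200).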
6) In /home/xuguokun/elkcompose/, run the docker-compose up command
7) Check the experiment results