1. Vagrantfile
---------------------------
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.require_version ">= 1.6.0"
boxes = [
  {
    :name => "swarm-manager",
    :eth1 => "192.168.205.20",
    :mem => "1024",
    :cpu => "1"
  },
  {
    :name => "swarm-worker1",
    :eth1 => "192.168.205.21",
    :mem => "1024",
    :cpu => "1"
  },
  {
    :name => "swarm-worker2",
    :eth1 => "192.168.205.22",
    :mem => "1024",
    :cpu => "1"
  }
]

Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"

  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.provider "vmware_fusion" do |v|
        v.vmx["memsize"] = opts[:mem]
        v.vmx["numvcpus"] = opts[:cpu]
      end
      config.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
      end
      config.vm.network :private_network, ip: opts[:eth1]
    end
  end

  config.vm.synced_folder "./labs", "/home/vagrant/labs"
  config.vm.provision "shell", privileged: true, path: "./setup.sh"
end
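The provisioner above points at ./setup.sh, which is not reproduced in these notes. A minimal sketch of what such a script could look like on CentOS 7 (assuming Docker CE from the official yum repo; the actual script may differ):

```shell
#!/bin/bash
# Hypothetical setup.sh sketch -- the original script is not shown in these
# notes. Installs Docker CE on CentOS 7 and starts the daemon.
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker
# Let the vagrant user run docker without sudo
sudo usermod -aG docker vagrant
```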
2. Start the VMs
vagrant up
This may fail with the following error:
Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:
mount -t vboxsf -o uid=1000,gid=1000 home_vagrant_labs /home/vagrant/labs
The error output from the command was:
mount: unknown filesystem type 'vboxsf'
Fix: run in PowerShell:
(1) vagrant plugin install vagrant-vbguest
***
PS ...\docker\...> vagrant plugin install vagrant-vbguest
Installing the 'vagrant-vbguest' plugin. This can take a few minutes...
Fetching: micromachine-3.0.0.gem (100%)
Fetching: vagrant-vbguest-0.19.0.gem (100%)
Installed the plugin 'vagrant-vbguest (0.19.0)'!
***
(2) Reload and re-provision: vagrant reload --provision
vagrant reload --provision
==> swarm-manager: VM not created. Moving on...
==> swarm-worker1: VM not created. Moving on...
==> swarm-worker2: VM not created. Moving on...
3. Check the installation with vagrant status
vagrant status
Current machine states:
swarm-manager running (virtualbox)
swarm-worker1 running (virtualbox)
swarm-worker2 running (virtualbox)
4. Connect to the three VMs
vagrant ssh swarm-manager
vagrant ssh swarm-worker1
vagrant ssh swarm-worker2
1. View the help documentation
[vagrant@swarm-manager ~]$ docker swarm init --help
Usage: docker swarm init [OPTIONS]
Initialize a swarm
Options:
      --advertise-addr string                  Advertised address (format: <ip|interface>[:port])
      --autolock                               Enable manager autolocking (requiring an unlock key to start a stopped manager)
      --availability string                    Availability of the node ("active"|"pause"|"drain") (default "active")
      --cert-expiry duration                   Validity period for node certificates (ns|us|ms|s|m|h) (default 2160h0m0s)
      --data-path-addr string                  Address or interface to use for data path traffic (format: <ip|interface>)
      --default-addr-pool ipNetSlice           default address pool in CIDR format (default [])
      --default-addr-pool-mask-length uint32   default address pool subnet mask length (default 24)
      --dispatcher-heartbeat duration          Dispatcher heartbeat period (ns|us|ms|s|m|h) (default 5s)
      --external-ca external-ca                Specifications of one or more certificate signing endpoints
      --force-new-cluster                      Force create a new cluster from current state
      --listen-addr node-addr                  Listen address (format: <ip|interface>[:port]) (default 0.0.0.0:2377)
      --max-snapshots uint                     Number of additional Raft snapshots to retain
      --snapshot-interval uint                 Number of log entries between Raft snapshots (default 10000)
      --task-history-limit int                 Task history retention limit (default 5)
2. Check the manager's local address
[vagrant@swarm-manager ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:26:10:60 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
valid_lft 84817sec preferred_lft 84817sec
inet6 fe80::5054:ff:fe26:1060/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:d0:0a:34 brd ff:ff:ff:ff:ff:ff
inet 192.168.205.20/24 brd 192.168.205.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fed0:a34/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:75:05:97:b1 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
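The eth1 address (192.168.205.20 here) is the one to advertise. When scripting this, the IPv4 address can be extracted with standard tools. On the VM itself the command would be `ip -4 addr show eth1 | awk '/inet /{print $2}' | cut -d/ -f1`; the sketch below demonstrates the parsing on a sample line copied from the output above:

```shell
# Sample `inet` line for eth1, copied from the `ip a` output above.
line="inet 192.168.205.20/24 brd 192.168.205.255 scope global noprefixroute eth1"
# Field 2 is the CIDR address; strip the /24 prefix length.
addr=$(echo "$line" | awk '{print $2}' | cut -d/ -f1)
echo "$addr"    # prints 192.168.205.20
```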
3. Advertise the manager's address; run this on the manager
[vagrant@swarm-manager ~]$ docker swarm init --advertise-addr=192.168.205.20
Swarm initialized: current node (bq44d6sisyp1qphp378h7cgun) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-2yg95j3wvmq7z6201zc2tolvuy845c070cfjryndy2zuz3evrl-1f7du2ta0b9pvu49qtc3rf5d3 192.168.205.20:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
So, to add a worker node, run:
docker swarm join --token SWMTKN-1-2yg95j3wvmq7z6201zc2tolvuy845c070cfjryndy2zuz3evrl-1f7du2ta0b9pvu49qtc3rf5d3 192.168.205.20:2377
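If this join command is ever lost, it can be reprinted at any time on the manager with the standard join-token subcommands (the --rotate flag is optional and only needed to invalidate a leaked token):

```shell
# Run on the manager:
docker swarm join-token worker            # reprint the worker join command
docker swarm join-token manager           # reprint the manager join command
docker swarm join-token --rotate worker   # invalidate the old worker token, issue a new one
```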
4. Add the worker nodes (using the command copied above)
(1) SSH into worker1
vagrant ssh swarm-worker1
(2) Start Docker (skip this if it is already running). Check Docker's status and start it if necessary:
systemctl status docker
systemctl start docker
docker version
(3) Run the command copied above
[vagrant@swarm-worker1 ~]$ docker swarm join --token SWMTKN-1-2yg95j3wvmq7z6201zc2tolvuy845c070cfjryndy2zuz3evrl-1f7du2ta0b9pvu49qtc3rf5d3 192.168.205.20:2377
This node joined a swarm as a worker.
5. SSH into worker2 and join the cluster with the same command
6. Check the cluster state (only possible on the manager)
(1) SSH into the manager
vagrant ssh swarm-manager
(2) View help
docker swarm --help
(3) List the nodes
[vagrant@swarm-manager ~]$ docker node ls
ID                            HOSTNAME         STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
bq44d6sisyp1qphp378h7cgun *   swarm-manager    Ready    Active         Leader           18.09.7
kfsv1hmilij4ksn5nlbbfvzuk     swarm-worker1    Ready    Active                          18.09.7
999r8orw4045nwj9l1kyjh2pl     swarm-worker2    Ready    Active                          18.09.7
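With all three nodes Ready, a quick smoke test is to deploy a replicated service and watch where the tasks land. The service name "web", the nginx image, and the replica count below are illustrative choices, not from the original notes:

```shell
# Run on the manager. Creates a 3-replica nginx service and inspects it.
docker service create --name web --replicas 3 nginx
docker service ls          # overall service status (REPLICAS should reach 3/3)
docker service ps web      # shows which node each replica was scheduled on
docker service rm web      # clean up the test service afterwards
```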