OS: CentOS 7.3
master 192.168.18.103 (installed components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler)
node1 192.168.18.104 (installed components: etcd, docker, kubelet, kube-proxy, flannel)
node2 192.168.18.105 (installed components: etcd, docker, kubelet, kube-proxy, flannel)
Note: to save machines I installed etcd on the node hosts here; normally etcd should run on three dedicated machines.
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
# Configure a Docker registry mirror (accelerator)
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
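The set_mirror.sh helper effectively just registers a registry mirror with the Docker daemon. If you prefer not to pipe a remote script into sh, an equivalent result (assuming the same mirror URL) is to write the mirror into the daemon configuration yourself and restart Docker:

```json
{
  "registry-mirrors": ["http://bc437cce.m.daocloud.io"]
}
```

Save this as /etc/docker/daemon.json, then run systemctl restart docker for it to take effect.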
systemctl start docker
systemctl enable docker
docker version # check the installed version
1 Install the certificate generation tool cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
2 Generate the certificates. Note: the certificates are created in whatever directory you run the commands from; I put them under /home/yx/ssl.
In that directory, create three JSON files and feed them to cfssl: ca-config.json (the CA signing policy), ca-csr.json (the CA signing request), and server-csr.json (the server signing request, listing the three node IPs).
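The heredoc bodies for these three files did not survive in this copy of the notes, so the following is a typical set for an etcd server certificate rather than the author's exact files. The profile name "www", the 87600h (10-year) expiry, and the names fields are assumptions; the hosts list uses the three node IPs from above:

```shell
# ca-config.json: signing policy with one profile (profile name "www" is assumed)
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

# ca-csr.json: request for the self-signed CA certificate
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

# server-csr.json: server request; hosts must list every etcd member IP
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": ["192.168.18.103", "192.168.18.104", "192.168.18.105"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF
```

Then generate ca.pem/ca-key.pem and server.pem/server-key.pem with `cfssl gencert -initca ca-csr.json | cfssljson -bare ca -` followed by `cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server`.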
1 Create the directories:
mkdir -p /home/yx/etcd/{ssl,bin,cfg}
2 Download etcd, unpack it, and move the binaries into place.
Download from https://github.com/etcd-io/etcd/releases; I installed etcd-v3.2.12-linux-amd64.tar.gz.
Move the binaries:
mv etcd-v3.2.12-linux-amd64/etcd /home/yx/etcd/bin/
mv etcd-v3.2.12-linux-amd64/etcdctl /home/yx/etcd/bin/
3 Configuration file
# Adjust the IPs for your own environment
[root@tidb-tidb-03 ~]# cat /home/yx/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/home/yx/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.18.103:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.18.103:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.18.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.18.103:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.18.103:2380,etcd02=https://192.168.18.104:2380,etcd03=https://192.168.18.105:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
4 Copy the certificates
# Copy the generated certificates into etcd's ssl directory
cp -r /home/yx/ssl/*.pem /home/yx/etcd/ssl
5 Configure the etcd systemd unit
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
# Path to the configuration file created in step 3
EnvironmentFile=/home/yx/etcd/cfg/etcd
ExecStart=/home/yx/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/home/yx/etcd/ssl/server.pem \
--key-file=/home/yx/etcd/ssl/server-key.pem \
--peer-cert-file=/home/yx/etcd/ssl/server.pem \
--peer-key-file=/home/yx/etcd/ssl/server-key.pem \
--trusted-ca-file=/home/yx/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/home/yx/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
6 Start etcd, then copy the unit file and configuration to the other two machines
systemctl start etcd
systemctl enable etcd
ps -ef | grep etcd
Note: if only one member has been started, the log will show warnings; they stop once all three members are up.
# If startup fails, check /var/log/messages or run journalctl -u etcd for details
scp -r /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
scp -r /home/yx/etcd/ [email protected]:/home/yx/ # this includes the configuration file, certificates, etc.
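After copying, the configuration on each node must be edited, since ETCD_NAME and the listen/advertise URLs are node-specific. As a sketch, /home/yx/etcd/cfg/etcd on node1 (192.168.18.104) would look like this, assuming the member naming used above:

```shell
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/home/yx/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.18.104:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.18.104:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.18.104:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.18.104:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.18.103:2380,etcd02=https://192.168.18.104:2380,etcd03=https://192.168.18.105:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```

Only ETCD_NAME and the four local URLs change per node; ETCD_INITIAL_CLUSTER stays identical everywhere.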
7 Check the etcd cluster status:
Before checking, change into the certificate directory: cd /home/yx/etcd/ssl
/home/yx/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.103:2379,https://192.168.18.104:2379,https://192.168.18.105:2379" cluster-health
member 257fc3f3cf354f5 is healthy: got healthy result from https://192.168.18.103:2379
member 49ea5453c8e9f751 is healthy: got healthy result from https://192.168.18.105:2379
member 59f641f0f8e3fd7a is healthy: got healthy result from https://192.168.18.104:2379
cluster is healthy
If you see output like the above, the cluster has been deployed successfully.
1 On the master, write the allocated overlay subnet into etcd for flanneld to use
[yx@tidb-tidb-03 ssl]$ /home/yx/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.103:2379,https://192.168.18.104:2379,https://192.168.18.105:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
# The following output indicates success
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
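You can read the key back from any node to confirm it was stored, using the same certificate flags as above:

```shell
/home/yx/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.18.103:2379" get /coreos.com/network/config
```

It should print the same JSON document that was written.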
2 Create the directories (on every node)
mkdir -p /home/yx/flannel/{cfg,ssl,bin}
3 Download, unpack, and move the binaries
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /home/yx/flannel/bin
4 Write the flannel configuration file
cat /home/yx/flannel/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.18.103:2379,https://192.168.18.104:2379,https://192.168.18.105:2379 -etcd-cafile=/home/yx/etcd/ssl/ca.pem -etcd-certfile=/home/yx/etcd/ssl/server.pem -etcd-keyfile=/home/yx/etcd/ssl/server-key.pem"
5 Configure systemd to manage flanneld
# The paths here must be correct, or you will hit all kinds of errors
[yx@tidb-tikv-01 ~]$ cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/home/yx/flannel/cfg/flanneld
ExecStart=/home/yx/flannel/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/home/yx/flannel/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
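For reference, flanneld writes its subnet lease to /run/flannel/subnet.env at startup, and with the -d flag used above, mk-docker-opts.sh then rewrites that file into the single variable that docker.service consumes. After a successful start it ends up looking roughly like this (the example lease 172.17.62.0/24 matches the ps output shown later; your node will lease a different /24 from 172.17.0.0/16):

```shell
# /run/flannel/subnet.env after mk-docker-opts.sh has run (example values)
DOCKER_NETWORK_OPTIONS=" --bip=172.17.62.1/24 --ip-masq=false --mtu=1450"
```

docker.service sources this file via EnvironmentFile and passes $DOCKER_NETWORK_OPTIONS to dockerd, which is how docker0 lands inside the flannel subnet.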
6 Configure Docker to start on flannel's subnet. Back up docker.service before changing it.
[yx@tidb-tikv-01 ~]$ cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
7 Start flannel and docker
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
8 Verify
[yx@tidb-tikv-01 ~]$ ps -ef |grep docker
root 977 1 0 17:19 ? 00:00:02 /usr/bin/dockerd --bip=172.17.62.1/24 --ip-masq=false --mtu=1450
root 984 977 0 17:19 ? 00:00:03 containerd --config /var/run/docker/containerd/containerd.toml --log-level info
yx 11000 30416 0 17:54 pts/0 00:00:00 grep --color=auto docker
Then check with ip addr that docker0 and flannel.1 are in the same subnet,
as shown in the figure below.
From this node, ping the docker0 IP of the other node; it should be reachable.
9 If everything works, copy the files and unit scripts to the other node
scp -r flannel/ [email protected]:/home/yx/
scp -r /usr/lib/systemd/system/flanneld.service [email protected]:/usr/lib/systemd/system/
scp -r /usr/lib/systemd/system/docker.service [email protected]:/usr/lib/systemd/system/