Deploying a Kubernetes Cluster the Binary Way (Part 1)

1. Environment Preparation

Kubernetes cluster deployment architecture plan:

1.1 Operating system:

CentOS Linux release 7.4

1.2 Software versions:

Docker 18.09.0-ce
Kubernetes 1.11

1.3 Server roles, IPs, and components:

k8s-master: 192.168.0.150   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, Dashboard
k8s-node1:  192.168.0.151   kubelet, kube-proxy, docker, flannel, etcd
k8s-node2:  192.168.0.152   kubelet, kube-proxy, docker, flannel, etcd

On all three servers: set a static IP, set the hostname, add mutual name resolution, disable the firewall, and take a snapshot.

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33   # configure a static IP
TYPE="Ethernet"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR=192.168.0.150
GATEWAY=192.168.0.1
NETMASK=255.255.255.0
DNS1=223.5.5.5
DNS2=223.6.6.6
[root@localhost ~]# systemctl restart network   # restart the network service
[root@localhost ~]# hostnamectl set-hostname k8s-master    # set the hostname
[root@k8s-master ~]# vim /etc/hosts    # name resolution
192.168.0.150  k8s-master
192.168.0.151  k8s-node1
192.168.0.152  k8s-node2
[root@k8s-master ~]# systemctl stop firewalld    # disable the firewall and SELinux
[root@k8s-master ~]# systemctl disable firewalld
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# vim /etc/selinux/config    # set SELINUX=disabled
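Note that setenforce 0 only turns SELinux off until the next reboot; the config-file edit makes it permanent. A minimal sketch of the same edit as a function (the file-path argument is only there so it can be tried against a copy of the file):

```shell
# Permanently disable SELinux by rewriting the SELINUX= line in the
# config file. On a real host pass /etc/selinux/config (the default).
disable_selinux() {
  local cfg="${1:-/etc/selinux/config}"
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
}
```

Combined with setenforce 0 above, SELinux is off both now and after a reboot.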

2. Deploy the Etcd Cluster

2.1 Use cfssl to generate self-signed certificates. Download the cfssl tools:

The SSL certificates only need to be generated on the master server.

# download the cfssl tools
[root@k8s-master ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master ~]# ls
anaconda-ks.cfg             cfssljson_linux-amd64 
cfssl-certinfo_linux-amd64  cfssl_linux-amd64
[root@k8s-master ~]# chmod +x cfssljson_linux-amd64 cfssl-certinfo_linux-amd64 cfssl_linux-amd64      # make them executable
[root@k8s-master ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master ~]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
[root@k8s-master ~]# mkdir /opt/ssl/etcd -p    # directory for the etcd certificates
[root@k8s-master ~]# cd /opt/ssl/etcd/

2.2 Create the three files needed to generate the etcd certificates:

[root@k8s-master etcd]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
[root@k8s-master etcd]# vim ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
[root@k8s-master etcd]# vim server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.0.150",
    "192.168.0.151",
    "192.168.0.152"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

2.3 Generate the certificates:

[root@k8s-master etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s-master etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@k8s-master etcd]# ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem

2.4 Install etcd:

Binary package download:
https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
The following steps are the same on all three planned etcd nodes; the only difference is that the server IPs in the etcd config file must be the current node's:

2.4.1 Unpack the binary package:
[root@k8s-master ~]# mkdir /opt/etcd/{bin,cfg,ssl} -p
[root@k8s-master ~]# tar -xvzf etcd-v3.2.12-linux-amd64.tar.gz 
[root@k8s-master ~]# mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
[root@k8s-master ~]# ls /opt/etcd/bin/
etcd  etcdctl
2.4.2 Create the etcd configuration file:
[root@k8s-master ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.150:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.150:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.150:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.150:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.150:2380,etcd02=https://192.168.0.151:2380,etcd03=https://192.168.0.152:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
* ETCD_NAME: node name (edit per node)
* ETCD_DATA_DIR: data directory
* ETCD_LISTEN_PEER_URLS: cluster peer listen address (edit per node)
* ETCD_LISTEN_CLIENT_URLS: client listen address (edit per node)
* ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address (edit per node)
* ETCD_ADVERTISE_CLIENT_URLS: advertised client address (edit per node)
* ETCD_INITIAL_CLUSTER: addresses of all cluster members (edit)
* ETCD_INITIAL_CLUSTER_TOKEN: cluster token
* ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
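Since only ETCD_NAME and the node's own IP change between the three machines, the config file can be generated from two variables instead of hand-edited. This is just a sketch (the function name and argument order are illustrative, not part of the original procedure):

```shell
# Generate the etcd config file for one node. Usage:
#   gen_etcd_cfg etcd01 192.168.0.150 /opt/etcd/cfg/etcd
gen_etcd_cfg() {
  local name="$1" ip="$2" out="$3"
  cat > "$out" <<EOF
#[Member]
ETCD_NAME="$name"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="https://$ip:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ip:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.150:2380,etcd02=https://192.168.0.151:2380,etcd03=https://192.168.0.152:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}
```

On node1, for example, you would run: gen_etcd_cfg etcd02 192.168.0.151 /opt/etcd/cfg/etcd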
2.4.3 Manage etcd with systemd:

This allows etcd to be started and stopped with systemctl.

[root@k8s-master ~]# vim /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2.4.4 Copy the certificates generated earlier to the paths referenced in the unit file:
[root@k8s-master etcd]# pwd
/opt/ssl/etcd
[root@k8s-master etcd]# cp *.pem /opt/etcd/ssl/
[root@k8s-master etcd]# ls /opt/etcd/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
[root@k8s-master etcd]# scp *pem k8s-node1:/opt/etcd/ssl/
[root@k8s-master etcd]# scp *pem k8s-node2:/opt/etcd/ssl/
[root@k8s-node1 ~]# ls /opt/etcd/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
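The two scp commands above can be wrapped in a loop over the node list. A sketch only: it assumes passwordless SSH to the hostnames in /etc/hosts, and the DRY_RUN switch is an illustrative addition for previewing the commands before running them.

```shell
# Copy the etcd certificates to every node. With DRY_RUN=1 the scp
# commands are only printed, not executed.
copy_etcd_certs() {
  local node
  for node in k8s-node1 k8s-node2; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "scp /opt/ssl/etcd/*.pem $node:/opt/etcd/ssl/"
    else
      scp /opt/ssl/etcd/*.pem "$node:/opt/etcd/ssl/"
    fi
  done
}
```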
2.4.5 Start etcd and enable it at boot:

Start etcd on the node machines first: a single member cannot reach quorum, so the first systemctl start will hang until a second member comes up.

# systemctl start etcd
# systemctl enable etcd
2.4.6 Check the etcd cluster status:
[root@k8s-master ~]# vim check_etcd
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.0.150:2379,https://192.168.0.151:2379,https://192.168.0.152:2379" \
cluster-health

[root@k8s-master ~]# sh check_etcd
member 4f049cf7fa3eb446 is healthy: got healthy result from https://192.168.0.152:2379
member b7eac1b154cd2829 is healthy: got healthy result from https://192.168.0.151:2379
member d501b26d8a46794b is healthy: got healthy result from https://192.168.0.150:2379
cluster is healthy
If you see output like the above, the cluster was deployed successfully.

2.5 Troubleshooting

If anything goes wrong, check the logs first: /var/log/messages or journalctl -xeu etcd
Error:
Jan 15 12:06:55 k8s-master1 etcd: request cluster ID mismatch (got 99f4702593c94f98 want cdf818194e3a8c32)
Fix: during cluster setup a single etcd instance was started on its own as a test. When the other etcd services in the cluster start for the first time they bootstrap against the configured cluster, so the stale member data must be deleted. Do the following on all nodes:

[root@k8s-master1 default.etcd]# pwd
/var/lib/etcd/default.etcd
[root@k8s-master1 default.etcd]# rm -rf member/
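The stop/wipe/restart sequence has to be repeated on every node, so a small helper keeps it consistent. A sketch only; the function name and the data-dir argument (defaulting to the path shown above) are illustrative:

```shell
# Remove a node's stale etcd member data so it can rejoin the cluster
# cleanly. Pass the data dir as an argument (defaults to the standard path).
reset_etcd_member() {
  local data_dir="${1:-/var/lib/etcd/default.etcd}"
  systemctl stop etcd 2>/dev/null || true   # ignore if etcd is not running
  rm -rf "$data_dir/member"
}
```

After running it on all nodes, start etcd again with systemctl start etcd.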

3. Install Docker on the Nodes

Note: Docker is installed on the node machines only. Our cluster currently has two nodes, and it must be installed on both.

[root@k8s-node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-node1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-node1 ~]# yum makecache fast
[root@k8s-node1 ~]# yum -y install docker-ce
[root@k8s-node1 ~]# mkdir -p /etc/docker
[root@k8s-node1 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://2qu17v71.mirror.aliyuncs.com"]
}
EOF
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl start docker
[root@k8s-node1 ~]# systemctl enable docker

4. Deploy the Flannel Network

Flannel allows containers on different Docker hosts to communicate with one another.

4.1 Flannel stores its own subnet information in etcd, so it must be able to reach etcd. Write the predefined subnet (on the master node):

# first change to the etcd certificate directory, /opt/etcd/ssl/
[root@k8s-master ssl]# pwd
/opt/etcd/ssl
[root@k8s-master ssl]# /opt/etcd/bin/etcdctl \
> --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
> --endpoints="https://192.168.0.150:2379,https://192.168.0.151:2379,https://192.168.0.152:2379" \
> set /coreos.com/network/config  '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
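etcd will happily store a malformed value and flanneld will only complain when it reads it, so it can be worth validating the JSON payload before writing it. A sketch using Python's json.tool (an assumption for illustration; CentOS 7 ships Python 2, which includes it):

```shell
# Check that a string is valid JSON; returns non-zero otherwise.
valid_json() {
  local py
  py=$(command -v python || command -v python3) || return 1
  printf '%s' "$1" | "$py" -m json.tool > /dev/null
}
# e.g. valid_json '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
```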

Perform the following deployment steps on every planned node.

4.2 Download and unpack the binary package:

[root@k8s-node1 ~]# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
[root@k8s-node1 ~]# tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
[root@k8s-node1 ~]# mkdir -pv /opt/kubernetes/bin
[root@k8s-node1 ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

4.3 Configure Flannel:

[root@k8s-node1 ~]# mkdir -pv /opt/kubernetes/cfg/
mkdir: created directory ‘/opt/kubernetes/cfg/’
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.0.150:2379,https://192.168.0.151:2379,https://192.168.0.152:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

4.4 Manage Flannel with systemd:

[root@k8s-node1 ~]# vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

4.5 Configure Docker to use the Flannel-assigned subnet:

[root@k8s-node1 ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
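The EnvironmentFile line works because mk-docker-opts.sh turns flanneld's /run/flannel/subnet.env into Docker's network options. Roughly, the idea is the following (a simplified sketch, not the real script; the file-path argument is there so it can be tried against a sample file):

```shell
# Derive Docker's --bip/--mtu options from a flannel subnet.env file,
# the way mk-docker-opts.sh does (simplified). subnet.env contains shell
# assignments such as FLANNEL_SUBNET=172.17.101.1/24 and FLANNEL_MTU=1450.
docker_opts_from_subnet_env() {
  local FLANNEL_NETWORK="" FLANNEL_SUBNET="" FLANNEL_MTU="" FLANNEL_IPMASQ=""
  . "$1"   # source the variable assignments
  # --ip-masq=false because flanneld itself runs with --ip-masq
  echo "--bip=$FLANNEL_SUBNET --ip-masq=false --mtu=$FLANNEL_MTU"
}
```

These are exactly the options you will see on the dockerd command line in section 4.8.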

4.6 Copy the certificate files from the master node to node1 and node2: the nodes have no certificates, but flanneld needs them.

In this lab etcd and flannel are deployed on the same servers, and the certificates were already copied over when deploying etcd, so no copy is needed here. In production, however, the etcd cluster is deployed on separate machines, so at this point you would copy the certificates to the node machines.

4.7 Start Flannel and restart Docker:

[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node1 ~]# systemctl restart docker

4.8 Check that it took effect:

[root@k8s-node2 ~]# ps -ef |grep docker
root      11621      1  0 21:20 ?        00:00:00 /usr/bin/dockerd --bip=172.17.101.1/24 --ip-masq=false --mtu=1450
root      11817  11450  0 21:21 pts/0    00:00:00 grep --color=auto docker
[root@k8s-node2 ~]# ip a

Make sure docker0 and flannel.1 are in the same subnet.
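For the /24 subnets flannel hands out here, "same subnet" just means the first three octets match. A throwaway check (an illustrative helper, assuming /24 masks as in this deployment):

```shell
# Return success if two dotted-quad addresses share the same /24 prefix.
# Good enough for flannel's per-node /24 subnets; not a general CIDR check.
same_24() {
  [ "${1%.*}" = "${2%.*}" ]
}
# e.g. compare the docker0 and flannel.1 addresses from ip a:
#   same_24 172.17.101.1 172.17.101.0 && echo "same /24"
```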

To test connectivity between nodes, ping the other node's docker0 IP from the current node:

[root@k8s-node1 ~]# ping 172.17.101.1
PING 172.17.101.1 (172.17.101.1) 56(84) bytes of data.
64 bytes from 172.17.101.1: icmp_seq=1 ttl=64 time=0.861 ms
64 bytes from 172.17.101.1: icmp_seq=2 ttl=64 time=0.275 ms
[root@k8s-node2 ~]# ping 172.17.28.1
PING 172.17.28.1 (172.17.28.1) 56(84) bytes of data.
64 bytes from 172.17.28.1: icmp_seq=1 ttl=64 time=0.578 ms
64 bytes from 172.17.28.1: icmp_seq=2 ttl=64 time=0.583 ms

If the pings succeed, Flannel was deployed successfully.

The next steps continue in: Deploying a Kubernetes Cluster the Binary Way (Part 2)
