Deploying Kubernetes from Binaries

I. Installation requirements

Before starting, the machines used to deploy the Kubernetes cluster must meet the following conditions:

1. One or more machines running CentOS 7.x x86_64

2. Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more

3. Internet access to pull images; if the servers cannot reach the Internet, download the images in advance and import them on each node

4. Swap disabled

二、单Master服务器规划

k8s-master    192.168.31.71    kube-apiserver,kube-controller-manager,kube-scheduler,etcd

k8s-node1    192.168.31.72    kubelet,kube-proxy,docker etcd

k8s-node2    192.168.31.73    kubelet,kube-proxy,docker,etcd

高可用集群规划(在单点上扩展的)

192.168.10.136  master1

192.168.10.137  node

192.168.10.138  node

192.168.10.139  master2

192.168.10.140  load

192.168.10.141  load

192.168.10.142  vip

III. Operating system initialization

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

# Set the hostname according to the plan, e.g. master1
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.10.136 master1
192.168.10.137 node1
192.168.10.138 node2
EOF

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system  # apply
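These sysctl keys only take effect when the br_netfilter kernel module is loaded, which a fresh CentOS 7 install may not have done yet. A small sketch to load it now and on every boot:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf  # load automatically at boot
lsmod | grep br_netfilter  # verify the module is loaded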

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

IV. Deploy the Etcd cluster (etcd runs on all three machines; perform the following from any one server — here, the master node)

1. Prepare the cfssl certificate tooling. cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
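A quick sanity check that the tools are installed and on the PATH:

cfssl version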

2. Generate Etcd certificates

Self-sign a certificate authority (CA):

① Create the working directories:

mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd

② Create the self-signed CA configuration:

vi ca-config.json

{

  "signing": {

    "default": {

      "expiry": "87600h"

    },

    "profiles": {

      "www": {

         "expiry": "87600h",

         "usages": [

            "signing",

            "key encipherment",

            "server auth",

            "client auth"

        ]

      }

    }

  }

}

vi  ca-csr.json

{

    "CN": "etcd CA",

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "Beijing",

            "ST": "Beijing"

        }

    ]

}

③ Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem  # verify that ca-key.pem and ca.pem were generated

3. Use the self-signed CA to issue the Etcd HTTPS certificate. The IPs in the hosts field below are the internal communication IPs of all etcd nodes; to simplify future scale-out, you can list a few spare IPs as well.

vi server-csr.json

{

    "CN": "etcd",

    "hosts": [

    "192.168.10.136",

    "192.168.10.137",

    "192.168.10.138"

    ],

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "BeiJing",

            "ST": "BeiJing"

        }

    ]

}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem  # verify that server-key.pem and server.pem were generated

4. Download the etcd binary package and deploy it (perform the following on the master; to simplify things, all files generated on the master will be copied to the other etcd nodes afterwards)

Download: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

① Create the working directory and unpack the binaries

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar xf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

② Create the etcd configuration file

vi /opt/etcd/cfg/etcd.conf

#[Member]

ETCD_NAME="etcd-1"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.10.136:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.10.136:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.136:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.136:2379"

ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.10.136:2380,etcd-2=https://192.168.10.137:2380,etcd-3=https://192.168.10.138:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

Notes:

ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listening address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS: listening address for client access
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: addresses of all cluster nodes
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one

③ Manage etcd with systemd

vi /usr/lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

[Service]

Type=notify

EnvironmentFile=/opt/etcd/cfg/etcd.conf

ExecStart=/opt/etcd/bin/etcd \

--cert-file=/opt/etcd/ssl/server.pem \

--key-file=/opt/etcd/ssl/server-key.pem \

--peer-cert-file=/opt/etcd/ssl/server.pem \

--peer-key-file=/opt/etcd/ssl/server-key.pem \

--trusted-ca-file=/opt/etcd/ssl/ca.pem \

--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \

--logger=zap

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

④ Copy the certificates generated earlier to the paths referenced in the unit file

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

⑤ Start etcd and enable it at boot (the start command blocks on the first node until the other members join, because a single member cannot reach quorum — this is expected)

systemctl daemon-reload && systemctl start etcd && systemctl enable etcd

⑥ Copy all the files generated on the master to node1 and node2

scp -r /opt/etcd [email protected]:/opt/

scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/

scp -r /opt/etcd [email protected]:/opt/

scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/

⑦ Then, on node1 and node2, change the node name and the current server's IP in etcd.conf

vi /opt/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd-1"  # change here: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.10.136:2380"    # change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.10.136:2379"  # change to the current server's IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.136:2380"  # change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.136:2379"        # change to the current server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.10.136:2380,etcd-2=https://192.168.10.137:2380,etcd-3=https://192.168.10.138:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Finally, start etcd on each node and enable it at boot, as above.

⑧ Check the cluster status (a healthy cluster reports "is healthy: successfully committed proposal" for each endpoint)

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.136:2379,https://192.168.10.137:2379,https://192.168.10.138:2379" endpoint health
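With all three members up, each endpoint should report something like the following (timings will vary):

https://192.168.10.136:2379 is healthy: successfully committed proposal: took = 12.345678ms
https://192.168.10.137:2379 is healthy: successfully committed proposal: took = 11.234567ms
https://192.168.10.138:2379 is healthy: successfully committed proposal: took = 13.456789ms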

V. Install Docker (on all nodes; a binary install is used here, but installing with yum works just as well)

Download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

1. Unpack the binary package

tar xf docker-19.03.9.tgz
mv docker/* /usr/bin

2. Manage docker with systemd

vi /usr/lib/systemd/system/docker.service

[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

After=network-online.target firewalld.service

Wants=network-online.target

[Service]

Type=notify

ExecStart=/usr/bin/dockerd

ExecReload=/bin/kill -s HUP $MAINPID

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity

TimeoutStartSec=0

Delegate=yes

KillMode=process

Restart=on-failure

StartLimitBurst=3

StartLimitInterval=60s

[Install]

WantedBy=multi-user.target

3. Create the configuration file

mkdir /etc/docker
vi /etc/docker/daemon.json

{

  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]

}

4. Start docker and enable it at boot

systemctl daemon-reload && systemctl start docker && systemctl enable docker
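The kubelet configuration later in this guide sets cgroupDriver: cgroupfs, which must match Docker's cgroup driver (cgroupfs is the default for a binary Docker install, so this should already agree). A quick check:

docker info | grep -i cgroup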

VI. Deploy the Master

1. Generate the kube-apiserver certificates

Self-sign a certificate authority (CA):

cd ~/TLS/k8s

vi ca-config.json

{

  "signing": {

    "default": {

      "expiry": "87600h"

    },

    "profiles": {

      "kubernetes": {

         "expiry": "87600h",

         "usages": [

            "signing",

            "key encipherment",

            "server auth",

            "client auth"

        ]

      }

    }

  }

}

vi  ca-csr.json

{

    "CN": "kubernetes",

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "Beijing",

            "ST": "Beijing",

            "O": "k8s",

            "OU": "System"

        }

    ]

}

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem  # verify that ca-key.pem and ca.pem were generated

2. Use the self-signed CA to issue the kube-apiserver HTTPS certificate

Create the certificate signing request file:

vi server-csr.json

{

    "CN": "kubernetes",

    "hosts": [

      "10.0.0.1",

      "127.0.0.1",

      "192.168.10.136",

      "192.168.10.137",

      "192.168.10.138",

      "192.168.10.139",

      "192.168.10.140",

      "192.168.10.141",

      "192.168.10.142",

      "kubernetes",

      "kubernetes.default",

      "kubernetes.default.svc",

      "kubernetes.default.svc.cluster",

      "kubernetes.default.svc.cluster.local"

    ],

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "BeiJing",

            "ST": "BeiJing",

            "O": "k8s",

            "OU": "System"

        }

    ]

}

Note: the IPs in the hosts field above must include every Master/LB/VIP IP — not one can be missing! To simplify future scale-out, you can list a few spare IPs as well.

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

ls server*pem  # verify that server-key.pem and server.pem were generated

3. Download the binaries for deploying the master and worker nodes (the release page lists many packages; downloading the server package alone is enough, as it contains both the Master and Worker binaries)

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183

① Unpack the binary package

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

4. Deploy kube-apiserver

① Create the configuration file (note: inside the heredoc, `\\` writes a single literal backslash, so the file EOF produces keeps its line continuations)

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.10.136:2379,https://192.168.10.137:2379,https://192.168.10.138:2379 \\
--bind-address=192.168.10.136 \\
--secure-port=6443 \\
--advertise-address=192.168.10.136 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Notes:

--logtostderr: set to false to log to files instead of stderr
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listening address
--secure-port: HTTPS port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to access kubelets
--tls-xxx-file: apiserver HTTPS certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

② Copy the certificates generated earlier to the paths referenced in the configuration file:

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

③ Enable the TLS Bootstrapping mechanism

TLS bootstrapping: once the Master apiserver enables TLS authentication, the kubelet and kube-proxy on every node must use valid CA-signed certificates to communicate with kube-apiserver. When there are many nodes, issuing these client certificates by hand takes a lot of work and makes the cluster harder to scale. To simplify this, Kubernetes introduced the TLS bootstrapping mechanism for issuing client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on nodes. It is currently used mainly for the kubelet; kube-proxy still uses a certificate that we issue ourselves.

④ Create the token file referenced in the configuration above (format: token,user,UID,group)

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

The token can also be generated yourself and substituted in (random token):

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
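A small sketch combining the two steps — generate a fresh token and rewrite token.csv in one go (whatever token ends up in this file must also be used later when generating bootstrap.kubeconfig):

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
cat /opt/kubernetes/cfg/token.csv  # confirm the contents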

⑤ Manage the apiserver with systemd

vi /usr/lib/systemd/system/kube-apiserver.service

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf

ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

⑥ Start the apiserver and enable it at boot

systemctl daemon-reload && systemctl start kube-apiserver && systemctl enable kube-apiserver

⑦ Authorize the kubelet-bootstrap user to request certificates (kubectl needs a configured kubeconfig first — see step ⑧ below — otherwise this command will not work)

kubectl create clusterrolebinding kubelet-bootstrap \

--clusterrole=system:node-bootstrapper \

--user=kubelet-bootstrap

⑧ Configure kubectl by creating a kubeconfig file (note where the commands run: execute them in the directory that holds the certificates)

# Generate the admin certificate (on the master node)

cd /root/TLS/k8s

vi admin-csr.json

{

  "CN": "admin",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "L": "BeiJing",

      "ST": "BeiJing",

      "O": "system:masters",

      "OU": "System"

    }

  ]

}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Set cluster parameters

kubectl config set-cluster kubernetes \

  --server=https://192.168.10.136:6443 \

  --certificate-authority=ca.pem \

  --embed-certs=true \

  --kubeconfig=config

# Set client authentication parameters

kubectl config set-credentials cluster-admin \

  --certificate-authority=ca.pem \

  --embed-certs=true \

  --client-key=admin-key.pem \

  --client-certificate=admin.pem \

  --kubeconfig=config

# Set context parameters

kubectl config set-context default \

  --cluster=kubernetes \

  --user=cluster-admin \

  --kubeconfig=config

# Set the default context

kubectl config use-context default --kubeconfig=config

# Put the kubeconfig where kubectl looks for it

mkdir -p /root/.kube

mv config /root/.kube/

5. Deploy kube-controller-manager

① Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Notes:

--master: connect to the apiserver through the local insecure port 8080.
--leader-elect: automatic leader election when several instances of this component run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA that automatically signs kubelet certificates; must match the apiserver's CA

② Manage controller-manager with systemd

vi /usr/lib/systemd/system/kube-controller-manager.service

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf

ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

③ Start it and enable it at boot

systemctl daemon-reload && systemctl start kube-controller-manager && systemctl enable kube-controller-manager

systemctl status kube-controller-manager

6. Deploy kube-scheduler

① Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

Notes:

--master: connect to the apiserver through the local insecure port 8080.
--leader-elect: automatic leader election when several instances run (HA)

② Manage kube-scheduler with systemd

vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf

ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

③ Start it and enable it at boot

systemctl daemon-reload && systemctl start kube-scheduler && systemctl enable kube-scheduler

systemctl status kube-scheduler

④ Check the cluster status

All control-plane components are now up; check their status with kubectl:

kubectl get cs
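If every component is healthy, the output should look something like:

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}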

VII. Deploy the worker nodes

1. Create directories and copy the binaries from the master to the nodes

① Create the working directory on all worker nodes

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

② Copy from the master node:

scp /opt/kubernetes/ssl/ca.pem 192.168.10.137:/opt/kubernetes/ssl/

scp /opt/kubernetes/ssl/ca.pem 192.168.10.138:/opt/kubernetes/ssl/

cd /root/kubernetes/server/bin

scp kubelet kube-proxy 192.168.10.137:/opt/kubernetes/bin

scp kubelet kube-proxy 192.168.10.138:/opt/kubernetes/bin

scp kubectl 192.168.10.137:/usr/bin

scp kubectl 192.168.10.138:/usr/bin

2. Deploy the kubelet

① Create the configuration file (--hostname-override must be set to each node's own hostname)

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=node1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

Notes:

--hostname-override: the node's display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameters file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the container that manages the Pod network

② Configuration parameters file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF

kind: KubeletConfiguration

apiVersion: kubelet.config.k8s.io/v1beta1

address: 0.0.0.0

port: 10250

readOnlyPort: 10255

cgroupDriver: cgroupfs

clusterDNS:

- 10.0.0.2

clusterDomain: cluster.local

failSwapOn: false

authentication:

  anonymous:

    enabled: false

  webhook:

    cacheTTL: 2m0s

    enabled: true

  x509:

    clientCAFile: /opt/kubernetes/ssl/ca.pem

authorization:

  mode: Webhook

  webhook:

    cacheAuthorizedTTL: 5m0s

    cacheUnauthorizedTTL: 30s

evictionHard:

  imagefs.available: 15%

  memory.available: 100Mi

  nodefs.available: 10%

  nodefs.inodesFree: 5%

maxOpenFiles: 1000000

maxPods: 110

EOF

③ Generate the bootstrap.kubeconfig file (run the following on the master node, in /opt/kubernetes/cfg)

Set variables:

KUBE_APISERVER="https://192.168.10.136:6443"  # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940"  # must match the token in token.csv

Generate the kubelet bootstrap kubeconfig:

kubectl config set-cluster kubernetes \

  --certificate-authority=/opt/kubernetes/ssl/ca.pem \

  --embed-certs=true \

  --server=${KUBE_APISERVER} \

  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials "kubelet-bootstrap" \

  --token=${TOKEN} \

  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \

  --cluster=kubernetes \

  --user="kubelet-bootstrap" \

  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

scp /opt/kubernetes/cfg/bootstrap.kubeconfig 192.168.10.137:/opt/kubernetes/cfg/

scp /opt/kubernetes/cfg/bootstrap.kubeconfig 192.168.10.138:/opt/kubernetes/cfg/

④ Manage the kubelet with systemd

vi  /usr/lib/systemd/system/kubelet.service

[Unit]

Description=Kubernetes Kubelet

After=docker.service

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf

ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

⑤ Start the kubelet and enable it at boot

systemctl daemon-reload && systemctl start kubelet && systemctl enable kubelet

ps -ef | grep kubelet

netstat -antp | grep 10250 

⑥ Approve the kubelet certificate requests and join the nodes to the cluster (run the following on the master)

# List the kubelet certificate requests (shows which nodes have requested certificates)
kubectl get csr

# Approve a request, appending the request name returned by the previous command
kubectl certificate approve <csr-name>

# List the nodes
kubectl get node  # nodes show NotReady at this point because the CNI network plugin has not been deployed yet
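At this stage the listing should look something like this (name, age, and version will vary):

NAME    STATUS     ROLES    AGE   VERSION
node1   NotReady   <none>   15s   v1.18.3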

3. Deploy kube-proxy

① Create the configuration file

vi  /opt/kubernetes/cfg/kube-proxy.conf

KUBE_PROXY_OPTS="--logtostderr=false \

--v=2 \

--log-dir=/opt/kubernetes/logs \

--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

② Configuration parameters file

vi  /opt/kubernetes/cfg/kube-proxy-config.yml

kind: KubeProxyConfiguration

apiVersion: kubeproxy.config.k8s.io/v1alpha1

bindAddress: 0.0.0.0

metricsBindAddress: 0.0.0.0:10249

clientConnection:

  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig

hostnameOverride: node1   # change to this node's own hostname

clusterCIDR: 10.244.0.0/16   # Pod network CIDR, matching --cluster-cidr on the controller-manager

③ Generate the kube-proxy certificate (run the following on the master node)

# Switch to the working directory
cd /root/TLS/k8s

# Create the certificate signing request file
vi kube-proxy-csr.json

{

  "CN": "system:kube-proxy",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "L": "BeiJing",

      "ST": "BeiJing",

      "O": "k8s",

      "OU": "System"

    }

  ]

}

# Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

ls kube-proxy*pem  # verify that kube-proxy-key.pem and kube-proxy.pem were generated

# Copy the certificates into the master's own ssl directory (step ④ below reads them from there) and to the worker nodes
cp kube-proxy-key.pem kube-proxy.pem /opt/kubernetes/ssl/
scp kube-proxy-key.pem kube-proxy.pem 192.168.10.137:/opt/kubernetes/ssl
scp kube-proxy-key.pem kube-proxy.pem 192.168.10.138:/opt/kubernetes/ssl

④ Generate the kubeconfig file:

KUBE_APISERVER="https://192.168.10.136:6443"

kubectl config set-cluster kubernetes \

  --certificate-authority=/opt/kubernetes/ssl/ca.pem \

  --embed-certs=true \

  --server=${KUBE_APISERVER} \

  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \

  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \

  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \

  --embed-certs=true \

  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \

  --cluster=kubernetes \

  --user=kube-proxy \

  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
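kube-proxy on each node reads this file from /opt/kubernetes/cfg (see the kube-proxy-config.yml above), so copy the generated kubeconfig out to the nodes, just as was done for bootstrap.kubeconfig:

scp kube-proxy.kubeconfig 192.168.10.137:/opt/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.10.138:/opt/kubernetes/cfg/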

⑤ Manage kube-proxy with systemd

vi  /usr/lib/systemd/system/kube-proxy.service

[Unit]

Description=Kubernetes Proxy

After=network.target

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf

ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

⑥ Start kube-proxy and enable it at boot

systemctl daemon-reload && systemctl start kube-proxy && systemctl enable kube-proxy

ps -ef | grep kube-proxy

netstat -antp | grep 10249

4. Deploy the CNI network plugin (the binaries go on the worker nodes)

① Prepare the CNI binaries (these can be downloaded directly)

Download: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

② Unpack the package into the default working directory

mkdir /opt/cni/bin -p

tar xf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

③ Deploy the CNI network (apply from the master; if the link below is unreachable, download the yaml file by other means — the default image registry is also unreachable, so it is replaced with a Docker Hub mirror)

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml

kubectl apply -f kube-flannel.yml

kubectl get node  # the nodes should now report Ready
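You can also watch the flannel pods come up; the nodes flip to Ready once a flannel pod is Running on each of them (pod names are generated, so yours will differ):

kubectl get pods -n kube-system | grep flannel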

④ Authorize the apiserver to access the kubelet

vi  apiserver-to-kubelet-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRole

metadata:

  annotations:

    rbac.authorization.kubernetes.io/autoupdate: "true"

  labels:

    kubernetes.io/bootstrapping: rbac-defaults

  name: system:kube-apiserver-to-kubelet

rules:

  - apiGroups:

      - ""

    resources:

      - nodes/proxy

      - nodes/stats

      - nodes/log

      - nodes/spec

      - nodes/metrics

      - pods/log

    verbs:

      - "*"

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: system:kube-apiserver

  namespace: ""

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: system:kube-apiserver-to-kubelet

subjects:

  - apiGroup: rbac.authorization.k8s.io

    kind: User

    name: kubernetes

kubectl apply -f apiserver-to-kubelet-rbac.yaml

VIII. Deploy the Dashboard and CoreDNS

① Deploy the Dashboard (if the link cannot be reached, download the yaml file by other means)

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
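The manifest still has to be applied, and the Service type in the stock recommended.yaml is ClusterIP, while step ④ below expects the UI on port 30001. A minimal sketch, assuming the upstream defaults (a Service named kubernetes-dashboard in the kubernetes-dashboard namespace, port 443):

kubectl apply -f recommended.yaml

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":30001}]}}'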

② Authorize access to the dashboard

cat dashboard-adminuser.yaml

apiVersion: v1

kind: ServiceAccount

metadata:

  name: admin-user

  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: admin-user

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

subjects:

- kind: ServiceAccount

  name: admin-user

  namespace: kubernetes-dashboard

kubectl apply -f dashboard-adminuser.yaml

③ Get a token for logging in to the dashboard page (just copy the token from the output)

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret |grep admin-user|awk '{print $1}')

④ Access the dashboard at: https://NodeIP:30001

⑤ Deploy CoreDNS, which resolves Service names inside the cluster (download the yaml file yourself)

kubectl apply -f coredns.yaml

kubectl get pods -n kube-system

NAME                          READY   STATUS    RESTARTS   AGE

coredns-5ffbfd976d-j6shb      1/1     Running   0          32s

⑥ Test DNS resolution

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh

/ # nslookup kubernetes

Server:    10.0.0.2

Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes

Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Resolution is working as expected.

IX. Adding a new worker node

1. Copy the files of an already-deployed node to the new node. On node1, copy the relevant files to the new node (node2 here):

scp -r /opt/kubernetes [email protected]:/opt/

scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service [email protected]:/usr/lib/systemd/system

scp -r /opt/cni/ [email protected]:/opt/

2. Delete the copied kubelet certificate and kubeconfig file (note: these files are generated automatically when the certificate request is approved and are unique to each node, so they must be deleted and regenerated)

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

3. Change the hostname settings

vi /opt/kubernetes/cfg/kubelet.conf

--hostname-override=node2

vi /opt/kubernetes/cfg/kube-proxy-config.yml

hostnameOverride: node2

4. Start the services and enable them at boot

systemctl daemon-reload

systemctl start kubelet

systemctl enable kubelet

systemctl start kube-proxy

systemctl enable kube-proxy

5. On the master, approve the new node's kubelet certificate request

kubectl get csr

kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

6. Check node status

kubectl get node

The single-master cluster deployment is now complete!

X. High-availability architecture (scaling out to multiple masters: one additional master plus two nginx load balancers)

Cluster IP plan:

192.168.10.136  master1

192.168.10.137  node1

192.168.10.138  node2

192.168.10.139  master2

192.168.10.140  nginx-load

192.168.10.141  nginx-load

192.168.10.142  vip

1. Install Docker (on master2)

Same as above; not repeated here.

2. Deploy master2 (192.168.10.139)

All operations on master2 are identical to those already performed on master1, so we only need to copy all the K8s files from master1, change the server IP and hostname, and start the services.

3. Create the etcd certificate directory

On master2, create the etcd certificate directory:

mkdir -p /opt/etcd/ssl

4. Copy files (on master1)

Copy all the K8s files and etcd certificates from master1 to master2:

scp -r /opt/kubernetes [email protected]:/opt

scp -r /opt/cni/ [email protected]:/opt

scp -r /opt/etcd/ssl [email protected]:/opt/etcd

scp /usr/lib/systemd/system/kube* [email protected]:/usr/lib/systemd/system

scp /usr/bin/kubectl [email protected]:/usr/bin

5. Delete certificate files

Delete the kubelet certificate and kubeconfig file:

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig

rm -f /opt/kubernetes/ssl/kubelet*

6. Update the IP and hostname in the configuration files

Change the apiserver, kubelet, and kube-proxy configuration files to the local IP and hostname:

vi /opt/kubernetes/cfg/kube-apiserver.conf

--bind-address=192.168.10.139 \  # change to this machine's IP

--advertise-address=192.168.10.139 \  # change to this machine's IP

vi /opt/kubernetes/cfg/kubelet.conf

--hostname-override=master2  # change to this machine's hostname

vi /opt/kubernetes/cfg/kube-proxy-config.yml

hostnameOverride: master2  # change to this machine's hostname

7. Start the services and enable them at boot

systemctl daemon-reload

systemctl start kube-apiserver

systemctl start kube-controller-manager

systemctl start kube-scheduler

systemctl start kubelet

systemctl start kube-proxy

systemctl enable kube-apiserver

systemctl enable kube-controller-manager

systemctl enable kube-scheduler

systemctl enable kubelet

systemctl enable kube-proxy

XI. Deploy the Nginx load balancer

1. How it works

Nginx is a mainstream web server and reverse proxy; here it is used as a Layer 4 (TCP) proxy to load-balance the apiservers.

Keepalived is a mainstream high-availability tool that provides hot standby between two servers by binding a VIP. It decides whether to fail over (float the VIP) based on Nginx's running state: if the Nginx master node goes down, the VIP is automatically bound on the Nginx backup node, so the VIP stays reachable and Nginx stays highly available.

2. Install the packages (on both the master and the backup)

yum install epel-release -y

yum install nginx keepalived -y

3. Nginx configuration file (identical on master and backup; the heredoc delimiter is quoted so the $ variables in log_format are written literally rather than expanded)

cat > /etc/nginx/nginx.conf << "EOF"

user nginx;

worker_processes auto;

error_log /var/log/nginx/error.log;

pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {

    worker_connections 1024;

}

stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {

       server 192.168.10.136:6443;   

       server 192.168.10.139:6443;

    }

    server {

       listen 6443;

       proxy_pass k8s-apiserver;

    }

}

http {

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '

                      '$status $body_bytes_sent "$http_referer" '

                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;

    tcp_nopush          on;

    tcp_nodelay         on;

    keepalive_timeout   65;

    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;

    default_type        application/octet-stream;

    server {

        listen       80 default_server;

        server_name  _;

        location / {

        }

    }

}

EOF

4. Keepalived configuration file (nginx master)

cat > /etc/keepalived/keepalived.conf << EOF

global_defs {

   notification_email {

     [email protected]

     [email protected]

     [email protected]

   }

   notification_email_from [email protected]  

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id NGINX_MASTER  # differs between master and backup

}

vrrp_script check_nginx {

    script "/etc/keepalived/check_nginx.sh"

}

vrrp_instance VI_1 {

    state MASTER  # differs between master and backup

    interface ens32  # change to the actual NIC name

    virtual_router_id 10  # set to the third octet of the IP address

    priority 100

    advert_int 1  # VRRP heartbeat advertisement interval; default 1 second

    authentication {

        auth_type PASS      

        auth_pass 1111

    }  

    virtual_ipaddress {

        192.168.10.142/24

    }

    track_script {

        check_nginx

    }

}

EOF

5. Nginx health-check script (on the master; the heredoc delimiter is quoted so that $(...) and $$ are not expanded while the file is written)

cat > /etc/keepalived/check_nginx.sh << "EOF"

#!/bin/bash

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then

    exit 1

else

    exit 0

fi

EOF

chmod +x /etc/keepalived/check_nginx.sh
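To confirm the script behaves the way keepalived expects (exit code 0 while nginx is running, non-zero once it stops), a quick check while nginx is up:

bash /etc/keepalived/check_nginx.sh; echo $?  # expect 0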

6. Keepalived configuration file (nginx backup)

cat > /etc/keepalived/keepalived.conf << EOF

global_defs {

   notification_email {

     [email protected]

     [email protected]

     [email protected]

   }

   notification_email_from [email protected]  

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id NGINX_BACKUP  # differs from the master

}

vrrp_script check_nginx {

    script "/etc/keepalived/check_nginx.sh"

}

vrrp_instance VI_1 {

    state BACKUP  # differs from the master

    interface ens32

    virtual_router_id 10

    priority 90

    advert_int 1

    authentication {

        auth_type PASS      

        auth_pass 1111

    }  

    virtual_ipaddress {

        192.168.10.142/24

    }

    track_script {

        check_nginx

    }

}

EOF

7. Nginx health-check script (on the backup; same as on the master, again with a quoted heredoc delimiter)

cat > /etc/keepalived/check_nginx.sh << "EOF"

#!/bin/bash

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then

    exit 1

else

    exit 0

fi

EOF

chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived judges whether to fail over by the script's exit code (0 means healthy, non-zero means faulty).

8. Start the services and enable them at boot

systemctl daemon-reload

systemctl start nginx

systemctl start keepalived

systemctl enable nginx

systemctl enable keepalived

9. Check keepalived status

ip a  # the VIP address should appear under the ens32 interface

10. Test Nginx + Keepalived failover

Stop nginx on the master node and check whether the VIP floats to the backup server:

On the nginx master, run pkill nginx.

On the nginx backup, ip a shows the VIP has been bound successfully.

11. Test access through the load balancer

From any node in the K8s cluster, query the K8s version through the VIP with curl:

curl -k https://192.168.10.142:6443/version

{

  "major": "1",

  "minor": "18",

  "gitVersion": "v1.18.2",

  "gitCommit": "52c56ce7a8272c798dbc29846288d7cd9fbae032",

  "gitTreeState": "clean",

  "buildDate": "2020-04-16T11:48:36Z",

  "goVersion": "go1.13.9",

  "compiler": "gc",

  "platform": "linux/amd64"

}

The K8s version information is returned correctly, so the load balancer is working properly.

12. The nginx log also shows the requests being forwarded to the apiserver IPs:

tail /var/log/nginx/k8s-access.log -f

13. Point all worker nodes at the LB VIP (run on every worker node):

sed -i 's#192.168.10.136:6443#192.168.10.142:6443#' /opt/kubernetes/cfg/*

systemctl restart kubelet

systemctl restart kube-proxy
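A quick sanity check that every config file on the node now points at the VIP:

grep -r "192.168.10.142:6443" /opt/kubernetes/cfg/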

14. Check node status

kubectl get node  # all nodes report Ready

A complete highly available Kubernetes cluster is now deployed!
