Kubernetes HA cluster deployment (3 masters + 3 nodes)
Etcd is deployed in stacked mode (co-located with the masters)
Keepalived provides a highly available VIP
HAProxy runs as a systemd service and reverse-proxies to port 6443 on the 3 masters
Other main components deployed:
Metrics: resource metrics
Dashboard: Kubernetes graphical UI
Helm: Kubernetes package manager
Ingress: Kubernetes service exposure
Longhorn: Kubernetes dynamic storage component
Deployment plan
Node plan
Hostname    IP               Role            Services
master01    192.168.58.21    K8s master 01   Docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico, haproxy
master02    192.168.58.22    K8s master 02   Docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico, haproxy
master03    192.168.58.23    K8s master 03   Docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, metrics, calico, haproxy
node01      192.168.58.24    K8s node 01     Docker, kubelet, proxy, calico
node02      192.168.58.25    K8s node 02     Docker, kubelet, proxy, calico
node03      192.168.58.26    K8s node 03     Docker, kubelet, proxy, calico
VIP         192.168.58.200
Kubernetes high availability mainly refers to high availability of the control plane, i.e. multiple Master nodes and etcd members; worker nodes connect to the Masters through a load balancer.
Characteristics of the stacked topology, where etcd is co-located with the Master components:
- Requires fewer machines
- Simple to deploy and easy to manage
- Easy to scale out horizontally
- Higher risk: if one host goes down, the cluster loses both a master and an etcd member at once, so cluster redundancy is reduced considerably
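To make the traffic path concrete: keepalived keeps the VIP 192.168.58.200 on one master at a time, while HAProxy on every master listens on 16443 and proxies to the three kube-apiservers on 6443 (the controlPlaneEndpoint used later is 192.168.58.200:16443). The snippet below is only a minimal illustrative sketch of such a haproxy.cfg, not the template that the hakek8s.sh script downloads later:
cat > /etc/haproxy/haproxy.cfg <<'EOF'
global
    daemon
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s
# Frontend on the port used by controlPlaneEndpoint; keepalived moves the VIP between masters
frontend k8s-apiserver
    bind *:16443
    default_backend k8s-masters
# Round-robin to the three kube-apiservers
backend k8s-masters
    balance roundrobin
    server master01 192.168.58.21:6443 check
    server master02 192.168.58.22:6443 check
    server master03 192.168.58.23:6443 check
EOF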
Manually add name resolution
[root@localhost ~]# vim /etc/hosts
192.168.58.21 master01
192.168.58.22 master02
192.168.58.23 master03
192.168.58.24 node01
192.168.58.25 node02
192.168.58.26 node03
[root@master01 ~]# vim k8sinit.sh
#!/bin/sh
# Configure kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv6.conf.all.disable_ipv6 = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf >&/dev/null
swapoff -a
sed -i '/ swap / s/^(.*)$/#\1/g' /etc/fstab
modprobe br_netfilter
# Add ipvs modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

# Install rpm
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel
rpm --import http://down.linuxsb.com/RPM-G...

# Update kernel
rpm -Uvh http://down.linuxsb.com/elrep...
yum --disablerepo="*" --enablerepo="elrepo-kernel" install -y kernel-ml
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=0/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
yum update -y

# Add k8s bin to PATH
echo 'export PATH=/opt/k8s/bin:$PATH' >> /root/.bashrc

# Reboot the machine
reboot
Set the hostname
for host in $(grep "$(ip a | grep global | awk '{print $2}' | cut -d / -f 1)" /etc/hosts | awk '{print $2}'); do hostnamectl set-hostname $host; done
Passwordless SSH login
for host in master0{1..3} node0{1..3}; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}; done
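ssh-copy-id assumes a key pair already exists on master01; if it does not, something like the following generates one first (default path, no passphrase):
# Generate an RSA key pair non-interactively if one does not exist yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa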
Other environment preparation
[root@master01 ~]# vim environment.sh
#!/bin/bash
# Cluster MASTER machine IP array
export MASTER_IPS=(192.168.58.21 192.168.58.22 192.168.58.23)
# Hostname array corresponding to the cluster MASTER IPs
export MASTER_NAMES=(master01 master02 master03)
# Cluster NODE machine IP array
export NODE_IPS=(192.168.58.24 192.168.58.25 192.168.58.26)
# Hostname array corresponding to the cluster NODE IPs
export NODE_NAMES=(node01 node02 node03)
# All cluster machine IP array
export ALL_IPS=(192.168.58.21 192.168.58.22 192.168.58.23 192.168.58.24 192.168.58.25 192.168.58.26)
# Hostname array corresponding to all cluster IPs
export ALL_NAMES=(master01 master02 master03 node01 node02 node03)
[root@master01 ~]# source environment.sh
[root@master01 ~]# chmod +x *.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp -rp /etc/hosts root@${all_ip}:/etc/hosts
scp -rp k8sinit.sh root@${all_ip}:/root/
ssh root@${all_ip} "bash /root/k8sinit.sh"
done
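After k8sinit.sh has run and the hosts have rebooted, a quick spot check that the IPVS modules are loaded and swap is off on every host might look like this (same ALL_IPS array as above):
for all_ip in ${ALL_IPS[@]}
do
  echo ">>> ${all_ip}"
  # Expect ip_vs modules listed and the swap line showing 0 used/total
  ssh root@${all_ip} "lsmod | grep ip_vs | head -3; free -m | grep -i swap"
done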
Cluster deployment - required component packages
The following packages need to be installed on every machine:
kubeadm: the command used to bootstrap the cluster;
kubelet: runs on every node in the cluster and starts pods and containers;
kubectl: the command-line tool used to talk to the cluster.
kubeadm does not install or manage kubelet or kubectl, so you must make sure their versions meet the requirements of the Kubernetes control plane installed via kubeadm. If they do not, unexpected errors or problems may occur. For details on installing these components see "Appendix 001: Introduction and usage of kubectl".
Installation
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} " yum list |grep kubelet"
done
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
ssh root@${all_ip} "yum install -y kubeadm-1.18.3-0.x86_64 kubelet-1.18.3-0.x86_64 kubectl-1.18.3-0.x86_64 --disableexcludes=kubernetes"
ssh root@${all_ip} "systemctl enable kubelet"
done
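Optionally confirm that every host ended up with the same kubeadm/kubelet version before continuing:
for all_ip in ${ALL_IPS[@]}
do
  echo ">>> ${all_ip}"
  ssh root@${all_ip} "kubeadm version -o short; kubelet --version"
done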
Installing the high-availability components
HAProxy installation
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "yum -y install gcc gcc-c++ make libnl libnl-devel
libnfnetlink-devel openssl-devel wget openssh-clients systemd-devel zlib-devel pcre-devel
libnl3-devel"
ssh root@${master_ip} "wget
http://down.linuxsb.com/softw...z"
ssh root@${master_ip} "tar -zxvf haproxy-2.1.6.tar.gz"
ssh root@${master_ip} "cd haproxy-2.1.6/ && make ARCH=x86_64 TARGET=linux-glibc
USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 PREFIX=/usr/local/haprpxy && make install PREFIX=/usr/local/haproxy"
ssh root@${master_ip} "cp /usr/local/haproxy/sbin/haproxy /usr/sbin/"
ssh root@${master_ip} "useradd -r haproxy && usermod -G haproxy haproxy"
ssh root@${master_ip} "mkdir -p /etc/haproxy && cp -r /root/haproxy-2.1.6/examples/errorfiles/ /usr/local/haproxy/"
done
Keepalived installation
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "yum -y install gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel"
ssh root@${master_ip} "wget
http://down.linuxsb.com/softw...z"
ssh root@${master_ip} "tar -zxvf keepalived-2.0.20.tar.gz"
ssh root@${master_ip} "cd keepalived-2.0.20/ && ./configure --sysconf=/etc --prefix=/usr/local/keepalived && make && make install"
done
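A quick sanity check that both binaries were built and installed on each master (paths as installed above; keepalived was configured with --prefix=/usr/local/keepalived):
for master_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${master_ip}"
  ssh root@${master_ip} "haproxy -v | head -1; /usr/local/keepalived/sbin/keepalived --version 2>&1 | head -1"
done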
Create the configuration files
[root@master01 ~]# wget http://down.linuxsb.com/hakek...
[root@master01 ~]# chmod u+x hakek8s.sh
[root@master01 ~]# vim hakek8s.sh
#!/bin/sh
# ScriptName: hakek8s.sh
# Author: xhy
# Create Date: 2020-06-08 20:00
# Modify Author: xhy
# Modify Date: 2020-06-10 18:57
# Version: v2
#*******************************#

# set variables below to create the config files, all files will create at ./config directory

# master keepalived virtual ip address
export K8SHA_VIP=192.168.58.200
# master01 ip address
export K8SHA_IP1=192.168.58.21
# master02 ip address
export K8SHA_IP2=192.168.58.22
# master03 ip address
export K8SHA_IP3=192.168.58.23
# master01 hostname
export K8SHA_HOST1=master01
# master02 hostname
export K8SHA_HOST2=master02
# master03 hostname
export K8SHA_HOST3=master03
# master01 network interface name
export K8SHA_NETINF1=ens33
# master02 network interface name
export K8SHA_NETINF2=ens33
# master03 network interface name
export K8SHA_NETINF3=ens33
# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d
# kubernetes CIDR pod subnet
export K8SHA_PODCIDR=10.10.0.0
# kubernetes CIDR svc subnet
export K8SHA_SVCCIDR=10.20.0.0

# please do not modify anything below
mkdir -p config/$K8SHA_HOST1/{keepalived,haproxy}
mkdir -p config/$K8SHA_HOST2/{keepalived,haproxy}
mkdir -p config/$K8SHA_HOST3/{keepalived,haproxy}
mkdir -p config/keepalived
mkdir -p config/haproxy
mkdir -p config/calico

# wget all files
wget -c -P config/keepalived/ http://down.linuxsb.com/keepa...
wget -c -P config/haproxy/ http://down.linuxsb.com/hapro...
wget -c -P config/haproxy/ http://down.linuxsb.com/hapro...
wget -c -P config/calico/ http://down.linuxsb.com/calic...
wget -c -P config/ http://down.linuxsb.com/kubea...
wget -c -P config/ http://down.linuxsb.com/kubea...
wget -c -P config/ http://down.linuxsb.com/kubea...

# create all kubeadm-config.yaml files
sed \
-e "s/K8SHA_HOST1/${K8SHA_HOST1}/g" \
-e "s/K8SHA_HOST2/${K8SHA_HOST2}/g" \
-e "s/K8SHA_HOST3/${K8SHA_HOST3}/g" \
-e "s/K8SHA_IP1/${K8SHA_IP1}/g" \
-e "s/K8SHA_IP2/${K8SHA_IP2}/g" \
-e "s/K8SHA_IP3/${K8SHA_IP3}/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_PODCIDR/${K8SHA_PODCIDR}/g" \
-e "s/K8SHA_SVCCIDR/${K8SHA_SVCCIDR}/g" \
config/kubeadm-config.yaml.tpl > kubeadm-config.yaml

echo "create kubeadm-config.yaml files success. kubeadm-config.yaml"

# create all keepalived files
chmod u+x config/keepalived/check_apiserver.sh
cp config/keepalived/check_apiserver.sh config/$K8SHA_HOST1/keepalived
cp config/keepalived/check_apiserver.sh config/$K8SHA_HOST2/keepalived
cp config/keepalived/check_apiserver.sh config/$K8SHA_HOST3/keepalived
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF1}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP1}/g" \
-e "s/K8SHA_KA_PRIO/102/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
config/keepalived/k8s-keepalived.conf.tpl > config/$K8SHA_HOST1/keepalived/keepalived.conf
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF2}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP2}/g" \
-e "s/K8SHA_KA_PRIO/101/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
config/keepalived/k8s-keepalived.conf.tpl > config/$K8SHA_HOST2/keepalived/keepalived.conf
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF3}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP3}/g" \
-e "s/K8SHA_KA_PRIO/100/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
config/keepalived/k8s-keepalived.conf.tpl > config/$K8SHA_HOST3/keepalived/keepalived.conf
echo "create keepalived files success. config/$K8SHA_HOST2/keepalived/"
echo "create keepalived files success. config/$K8SHA_HOST3/keepalived/"create all haproxy files
-e "s/K8SHA_IP1/$K8SHA_IP1/g" \
-e "s/K8SHA_IP2/$K8SHA_IP2/g" \
-e "s/K8SHA_IP3/$K8SHA_IP3/g" \
-e "s/K8SHA_HOST1/$K8SHA_HOST1/g" \
-e "s/K8SHA_HOST2/$K8SHA_HOST2/g" \
-e "s/K8SHA_HOST3/$K8SHA_HOST3/g" \
config/haproxy/k8s-haproxy.cfg.tpl > config/haproxy/haproxy.conf

# create calico yaml file
sed \
-e "s/K8SHA_PODCIDR/${K8SHA_PODCIDR}/g" \
config/calico/calico.yaml.tpl > config/calico/calico.yaml
echo "create haproxy files success. config/$K8SHA_HOST2/haproxy/"
echo "create haproxy files success. config/$K8SHA_HOST3/haproxy/"scp all file
scp -rp config/haproxy/haproxy.conf root@$K8SHA_HOST2:/etc/haproxy/haproxy.cfg
scp -rp config/haproxy/haproxy.conf root@$K8SHA_HOST3:/etc/haproxy/haproxy.cfg
scp -rp config/haproxy/k8s-haproxy.service root@$K8SHA_HOST1:/usr/lib/systemd/system/haproxy.service
scp -rp config/haproxy/k8s-haproxy.service root@$K8SHA_HOST2:/usr/lib/systemd/system/haproxy.service
scp -rp config/haproxy/k8s-haproxy.service root@$K8SHA_HOST3:/usr/lib/systemd/system/haproxy.service
scp -rp config/$K8SHA_HOST1/keepalived/* root@$K8SHA_HOST1:/etc/keepalived/
scp -rp config/$K8SHA_HOST2/keepalived/* root@$K8SHA_HOST2:/etc/keepalived/
scp -rp config/$K8SHA_HOST3/keepalived/* root@$K8SHA_HOST3:/etc/keepalived/

# chmod *.sh
chmod u+x config/*.sh

Run the script:
[root@master01 ~]# ./hakek8s.sh
Explanation: the above only needs to be executed on the Master. Running hakek8s.sh produces the following configuration files:
kubeadm-config.yaml: kubeadm initialization configuration file, located in the current directory
keepalived: keepalived configuration files, located in /etc/keepalived on each master node
haproxy: haproxy configuration file, located in /etc/haproxy/ on each master node
calico.yaml: calico network component deployment file, located in the config/calico directory

[root@master01 ~]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.20.0.0/16"
  podSubnet: "10.10.0.0/16"
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.18.3"
controlPlaneEndpoint: "192.168.58.200:16443"
apiServer:
  certSANs:
  timeoutForControlPlane: 4m0s
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "k8s.gcr.io"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
Note: the above only needs to be done on the master01 node. For more on the config file see
https://godoc.org/k8s.io/kube...
For more on kubeadm init configuration see:
https://pkg.go.dev/k8s.io/kub...
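Before initializing, the image list implied by this configuration can be previewed; this also confirms that the file parses and the version is what you expect:
# List the images kubeadm will need for this configuration
[root@master01 ~]# kubeadm config images list --config kubeadm-config.yaml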
Start the services
[root@master01 ~]# cat /etc/keepalived/keepalived.conf
[root@master01 ~]# cat /etc/keepalived/check_apiserver.sh    # confirm the keepalived configuration
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "systemctl start haproxy.service && systemctl enable haproxy.service"
ssh root@${master_ip} "systemctl start keepalived.service && systemctl enable keepalived.service"
ssh root@${master_ip} "systemctl status keepalived.service | grep Active"
ssh root@${master_ip} "systemctl status haproxy.service | grep Active"
done
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "ping -c1 192.168.58.200"
done
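It can also be useful to see which master currently holds the VIP and that HAProxy is listening on 16443 (interface name and ports as configured above):
for master_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${master_ip}"
  # The VIP appears on exactly one master; haproxy should listen on 16443 on all of them
  ssh root@${master_ip} "ip addr show ens33 | grep 192.168.58.200 || true; ss -tnlp | grep 16443"
done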
Note: the above only needs to be executed on master01; it starts the services on all of the nodes.
Initialize the cluster
Pull the images
[root@master01 ~]# kubeadm config images list --kubernetes-version=v1.18.3    # list the required images
[root@master01 ~]# cat config/downimage.sh    # confirm the version information
#!/bin/sh
# ScriptName: downimage.sh
# Author: xhy
# Create Date: 2020-05-29 19:55
# Modify Author: xhy
# Modify Date: 2020-06-10 19:15
# Version: v2
#*******************************#
KUBE_VERSION=v1.18.3
CALICO_VERSION=v3.14.1
CALICO_URL=calico
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.3-0
CORE_DNS_VERSION=1.6.7
GCR_URL=k8s.gcr.io
METRICS_SERVER_VERSION=v0.3.6
INGRESS_VERSION=0.32.0
CSI_PROVISIONER_VERSION=v1.4.0
CSI_NODE_DRIVER_VERSION=v1.2.0
CSI_ATTACHER_VERSION=v2.0.0
CSI_RESIZER_VERSION=v0.3.0
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
UCLOUD_URL=uhub.service.ucloud.cn/uxhy
QUAY_URL=quay.io
kubeimages=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION}
metrics-server-amd64:${METRICS_SERVER_VERSION}
)
for kubeimageName in ${kubeimages[@]} ; do
docker pull $UCLOUD_URL/$kubeimageName
docker tag $UCLOUD_URL/$kubeimageName $GCR_URL/$kubeimageName
docker rmi $UCLOUD_URL/$kubeimageName
done
calimages=(cni:${CALICO_VERSION}
pod2daemon-flexvol:${CALICO_VERSION}
node:${CALICO_VERSION}
kube-controllers:${CALICO_VERSION})
for calimageName in ${calimages[@]} ; do
docker pull $UCLOUD_URL/$calimageName
docker tag $UCLOUD_URL/$calimageName $CALICO_URL/$calimageName
docker rmi $UCLOUD_URL/$calimageName
done
ingressimages=(nginx-ingress-controller:${INGRESS_VERSION})
for ingressimageName in ${ingressimages[@]} ; do
docker pull $UCLOUD_URL/$ingressimageName
docker tag $UCLOUD_URL/$ingressimageName $QUAY_URL/kubernetes-ingress-controller/$ingressimageName
docker rmi $UCLOUD_URL/$ingressimageName
done
csiimages=(csi-provisioner:${CSI_PROVISIONER_VERSION}
csi-node-driver-registrar:${CSI_NODE_DRIVER_VERSION}
csi-attacher:${CSI_ATTACHER_VERSION}
csi-resizer:${CSI_RESIZER_VERSION}
)
for csiimageName in ${csiimages[@]} ; do
docker pull $UCLOUD_URL/$csiimageName
docker tag $UCLOUD_URL/$csiimageName $QUAY_URL/k8scsi/$csiimageName
docker rmi $UCLOUD_URL/$csiimageName
done
Note: the following only needs to be run on master01, so that all nodes pull the images automatically.
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp -p config/downimage.sh root@${all_ip}:/root/
ssh root@${all_ip} "bash downimage.sh &"
done
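The pulls run in the background, so once they finish a spot check such as the following can confirm the key images are present on each host:
for all_ip in ${ALL_IPS[@]}
do
  echo ">>> ${all_ip}"
  ssh root@${all_ip} "docker images | grep -E 'kube-proxy|calico/node' | head -5"
done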
Initialize on the master
[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs
Join the remaining nodes in turn by running the join commands printed in the kubeadm init output (one command is for additional control-plane nodes, the other for worker nodes).
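The printed commands look roughly like the sketch below; the token, discovery hash and certificate key shown are placeholders and must be replaced with the values from your own kubeadm init output:
# On master02/master03 (control-plane join; values below are placeholders)
kubeadm join 192.168.58.200:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>

# On node01/node02/node03 (worker join; values below are placeholders)
kubeadm join 192.168.58.200:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>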
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Add the environment variable
[root@master01 ~]# cat << EOF >> ~/.bashrc
export KUBECONFIG=$HOME/.kube/config
EOF
[root@localhost ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@localhost ~]# source ~/.bashrc
Install the NIC plugin
Calico is a secure L3 network and network policy provider.
Canal combines Flannel and Calico to provide networking and network policy.
Cilium is an L3 network and network policy plugin that can transparently enforce HTTP/API/L7 policies. It supports both routing and overlay/encapsulation modes.
Contiv provides configurable networking for a variety of use cases (native L3 with BGP, overlay with VXLAN, classic L2, and Cisco-SDN/ACI) and a rich policy framework. The Contiv project is fully open source, and its installer offers both kubeadm-based and non-kubeadm-based options.
Flannel is a layer-3 solution for the pod network and supports the NetworkPolicy API; kubeadm add-on installation details can be found there.
Weave Net provides networking and network policy that keep working on both sides of a network partition and does not require an external database.
CNI-Genie lets Kubernetes seamlessly connect to one of several CNI plugins, such as Flannel, Calico, Canal, Romana, or Weave.
Note: this deployment uses the Calico plugin.
Set labels
[root@localhost ~]# kubectl taint nodes --all node-role.kubernetes.io/master-    # allow workloads to be scheduled on the masters
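To confirm the taint was actually removed from the control-plane nodes:
# Should show no master NoSchedule taint (Taints: <none>) on each master
kubectl describe node master01 | grep -i taint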
Deploy calico
[root@master01 ~]# vim config/calico/calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.10.0.0/16"            # check that this matches the pod subnet
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"         # check the interface name used between nodes
[root@master01 ~]# kubectl apply -f config/calico/calico.yaml
[root@master01 ~]# kubectl get pods --all-namespaces -o wide    # check the deployment
[root@master01 ~]# kubectl get nodes
Modify the node port range
[root@master01 ~]# vi /etc/kubernetes/manifests/kube-apiserver.yaml
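The document does not state which port range was chosen; as an illustration only, the NodePort range is extended by adding a flag like the following to the kube-apiserver static pod manifest (the 1-65535 range here is an example value):
# In /etc/kubernetes/manifests/kube-apiserver.yaml, add to the kube-apiserver command arguments, e.g.:
#     - --service-node-port-range=1-65535
# kubelet notices the change to the static pod manifest and restarts kube-apiserver automatically.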
All of the above is executed on the master.
Commands for cleaning up the cluster (partial)
[root@node01 ~]# kubeadm reset
[root@node01 ~]# ifconfig cni0 down
[root@node01 ~]# ip link delete cni0
[root@node01 ~]# ifconfig flannel.1 down
[root@node01 ~]# ip link delete flannel.1
[root@node01 ~]# rm -rf /var/lib/cni/
Confirm and verify
[root@master01 ~]# kubectl get nodes                              # node status
[root@master01 ~]# kubectl get cs                                 # component status
[root@master01 ~]# kubectl get serviceaccount                     # service accounts
[root@master01 ~]# kubectl cluster-info                           # cluster info
[root@master01 ~]# kubectl get pods -n kube-system -o wide        # status of all system services
Introduction to Metrics
Early versions of Kubernetes relied on Heapster for performance data collection and monitoring. Starting with version 1.8, Kubernetes exposes performance data through the standardized Metrics API, and from version 1.10 Heapster is replaced by Metrics Server. In the new Kubernetes monitoring architecture, Metrics Server provides the core metrics, including CPU and memory usage of nodes and pods; monitoring of other custom metrics is handled by components such as Prometheus.
Enable the aggregation layer
For background on the aggregation layer see: https://blog.csdn.net/liukuan... A cluster deployed with kubeadm has it enabled by default.
Get the deployment file
[root@master01 ~]# mkdir metrics
[root@master01 ~]# cd metrics/
[root@master01 metrics]# wget https://github.com/kubernetes...
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  replicas: 3                      # adjust to the size of the cluster
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:                      # add hostNetwork: true under spec in the template
      name: metrics-server
      labels:
        k8s-app: metrics-server
    ...
        imagePullPolicy: IfNotPresent
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP    # add this argument
Deploy
[root@master01 metrics]# kubectl apply -f components.yaml
[root@master01 metrics]# kubectl -n kube-system get pods -l k8s-app=metrics-server
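Once the pods are Running, the aggregated metrics API should report as available; a quick check before using kubectl top:
# The metrics.k8s.io APIService should show AVAILABLE=True
kubectl get apiservice v1beta1.metrics.k8s.io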
View resource monitoring
[root@master01 ~]# kubectl top nodes
[root@master01 ~]# kubectl top pods --all-namespaces
Dashboard deployment
[root@master01 ~]# wget https://raw.githubusercontent...
Modify recommended.yaml:
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # add
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30003       # add
---
Because many browsers cannot use the automatically generated certificate, we create our own. Comment out the kubernetes-dashboard-certs Secret declaration:
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
Create the certificates
[root@master01 ~]# mkdir dashboard-certs
[root@master01 ~]# cd dashboard-certs/
Create the namespace
[root@master01 dashboard-certs]# kubectl create namespace kubernetes-dashboard
Create the key file
[root@master01 dashboard-certs]# openssl genrsa -out dashboard.key 2048
Create the certificate signing request
[root@master01 dashboard-certs]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
Self-sign the certificate
[root@master01 dashboard-certs]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
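To double-check the subject and validity period of the certificate that was just self-signed:
# Inspect the self-signed certificate
[root@master01 dashboard-certs]# openssl x509 -in dashboard.crt -noout -subject -dates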
Create the kubernetes-dashboard-certs object
[root@master01 dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
Install the dashboard
Install:
[root@master01 ~]# kubectl create -f recommended.yaml
Check the result:
[root@master01 ~]# kubectl get service -n kubernetes-dashboard -o wide
Create the dashboard administrator
Create a new yaml file:
[root@master01 ~]# vim dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
Assign permissions to the user:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
After saving, apply it:
[root@master01 ~]# kubectl apply -f dashboard-admin.yaml
View and copy the user token:
[root@master01 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Log in to the Dashboard
https://192.168.58.200:30003/
Choose "Token" and paste the token generated above. Note: the IP can be the external IP of any node.
Install Kuboard
https://kuboard.cn/install/in...
[root@work04 ~]# kubectl apply -f https://kuboard.cn/install-sc...
[root@work04 ~]# kubectl apply -f https://addons.kuboard.cn/met...
Check the running status
kubectl get pods -l k8s.kuboard.cn/name=kuboard -n kube-system
Get the token (this token has ClusterAdmin permissions and can perform any operation):
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)
Access Kuboard
You can access Kuboard through its NodePort. The Kuboard Service is exposed as a NodePort on port 32567, so you can reach it at:
http://<IP of any worker node>:32567/
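If the page does not load, checking the Service and its NodePort is a reasonable first step (this assumes the default install created a Service named kuboard in kube-system):
# Confirm the NodePort (expected 32567 with the default manifest)
kubectl -n kube-system get svc kuboard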