Learning k8s (2): Building a Highly Available k8s Cluster

Setting up k8s is genuinely hard. This guide came out of combining videos, blog posts, and the official docs, and it took me a whole May Day holiday to get working. Don't lose heart when you hit problems; a k8s build runs into a lot of pitfalls along the way, but if you patiently work through them one by one you will get there.

Reference blog: https://www.cnblogs.com/ssgeek/p/11942062.html

  • HA scheme: the usual options are keepalived+haproxy or keepalived+Nginx. I chose keepalived+haproxy: keepalived monitors master availability and handles failover, while haproxy load-balances across the masters. Strictly speaking, haproxy should run on a few dedicated nodes to balance the masters, with keepalived monitoring the haproxy instances and failing over between them; since my machines can't spare the extra nodes, I keep it simple and run haproxy directly on the masters.

Hardware environment

We build three master nodes, two worker nodes, and one virtual IP: 192.168.200.128 (master), 192.168.200.129 (master), 192.168.200.130 (master), 192.168.200.131 (worker), 192.168.200.132 (worker), and 192.168.200.16 (VIP).


Each VM needs at least two CPU cores and 2 GB of RAM.

Environment setup (all nodes)

  • Configure hosts
vim /etc/hosts
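A hosts table matching the node list above might look like the following; the hostnames are the ones set in the next step (129 through 132 are assumed to follow the same naming pattern), so adjust to your own:

```
192.168.200.128 master128
192.168.200.129 master129
192.168.200.130 master130
192.168.200.131 master131
192.168.200.132 master132
```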
  • Set each VM's hostname; every machine in the cluster must have a unique hostname
hostnamectl set-hostname master128
  • System prep
    Disable the firewall and swap
yum update
systemctl stop firewalld && systemctl disable firewalld  # disable the firewall
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp  # install dependencies
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab 
setenforce 0
service dnsmasq stop && systemctl disable dnsmasq
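The sed one-liner above comments out every swap entry in /etc/fstab so swap stays off after a reboot. Its effect can be checked safely on a throwaway copy (the sample fstab line below is hypothetical):

```shell
# Try the swap comment-out against a sample fstab line (hypothetical entry)
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.sample
sed -i '/swap/s/^\(.*\)$/#\1/g' /tmp/fstab.sample
cat /tmp/fstab.sample   # the line now starts with '#', so it is ignored at boot
```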

Enable IP forwarding

cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF


modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf

Install Docker (all nodes)

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install -y yum-utils device-mapper-persistent-data lvm2  # install dependencies
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo  # pull docker from the stable repo
yum list docker-ce --showduplicates | sort -r  # optional: list the available docker versions first
yum install docker-ce docker-ce-cli containerd.io -y  # install Docker
systemctl start docker && systemctl enable docker  # start Docker and enable it at boot

Add a registry mirror and put Docker's data directory (/docker-data) on the largest disk; run df -hl to see where the biggest disk is mounted.

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://opmd7r0m.mirror.aliyuncs.com"],
   "exec-opts":["native.cgroupdriver=systemd"],
   "graph":"/docker-data"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
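Note that on newer Docker releases the `graph` key is deprecated in favour of `data-root`; an equivalent daemon.json (same mirror and cgroup driver assumed) would be:

```json
{
  "registry-mirrors": ["https://opmd7r0m.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/docker-data"
}
```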

Install and configure keepalived (all masters)

Install and configure keepalived on the master nodes: 128 is the keepalived MASTER, while 129 and 130 are BACKUPs. The virtual IP is 192.168.200.16. keepalived provides automatic failover behind a single externally visible address, the virtual IP: the MASTER serves traffic at first, the BACKUPs periodically check that it is alive, and if the MASTER fails a BACKUP takes over.

yum install -y keepalived

Edit keepalived.conf

vim /etc/keepalived/keepalived.conf

The configuration is as follows:

# MASTER configuration on master128
global_defs {
   router_id keepalive-master
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"  # probe the apiserver; on failure the weight drops by 2
    interval 3   # run check-apiserver.sh every 3 seconds
    weight -2    # subtract 2 from the priority on failure
}

vrrp_instance VI-kube-master {
    state MASTER            # this node starts as MASTER
    interface ens33         # network interface; find yours with ip addr
    virtual_router_id 51    # must be identical on MASTER and BACKUPs in the same virtual router
    priority 250            # the highest priority wins the MASTER role
    dont_track_primary      # script failures do not stop keepalived itself
    advert_int 3            # heartbeat interval
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16      # the virtual IP
    }
    track_script {
        check_apiserver
    }
}

# BACKUP configuration on master129 (the authentication block must match the MASTER's)
global_defs {
   router_id keepalive-backup1
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -2
}

vrrp_instance VI-kube-backup {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 200
    dont_track_primary
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
    }
    track_script {
        check_apiserver
    }
}

# BACKUP configuration on master130
global_defs {
   router_id keepalive-backup2
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -2
}

vrrp_instance VI-kube-backup {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 150
    dont_track_primary
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
    }
    track_script {
        check_apiserver
    }
}

Below is check-apiserver.sh, which probes whether the apiserver (and the virtual IP) is reachable. Save it as /etc/keepalived/check-apiserver.sh and make it executable with chmod +x, otherwise the vrrp_script check cannot run.

#!/bin/sh
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}
curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error Get https://localhost:6443/"
if ip addr | grep -q 192.168.200.16; then
   curl --silent --max-time 2 --insecure https://192.168.200.16:6443/ -o /dev/null || errorExit "Error Get https://192.168.200.16:6443/"
fi
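The `errorExit` helper simply prints its arguments to stderr and exits non-zero, which tells keepalived the check failed. A quick way to see the pattern in isolation (run in a subshell so the `exit 1` doesn't kill your session):

```shell
# Demo of the errorExit pattern: print to stderr, then exit non-zero
errorExit() { echo "*** $*" 1>&2; exit 1; }
( errorExit "simulated apiserver probe failure" ) 2> /tmp/check-err.txt || true
cat /tmp/check-err.txt   # -> *** simulated apiserver probe failure
```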

Start keepalived

systemctl enable keepalived && service keepalived start  # enable at boot and start keepalived

Check keepalived's status and the node's IPs. While the MASTER is healthy, the BACKUPs will not show the virtual IP; only the MASTER holds it, since keepalived guarantees that exactly one node serves the virtual IP at a time.

service keepalived status
ip a

Install and configure haproxy (all masters)

Install haproxy

yum install -y haproxy

Configure /etc/haproxy/haproxy.cfg:

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend  main *:16443
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    use_backend static          if url_static
    default_backend             app

backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

backend app
    balance     roundrobin
    server  master128 192.168.200.128:6443 check
    server  master129 192.168.200.129:6443 check
    server  master130 192.168.200.130:6443 check
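Because the apiserver speaks TLS on 6443, many HA guides proxy it in TCP (passthrough) mode rather than the http mode used above, so that haproxy does not have to terminate the apiserver's TLS. A hedged alternative frontend/backend pair, keeping the same node addresses:

```
frontend k8s-api
    bind *:16443
    mode tcp
    option tcplog
    default_backend k8s-api

backend k8s-api
    mode tcp
    option tcp-check
    balance roundrobin
    server master128 192.168.200.128:6443 check
    server master129 192.168.200.129:6443 check
    server master130 192.168.200.130:6443 check
```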

Enable haproxy at boot

systemctl enable haproxy &&  systemctl start haproxy

Download, install, and configure k8s (all nodes)

  • Install k8s
    Configure the Alibaba yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install k8s. Note that from v1.20 on, Docker (dockershim) is no longer the recommended runtime; we stay on 1.16.3 here.

yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3

Enable kubelet at boot and start it

systemctl enable kubelet && systemctl start kubelet

Command completion

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Initialize kubeadm (master128)

kubeadm configuration

Installing kubernetes mostly means pulling its component images, and kubeadm already bundles references to all the basic images kubernetes needs. Due to network restrictions in China those images cannot be pulled directly, so we simply point kubeadm at Alibaba Cloud's mirror instead.

The following is done on master128.
First dump the default configuration as a reference:

kubeadm config print init-defaults

My configuration file is called kubeadm-conf.yaml; I derived the following from the defaults:

apiServer:
  certSANs:
  - vip16
  - master128
  - master129
  - master130
  - 192.168.200.16
  - 192.168.200.128
  - 192.168.200.129
  - 192.168.200.130
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.200.16:16443"
controllerManager: {}
dns: 
  type: CoreDNS
etcd:
  local:    
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # use the Alibaba mirror
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking: 
  dnsDomain: cluster.local  
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}

First dry-run the file to make sure it is valid:

kubeadm init --config ~/kubeadm-conf.yaml --dry-run

Run the initialization:

kubeadm init --config ~/kubeadm-conf.yaml

If an error occurs partway through, you can reset kubeadm with:

kubeadm reset 

Because of the firewall, pulling the coredns image may fail; you can pull it with docker instead.

Skip the next two commands if the download works normally.

docker pull coredns/coredns:1.6.2  # pull the image
docker tag coredns/coredns:1.6.2 registry.aliyuncs.com/google_containers/coredns:1.6.2  # retag it

kubeadm init prints two join commands: the first is for joining additional master nodes, the second for joining workers.



Following the printed instructions, run:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Test

kubectl get node
kubectl get pods --all-namespaces

Install the cluster network (all nodes)

The download domain is blocked, so add the following to /etc/hosts:

199.232.28.133  raw.githubusercontent.com

Download kube-flannel.yml:

curl  -o  kube-flannel.yml  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The image may fail to pull from inside China, so pull it with docker before applying the manifest:

docker pull quay.io/coreos/flannel:v0.11.0-amd64

Then, on a master node, run:

kubectl apply -f kube-flannel.yml

Check

kubectl get pods -n kube-system
kubectl get node

Join the other master nodes (master129, master130)

For the other masters to communicate with the cluster, copy the keys and certificates generated on master128:

scp -r [email protected]:/etc/kubernetes/pki .  
scp -r [email protected]:/etc/kubernetes/admin.conf .

Then, in the copied directory destined for master129 and master130, delete the files that each node must regenerate itself:

cd /etc/kubernetes/pki
rm -rf apiserver* front-proxy-client.* 
cd /etc/kubernetes/pki/etcd/
rm -rf healthcheck-client.* peer.* server.*

Distribute the pruned files to the other two master nodes (into /etc/kubernetes/).

Join master129 and master130, now that the copied files are in place.
Use the first join command that kubeadm init printed on master128:
kubeadm join 192.168.200.16:16443 --token tex1lz.58kdm6alx556wjmq \
        --discovery-token-ca-cert-hash sha256:ada43a6f57d29cdbc9915054975a1af961dae2bb5408509752d79463bc10b5b4 \
        --control-plane 

Then run:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Check that the join succeeded:

kubectl get nodes

Join the worker nodes (all workers)

kubeadm join 192.168.200.16:16443 --token hhjmtp.2cjjir23frxovz4p \
    --discovery-token-ca-cert-hash sha256:ea552b566f04725584c53e55b124b50a24e29aa18b10deaef078cfd1e60fefd5

Check (on a master):

kubectl get nodes

Install the dashboard (master128)

Download the manifest

wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml

Edit the manifest

vim recommended.yaml 

Add the following in the appropriate place.
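Since the dashboard is reached on node port 30001 later, the usual edit is to turn the kubernetes-dashboard Service into a NodePort; a sketch of that change:

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort           # added: expose the dashboard outside the cluster
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001      # added: matches the port used to log in below
  selector:
    k8s-app: kubernetes-dashboard
```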

After editing, apply it:

kubectl apply -f recommended.yaml 

Check the installation status:

kubectl get pods -n kubernetes-dashboard

Create the file dashboard-adminuser.yaml:

vim dashboard-adminuser.yaml

Add the following, which creates a service account and binds it to the built-in cluster-admin role:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply it:

kubectl apply -f dashboard-adminuser.yaml

Get a login token with:

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
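The inner `grep | awk` just picks the secret name out of the first column of `kubectl get secret`'s output; its behaviour on canned output (the sample names below are made up):

```shell
# Demonstrate the grep/awk secret-name extraction on sample output
cat > /tmp/secrets.txt << 'EOF'
NAME                     TYPE                                  DATA   AGE
admin-user-token-abc12   kubernetes.io/service-account-token   3      5m
default-token-xyz99      kubernetes.io/service-account-token   3      10m
EOF
grep admin-user /tmp/secrets.txt | awk '{print $1}'   # -> admin-user-token-abc12
```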

Open port 30001 in a browser and log in with the token. Use Firefox; the Microsoft and Google browsers reject the dashboard's self-signed certificate. In my case the URL is
https://192.168.200.128:30001; paste the token and you are in.

Install harbor in HA mode (master128, master130, master131)

harbor is a private Docker registry. With harbor you can manage the images the k8s cluster pulls, keep them in one place to save space, and use its GUI to manage cluster images efficiently.
Download the offline installer from GitHub: https://github.com/goharbor/harbor/releases

Here we install harbor on master128 plus two more nodes, master130 and master131. After downloading, extract the archive and edit the configuration: set hostname to the host's IP, change the port (avoid 80, since some later applications need it), and comment out the https section.
Put harbor's storage location on the largest disk.

Install docker-compose, then run the installer:

curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose

cd harbor

sh install.sh

harbor is now installed on master128.



Now install it on master130 and master131; the steps and configuration are the same.

Pull Nginx

docker pull nginx:1.20.0

Nginx configuration

vim /usr/nginx/nginx.conf

user nginx;
worker_processes  1;  # worker count; larger values support more concurrency, ideally one per CPU
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
# events block
events {
    worker_connections  1024;  # max connections per worker (default 1024)
}
stream{
    upstream harbor {
        server 192.168.200.130:8081;
    }
    server{
      listen       8082;
      proxy_pass   harbor;
      proxy_timeout  300s;
      proxy_connect_timeout 5s;
    }
}
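With harbor running on several nodes, the upstream block would normally list every instance so Nginx can balance across them and fail over; a sketch, assuming each harbor listens on 8081 on the nodes chosen above:

```
    upstream harbor {
        server 192.168.200.128:8081;
        server 192.168.200.130:8081;
        server 192.168.200.131:8081;
    }
```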

Docker run script

vim /usr/nginx/docker-nginx.sh 

#!/bin/bash
# chkconfig: 2345 85 15
# description: auto_run

docker stop harbornginx
docker rm harbornginx
docker run -idt -p 8082:8082 --name harbornginx  -v /usr/nginx/nginx.conf:/etc/nginx/nginx.conf nginx:1.20.0

Register it to run at boot:

chmod +x docker-nginx.sh
cp ./docker-nginx.sh /etc/init.d
chkconfig --add docker-nginx.sh
chkconfig docker-nginx.sh on
service docker-nginx.sh start

Verify

netstat -anp|grep 8082

Since harbor is now fronted by Nginx, which intercepts requests on port 8082, requests should go to port 8082 from now on.

Add the following on every node; it is needed to log in to harbor over plain HTTP:

vim /etc/docker/daemon.json 

{
   "insecure-registries": ["master128:8082"]
}

systemctl restart docker  # restart docker

# log in to the harbor registry, then enter the account and password
docker login master128:8082 

Upload an image. In the UI I created a project called k8s; pushing an image into that project works like this:

Push the local nginx:1.20.0 image to harbor. First retag it; the tag must follow the format registry-host/project/image:version:

# retag the image
docker tag nginx:1.20.0 master128:8082/k8s/nginx:1.20.0

# push it
docker push master128:8082/k8s/nginx:1.20.0
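The tag format rule above (registry-host/project/image:version) can be seen by composing the reference from its parts:

```shell
# Build a harbor image reference from registry host, project, image and version
REGISTRY="master128:8082"; PROJECT="k8s"; IMAGE="nginx"; VERSION="1.20.0"
TAG="$REGISTRY/$PROJECT/$IMAGE:$VERSION"
echo "$TAG"   # -> master128:8082/k8s/nginx:1.20.0
```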

Start harbor at boot

vim /etc/systemd/system/harbor.service

Add the following. /usr/local/bin/docker-compose is where my docker-compose is installed (check yours with which docker-compose), and /usr/local/bin/harbor/docker-compose.yml is where my docker-compose.yml lives (find yours with locate docker-compose.yml).

[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/local/bin/docker-compose -f /usr/local/bin/harbor/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f /usr/local/bin/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target

Enable it at boot:

chmod +x /etc/systemd/system/harbor.service
systemctl enable harbor.service && systemctl start harbor.service && systemctl status harbor.service

Install and configure ingress-nginx (master128)

An Ingress is a set of rules, keyed on DNS name (host) or URL path, that route requests to a given Service resource; it is how request traffic from outside the cluster is forwarded to services published inside it.

Official docs:
https://kubernetes.github.io/ingress-nginx/deploy/
Download the manifest

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.32.0/deploy/static/provider/cloud/deploy.yaml

I want the ingress-nginx controller to run on one of the worker nodes; here I use master131.

# first, label master131
kubectl label node master131 app=master131-ingress

Edit the downloaded deploy.yaml: find the controller Deployment and add the following.
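A typical version of that edit, under the controller Deployment's pod template spec (the nodeSelector uses the label applied above; treat this as a sketch):

```yaml
spec:
  template:
    spec:
      hostNetwork: true          # added: bind directly to the node's ports
      nodeSelector:
        app: master131-ingress   # added: pin the controller to master131
```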


With hostNetwork: true in a Pod's spec, the application running in the Pod binds directly to the node's ports, so any other host on the node's network can reach the application through that port.

Apply the manifest

kubectl apply -f deploy.yaml

Check whether everything in the ingress-nginx namespace started successfully:

kubectl get all -n ingress-nginx

Some services may not have started because their images have not been downloaded; grep the manifest to see which images are needed:

grep image deploy.yaml

Pull the images:

docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
docker pull jettech/kube-webhook-certgen:v1.2.0

# verify: are the pods under ingress-nginx running now?
kubectl get pods -n ingress-nginx

Push the images to harbor so the other servers can use them:

docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0 master128:8082/k8s/nginx-ingress-controller:0.32.0

docker tag jettech/kube-webhook-certgen:v1.2.0 master128:8082/k8s/kube-webhook-certgen:v1.2.0

docker push master128:8082/k8s/nginx-ingress-controller:0.32.0
docker push master128:8082/k8s/kube-webhook-certgen:v1.2.0

On master130 and master131, pull the images from harbor (remember to log in first):

docker login master128:8082

docker pull  master128:8082/k8s/nginx-ingress-controller:0.32.0
docker pull  master128:8082/k8s/kube-webhook-certgen:v1.2.0

docker tag master128:8082/k8s/nginx-ingress-controller:0.32.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
docker tag master128:8082/k8s/kube-webhook-certgen:v1.2.0 jettech/kube-webhook-certgen:v1.2.0
