Building a Highly Available, Load-Balanced Kubernetes Cluster with kubeadm

Keywords: k8s (Kubernetes), docker, kubelet, kubeadm, kubectl

Environment

  • Installation approach

    • The k8s components are installed as container images
    • kubelet, kubeadm and kubectl are installed via yum; kubeadm initializes the cluster and joins the master and worker nodes
    • keepalived + HAProxy provide the load balancing
  • Host environment

IP             Node                 Installed services
172.16.0.105   master-1             docker-ce 19.03.5, etcd 3.3.12, kubernetes 1.15.1
172.16.0.106   master-2             docker-ce 19.03.5, etcd 3.3.12, kubernetes 1.15.1
172.16.0.136   master-3             docker-ce 19.03.5, etcd 3.3.12, kubernetes 1.15.1
172.16.0.109   k8s-node-1           kubernetes 1.15.1
172.16.0.110   k8s-node-2           kubernetes 1.15.1

IP             Node                 Installed services
172.16.0.129   keepalived+haproxy   keepalived 2.0.18, haproxy 2.0.7
172.16.0.130   keepalived+haproxy   keepalived 2.0.18, haproxy 2.0.7

VIP            Purpose
172.16.0.202   Proxy for the cluster's kube-apiserver

Deployment user: app

Installation Steps

Installing keepalived + HAProxy — run on the keepalived + haproxy nodes

1. Install keepalived + haproxy (via yum here; optionally add a newer repo first to install a later version)
    sudo yum install -y  keepalived  haproxy
2. Configure keepalived (HA)

# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
 
vrrp_script check_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        interval 3
        weight -20
}
 
vrrp_instance K8S {
    state BACKUP
    interface ens192    # check the NIC name with: ip addr
    virtual_router_id 44
    priority 200        # 200 on the first LB node, 190 on the second (and so on); everything else identical
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass kubernetes
    }
    virtual_ipaddress {
        172.16.0.202
 
    }
    track_script {
        check_haproxy
   }
 
}
Health-check script: sudo vim /etc/keepalived/check_haproxy.sh
#!/bin/bash
# If haproxy is listening, report healthy; otherwise try to restart it,
# and if it still is not running, stop keepalived so the VIP fails over.
active_status=`netstat -lntp|grep haproxy|wc -l`
if [ $active_status -gt 0 ]; then
    exit 0
else
    /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
    sleep 2
    if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
        systemctl stop keepalived
    fi
fi
Grant it execute permission: chmod u+x /etc/keepalived/check_haproxy.sh

3. Configure haproxy: sudo vim /etc/haproxy/haproxy.cfg
global
 
    log         127.0.0.1 local2
 
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        app
    group       app
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
frontend k8s_apiserver
  mode tcp
  bind *:6443
  default_backend k8s-master

backend k8s-master
    mode        tcp
    balance     roundrobin
    server  k8s-master-1  172.16.0.105:6443 check     # the three master nodes
    server  k8s-master-2  172.16.0.106:6443 check
    server  k8s-master-3  172.16.0.136:6443 check
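
Before starting haproxy you can validate the file (-c only parses the config, it does not start the proxy):
    sudo haproxy -c -f /etc/haproxy/haproxy.cfg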
    
4. Redirect the keepalived and haproxy logs
    - keepalived:
        sudo vim /etc/sysconfig/keepalived
        change KEEPALIVED_OPTIONS="-D" to
        KEEPALIVED_OPTIONS="-D -d -S 4"
    - sudo vim /etc/rsyslog.conf and append:
        local2.*                                /data/logs/haproxy.log
        local4.*                                /data/logs/keepalived.log
    - Create the log directory: mkdir -p /data/logs
    - Restart rsyslog: sudo systemctl restart rsyslog
    
5. Start keepalived and haproxy
    - keepalived: sudo systemctl start keepalived
    - haproxy: sudo haproxy -f /etc/haproxy/haproxy.cfg

6. Verify: watch the keepalived log, stop haproxy on the MASTER node, and check that the VIP fails over to the backup
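
A minimal failover test, assuming the ens192 interface and the log paths configured above:
    # on the higher-priority node, confirm it holds the VIP
    ip addr show ens192 | grep 172.16.0.202
    # stop keepalived there to force a failover
    sudo systemctl stop keepalived
    # on the other node, the VIP should appear within a few advert intervals
    ip addr show ens192 | grep 172.16.0.202
    tail -f /data/logs/keepalived.log
    # restore the first node afterwards
    sudo systemctl start keepalived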

Building the k8s Cluster

1. Environment preparation — run on both master and node hosts unless noted otherwise

- Stop the firewall: sudo systemctl stop firewalld (CentOS 7) / sudo service iptables stop (CentOS 6)
- Turn off swap: sudo swapoff -a

- Set the kernel parameters directly:
vim /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_port_range = 10000 65000
fs.file-max = 2000000
vm.swappiness = 0

- Apply them
sysctl -p
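
Note: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded; if sysctl -p reports "No such file or directory" for them, load the module first (this step is not in the original write-up):
    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf   # persist across reboots
    sudo sysctl -p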

- Create the required directories
sudo mkdir -p /opt/kubernetes/{cfg,ssl,bin}
sudo chown app:app -R /opt/kubernetes   # adjust the owner to your actual deployment user

- Add the hosts to /etc/hosts
172.16.0.105    master-1
172.16.0.106    master-2
172.16.0.136    master-3
172.16.0.109    k8s-node-1
172.16.0.110    k8s-node-2

- Install docker-ce
Install dependencies: sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker package repo: sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Refresh the yum package index: sudo yum makecache fast
Install Docker CE: sudo yum install -y docker-ce
Use a domestic mirror to speed up image pulls: sudo curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://04be47cf.m.daocloud.io
Start it: sudo systemctl start docker
         sudo systemctl enable docker
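
A quick smoke test that the daemon works (pulls a tiny image through the mirror configured above):
    sudo docker version
    sudo docker run --rm hello-world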
      
- Download cfssl — on master-1
cd /opt/kubernetes/bin
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /opt/kubernetes/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /opt/kubernetes/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /opt/kubernetes/bin/cfssl-certinfo
chmod u+x ./*

- Add environment variables — on master-1
echo 'export PATH=$PATH:/opt/kubernetes/bin' | sudo tee -a /etc/profile
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> ~/.bash_profile
. /etc/profile
. ~/.bash_profile
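
Confirm the binaries resolve from the new PATH:
    which cfssl cfssljson cfssl-certinfo
    cfssl version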

2. Create the CA and etcd certificates

cd /opt/kubernetes/ssl
vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.16.0.105",
    "172.16.0.106",
    "172.16.0.136",
    "172.16.0.202"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

- Generate the keys and certificates
cd /opt/kubernetes/ssl
cfssl gencert -initca ca-csr.json | cfssljson -bare ca    # generates the CA cert/key
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd   # generates the etcd cert/key
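
Optionally inspect the issued certificate — the hosts from etcd-csr.json should appear in the SAN list:
    cfssl-certinfo -cert etcd.pem
    openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'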
  
- Distribute the generated certificates to the other master nodes (copy into /opt/kubernetes/, since the ssl directory already exists there)
scp -r /opt/kubernetes/ssl [email protected]:/opt/kubernetes/
scp -r /opt/kubernetes/ssl [email protected]:/opt/kubernetes/

3. Install the etcd cluster

- Install etcd — on all three master nodes
    Unpack the release tarball under /data/package
    cd /data/package/etcd-v3.3.12-linux-amd64
    cp -a etcd* /opt/kubernetes/bin

- Write the etcd config file — on all three masters: sudo vim /opt/kubernetes/cfg/etcd.conf
  (the values below are for master-1; adjust ETCD_NAME and the IPs to match each node)
#[member]
ETCD_NAME="etcd-node-1"
ETCD_DATA_DIR="/var/lib/etcd/"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.16.0.105:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.0.105:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.0.105:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node-1=https://172.16.0.105:2380,etcd-node-2=https://172.16.0.106:2380,etcd-node-3=https://172.16.0.136:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.0.105:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

- Create the etcd systemd unit — on all three masters
sudo vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos/etcd
Conflicts=etcd.service
Conflicts=etcd2.service

[Service]
Type=notify
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=0
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"

[Install]
WantedBy=multi-user.target

- Start the etcd cluster
[app@master01~]$ sudo mkdir -p /var/lib/etcd/default.etcd
[app@master01~]$ sudo systemctl daemon-reload
[app@master01~]$ sudo systemctl start etcd
[app@master01~]$ sudo systemctl enable etcd

- Verify the cluster
etcdctl --endpoints=https://172.16.0.105:2379 \
 --ca-file=/opt/kubernetes/ssl/ca.pem \
 --cert-file=/opt/kubernetes/ssl/etcd.pem \
 --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
Output like the following indicates a healthy cluster:
member 62433e04590b6324 is healthy: got healthy result from https://172.16.0.106:2379
member 775c242b93af59c7 is healthy: got healthy result from https://172.16.0.105:2379
member 9359be5f6c73cdb8 is healthy: got healthy result from https://172.16.0.136:2379
cluster is healthy
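
The member list can be checked with the same TLS flags:
etcdctl --endpoints=https://172.16.0.105:2379 \
 --ca-file=/opt/kubernetes/ssl/ca.pem \
 --cert-file=/opt/kubernetes/ssl/etcd.pem \
 --key-file=/opt/kubernetes/ssl/etcd-key.pem member list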

4. Install the Kubernetes master nodes

  • Install kubelet, kubeadm, kubectl
Configure the Aliyun package repo:
sudo vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
sudo yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1

  • Download and load the required images — run on the masters; see https://www.cnblogs.com/baylorqu/p/10754924.html for a pull-it-yourself walkthrough
- Here I load images someone else already downloaded: https://pan.baidu.com/s/12T4Su-KxAlq-qYXD7zOCeQ#list/path=%2F  extraction code: z6dg
- ll /data/k8s-images
total 617528
-rw-r--r-- 1 app app  40542720 Dec 23 15:01 coredns_1.3.1.tar
-rw-r--r-- 1 app app  55390720 Dec 23 15:09 flannel_v0.11.0-amd64.tar
-rw-r--r-- 1 app app 208394752 Dec 23 15:01 kube-apiserver_v1.15.1.tar
-rw-r--r-- 1 app app 160290304 Dec 23 15:01 kube-controller-manager_v1.15.1.tar
-rw-rw-r-- 1 app app  84282368 Dec 23 15:42 kube-proxy_v1.15.1.tar
-rw-r--r-- 1 app app  82675200 Dec 23 15:00 kube-scheduler_v1.15.1.tar
-rw-rw-r-- 1 app app    754176 Dec 23 15:42 pause_3.1.tar
Load the images: for i in *;do sudo docker load < $i;done
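Verify the images loaded with the tags kubeadm expects for v1.15.1:
    sudo docker images | grep -E 'k8s.gcr.io|flannel'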
  • Download and adjust the kubeadm-conf.yaml and kube-flannel.yml config files — on master-1; both files are also available from the network drive above
cd /opt/kubernetes/cfg
vim kubeadm-conf.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
  - 172.16.0.202
  - 172.16.0.105
  - 172.16.0.106
  - 172.16.0.136
  - "master-1"
  - "master-2"
  - "master-3"
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes

controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - https://172.16.0.105:2379
    - https://172.16.0.106:2379
    - https://172.16.0.136:2379
    caFile: /opt/kubernetes/ssl/ca.pem
    certFile: /opt/kubernetes/ssl/etcd.pem
    keyFile: /opt/kubernetes/ssl/etcd-key.pem
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
controlPlaneEndpoint: "172.16.0.202:6443"
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.92.0.0/16
  podSubnet: 10.2.0.0/16
scheduler: {}
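
With the config written, you can list the images kubeadm will need and compare them against the ones loaded earlier:
    kubeadm config images list --config kubeadm-conf.yaml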

vim kube-flannel.yml
Change the IP range after "Network" to the pod subnet (podSubnet) defined in the previous config file:
net-conf.json: |
    {
      "Network": "10.2.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
  • Initialize the first master
sudo kubeadm init --config kubeadm-conf.yaml --ignore-preflight-errors=swap
On success the output includes:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm join 172.16.0.202:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f40bed66ee0ddf93157f1c3f98d7aa9378c6d93ec1c5b4177d9e616bfb43534b \
    --control-plane         # run this to join an additional master
kubeadm join 172.16.0.202:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f40bed66ee0ddf93157f1c3f98d7aa9378c6d93ec1c5b4177d9e616bfb43534b      # run this to join a worker node
    

  • Run the following on master-1
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Deploy the flannel network
cd /opt/kubernetes/cfg
kubectl create -f kube-flannel.yml
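Shortly afterwards the flannel pods should be Running on every node (the standard manifest labels them app=flannel):
kubectl get pods -n kube-system -l app=flannel -o wide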
  • Add master-2 and master-3
- Copy /etc/kubernetes/pki from master-1 to master-2 and master-3
sudo scp -r /etc/kubernetes/pki [email protected]:/etc/kubernetes/pki
sudo scp -r /etc/kubernetes/pki [email protected]:/etc/kubernetes/pki

- Join the additional masters to the cluster
sudo kubeadm join 172.16.0.202:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f40bed66ee0ddf93157f1c3f98d7aa9378c6d93ec1c5b4177d9e616bfb43534b \
    --control-plane

5. Join the worker nodes

  • Environment preparation: ... ... (as in step 1 above)
  • Load the flannel image
  • On a master, print the join command for new nodes
sudo kubeadm token create --print-join-command

  • Join the node — run on each worker
sudo kubeadm join 172.16.0.202:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f40bed66ee0ddf93157f1c3f98d7aa9378c6d93ec1c5b4177d9e616bfb43534b
  • Check node status — on a master
[app@master-1 cfg]$ kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-node-1   Ready    <none>   153m   v1.15.1
k8s-node-2   Ready    <none>   151m   v1.15.1
master-1     Ready    master   167m   v1.15.1
master-2     Ready    master   159m   v1.15.1
master-3     Ready    master   157m   v1.15.1
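
It is also worth confirming that the control-plane pods are all Running:
kubectl get pods -n kube-system -o wide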

Installing the Dashboard — run on master-1

  • Download the kubernetes-dashboard.yaml file
cd /data/kubernetes-dashboard
wget http://pencil-file.oss-cn-hangzhou.aliyuncs.com/blog/kubernetes-dashboard.yaml
  • Edit the image source and Service type in kubernetes-dashboard.yaml
 -- Change the image source:
    in vim, search for image and point it at
     registry.cn-hangzhou.aliyuncs.com/lynchj/kubernetes-dashboard-amd64:v1.10.1   // pick whichever version fits your setup
 -- Search for the Service definition and set
    type: NodePort
    (add the type field if it is not present)
as in:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001    # the exposed access port
  selector:
    k8s-app: kubernetes-dashboard
  • Create the dashboard pod
kubectl apply -f kubernetes-dashboard.yaml    // create it
kubectl get pods --namespace=kube-system      // kubernetes-dashboard should appear under NAME
  • Create a dashboard admin account
vim admin-token.yaml 
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
Create the account:
 kubectl create -f admin-token.yaml
  • Retrieve the account's token
kubectl get secret -n kube-system |grep admin|awk '{print $1}'
Output: admin-token-f5q68
kubectl describe secret admin-token-f5q68 -n kube-system|grep '^token'|awk '{print $2}'
Output: the token for logging in as admin
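The two lookups can be combined into one line (assuming a single admin-token secret):
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^admin-token/{print $1}') | awk '/^token/{print $2}'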
  • Log in
https://172.16.0.105:30001

References

https://blog.csdn.net/qq_31547771/article/details/100699573
https://blog.csdn.net/wangshuminjava/article/details/92783296
