Building a Kubernetes Cluster on CentOS 7.5

For easy identification, every command to be executed is set off from the surrounding text.
References:
https://www.kubernetes.org.cn/4948.html
https://www.kubernetes.org.cn/5025.html

Environment

  • Servers
    Virtual IP: 192.168.3.88
    k8s-master, 192.168.3.80
    k8s-node1, 192.168.3.81
    k8s-node2, 192.168.3.82
    k8s-node3, 192.168.3.83
    k8s-storage1, 192.168.3.86
    docker-registry, 192.168.3.89

  • Base system
    Minimal installation from CentOS-7-x86_64-Minimal-1810

The preparation covers the following:

  • Update the system
  • Disable SELinux
  • Disable the swap partition
  • Set the timezone and synchronize the clock
  • Upgrade the kernel

After the OS is installed, run the following commands to set up the base environment:

yum update -y
yum install wget net-tools yum-utils vim -y
Switch the yum repos to the Aliyun mirror
-- back up the current repo file first
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
-- download the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
-- disable the c7-media repo
yum-config-manager --disable c7-media
-- or edit /etc/yum.repos.d/CentOS-Media.repo with vim and set enabled to 0
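The manual edit can be done with sed as well. A small demo on a scratch copy, where the repo file content below is a minimal stand-in for /etc/yum.repos.d/CentOS-Media.repo:

```shell
# Demo of the enabled=0 edit on a scratch file (stand-in for CentOS-Media.repo).
REPO=$(mktemp)
printf '[c7-media]\nname=CentOS-$releasever - Media\nenabled=1\n' > "$REPO"
sed -i 's/^enabled=1/enabled=0/' "$REPO"
cat "$REPO"
```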

  • Clock synchronization
    rm -rf /etc/localtime
    vim /etc/sysconfig/clock
    -- add Zone=Asia/Shanghai to the file
    ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    reboot
    Run
    date -R
    and confirm the offset is +0800; if it is not, repeat the steps above.
    -- install the ntp service
    yum install ntp -y
    -- set the China timezone and enable synchronization
    timedatectl set-timezone Asia/Shanghai
    timedatectl set-ntp yes
    -- check the time to make sure it is in sync
    timedatectl
    Alternatively, the following two commands also synchronize the clock:
    yum install -y ntpdate
    ntpdate -u ntp.api.bz
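The offset check can be scripted too. A sketch that pulls the UTC offset out of an RFC 2822 date line such as the one printed by `date -R`; the sample string is hardcoded here, so on a real host use `sample="$(date -R)"` instead:

```shell
# Extract the trailing UTC offset field from an RFC 2822 date string.
sample='Tue, 26 Feb 2019 10:00:00 +0800'
offset=$(echo "$sample" | awk '{print $NF}')
echo "$offset"   # +0800 means the host is on China Standard Time
```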

  • Disable SELinux and firewalld
    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0
    sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
    -- or edit /etc/sysconfig/selinux with vim and change the SELINUX= line to disabled

  • Passwordless SSH from the master to the other nodes
    First, run the following on every machine:
    ssh <local IP>
    exit

    Then run the following commands on the master node:
    ssh-keygen -t rsa
    ssh-copy-id 192.168.3.81
    ssh-copy-id 192.168.3.82
    ssh-copy-id 192.168.3.83
    Note: every machine must also be able to ssh to itself without a password.

  • Disable the swap partition
    swapoff -a
    yes | cp /etc/fstab /etc/fstab_bak
    cat /etc/fstab_bak |grep -v swap > /etc/fstab
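The filtering step above can be tried safely on a scratch file first. A demo with hypothetical fstab content; on a real host the same grep runs against /etc/fstab_bak:

```shell
# Demo: strip the swap mount line from a sample fstab.
FSTAB=$(mktemp)
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"
grep -v swap "$FSTAB" > "$FSTAB.new"
cat "$FSTAB.new"
```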

  • Make bridged packets pass through iptables
    echo """
    vm.swappiness = 0
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    """ > /etc/sysctl.conf

    sysctl -p

  • Check the distribution release
    lsb_release -a
    If the command is not found, install it first:
    yum install -y redhat-lsb

  • Every machine in the cluster must have a unique MAC address, product UUID, and hostname; verify with:
    cat /sys/class/dmi/id/product_uuid
    ip link

  • Fixing "cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory"
    Install the related packages first:
    yum install -y epel-release
    yum install -y conntrack ipvsadm ipset jq sysstat curl iptables
    Then load the required kernel modules:
    modprobe br_netfilter
    modprobe ip_vs
    Append the configuration entries to /etc/sysctl.conf so that sysctl -p picks them up:
    cat >> /etc/sysctl.conf <<EOF
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
    vm.swappiness=0
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    EOF

    sysctl -p
    Done.
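Appending blindly duplicates keys if the step is re-run. A sketch of an idempotent variant, with a scratch file standing in for /etc/sysctl.conf:

```shell
# Append each sysctl key only if it is not already present in the file.
CONF=$(mktemp)
for kv in 'net.bridge.bridge-nf-call-iptables=1' \
          'net.bridge.bridge-nf-call-ip6tables=1' \
          'net.ipv4.ip_forward=1' \
          'vm.swappiness=0'; do
  key=${kv%%=*}
  grep -q "^$key=" "$CONF" || echo "$kv" >> "$CONF"
done
cat "$CONF"
```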


  • Upgrade the kernel
    Import the signing key:
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    Install the elrepo yum repository:
    rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
    List the available kernel versions:
    yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
    Upgrade to the latest mainline kernel:
    yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y
    Or download the 4.20 mainline build from https://elrepo.org/linux/kernel/el7/x86_64/RPMS/:
    wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml-4.20.13-1.el7.elrepo.x86_64.rpm
    wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.20.13-1.el7.elrepo.x86_64.rpm
    yum install -y kernel-ml-4.20.13-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.20.13-1.el7.elrepo.x86_64.rpm

    Check that the default kernel is newer than 4.14; otherwise adjust the default boot entry:
    grub2-editenv list
    List the installed kernels:
    cat /boot/grub2/grub.cfg |grep menuentry
    Set the default boot kernel by name:
    grub2-set-default "CentOS Linux (4.20.13-1.el7.elrepo.x86_64) 7 (Core)"
    Or select it by index (0 is the first entry):
    grub2-set-default 0
    Reboot to switch kernels:
    reboot
    Verify the running kernel:
    uname -r
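The index passed to grub2-set-default counts the menuentry lines from the top, so 0 selects the first (usually the newest) kernel. A demo of the same grep on a sample grub.cfg fragment:

```shell
# Count boot entries the way the grep above does (sample grub.cfg fragment).
GRUB=$(mktemp)
printf '%s\n' \
  "menuentry 'CentOS Linux (4.20.13-1.el7.elrepo.x86_64) 7 (Core)' {" \
  "menuentry 'CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)' {" > "$GRUB"
grep -c menuentry "$GRUB"
```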

  • Snapshot the VM
    Back up the VM state now that the steps above are done; if the environment gets into a bad state later, you can restore directly from this snapshot.


Install Docker

  • Install Docker on every host
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

    yum makecache fast
    yum install -y docker-ce
Patch the systemd unit file so Docker sets the iptables FORWARD policy to ACCEPT:
    sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
Start Docker:
    systemctl daemon-reload
    systemctl enable docker
    systemctl start docker

Set up a private docker-registry

Configure the Docker mirror accelerator (on the registry host only; 192.168.3.89 in this guide):
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{"registry-mirrors": ["https://7471d7b2.mirror.aliyuncs.com"]}
EOF
systemctl daemon-reload
systemctl restart docker
Install the registry:
docker pull registry:latest
Download https://pan.baidu.com/s/1ZdmgnrYGVobc22FX__vYwg (extraction code: 69gq), copy the file to the registry machine, then load and start the image (assuming the file was placed in /var/lib/docker):
docker load -i /var/lib/docker/k8s-repo-1.13.0
Run docker images to confirm the image is loaded, then start it:
docker run --restart=always -d -p 80:5000 --name repo harbor.io:1180/system/k8s-repo:v1.13.0
or
docker run --restart=always -d -p 80:5000 --privileged=true --log-driver=none --name registry -v /home/registrydata:/tmp/registry harbor.io:1180/system/k8s-repo:v1.13.0

Open http://192.168.3.89/v2/_catalog in a browser

If the browser returns the repository catalog as JSON, the registry is working.

  • Configure the private registry on all non-registry hosts
    mkdir -p /etc/docker
    echo -e '{\n"insecure-registries":["k8s.gcr.io", "gcr.io", "quay.io"]\n}' > /etc/docker/daemon.json
    systemctl restart docker
    Set this to the IP of the machine running the registry:
    REGISTRY_HOST="192.168.3.89"
    Update the hosts file:
    yes | cp /etc/hosts /etc/hosts_bak
    cat /etc/hosts_bak|grep -vE '(gcr.io|harbor.io|quay.io)' > /etc/hosts
    echo """ $REGISTRY_HOST gcr.io harbor.io k8s.gcr.io quay.io """ >> /etc/hosts
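The hosts rewrite above can be rehearsed on a scratch file. A demo using this guide's registry IP (substitute your own):

```shell
# Demo of the /etc/hosts rewrite on a scratch file.
REGISTRY_HOST="192.168.3.89"
HOSTS=$(mktemp)
printf '%s\n' '127.0.0.1 localhost' '1.2.3.4 gcr.io' > "$HOSTS"
# Drop any stale registry mappings, then append the fresh one.
grep -vE '(gcr.io|harbor.io|quay.io)' "$HOSTS" > "$HOSTS.new"
echo "$REGISTRY_HOST gcr.io harbor.io k8s.gcr.io quay.io" >> "$HOSTS.new"
cat "$HOSTS.new"
```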

Install and configure Kubernetes (master & worker)

First download https://pan.baidu.com/s/1t3EWAt4AET7JaIVIbz-zHQ (extraction code: djnf) and place the archive on every master and worker host; this guide uses /home:
yum install -y socat keepalived ipvsadm
cd /home/
scp k8s-v1.13.0-rpms.tgz 192.168.3.81:/home
scp k8s-v1.13.0-rpms.tgz 192.168.3.82:/home
scp k8s-v1.13.0-rpms.tgz 192.168.3.83:/home

Then run the following on each machine in turn:
cd /home
tar -xzvf k8s-v1.13.0-rpms.tgz
cd k8s-v1.13.0
rpm -Uvh * --force
systemctl enable kubelet
kubeadm version -o short
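The three scp lines above can be written as a loop over the worker IPs from this guide. The commands are echoed here so the loop can be inspected; drop the `echo` to actually copy the archive:

```shell
# Generate one scp command per worker node (echoed, not executed).
CMDS=$(for node in 192.168.3.81 192.168.3.82 192.168.3.83; do
  echo "scp /home/k8s-v1.13.0-rpms.tgz root@$node:/home"
done)
echo "$CMDS"
```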

  • Deploy the HA masters
    First check the NIC device name with ifconfig -a; here it is enp0s3.
    On 192.168.3.80 run:
    cd ~/
    echo """
    CP0_IP=192.168.3.80
    CP1_IP=192.168.3.81
    CP2_IP=192.168.3.82
    VIP=192.168.3.88
    NET_IF=enp0s3
    CIDR=10.244.0.0/16
    """ > ./cluster-info

    bash -c "$(curl -fsSL https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/kubeha-gen.sh)"
This step can take 2 to 10 minutes; before the script starts installing, it shows the collected settings once for confirmation.
When it finishes, save the join information it prints:
    join command:
    kubeadm join 192.168.3.88:6443 --token 29ch4f.0zshhxkh0ii4q9ej --discovery-token-ca-cert-hash sha256:a3cb2e754064b1ea4871a12f2a31dd2a776cc32a6dde57e5009cabca520cb56f
  • Install helm
    If you need helm, first download the offline package from https://pan.baidu.com/s/1B7WHuomXOmZKhHai4tV5MA (extraction code: kgzi):
    cd /home/
    tar -xzvf helm-v2.12.0-linux-amd64.tar
    cd linux-amd64
    cp helm /usr/local/bin
    helm init --service-account=kubernetes-dashboard-admin --skip-refresh --upgrade
    helm version
  • Join the worker nodes
    On each node that should join the cluster, run:
    kubeadm join 192.168.3.88:6443 --token 29ch4f.0zshhxkh0ii4q9ej --discovery-token-ca-cert-hash sha256:a3cb2e754064b1ea4871a12f2a31dd2a776cc32a6dde57e5009cabca520cb56f
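If the hash from the join command gets lost, it can be recomputed from the cluster CA using the recipe from the Kubernetes docs. kubeadm keeps its CA at /etc/kubernetes/pki/ca.crt; a throwaway self-signed certificate is generated below so the pipeline can be tried anywhere:

```shell
# Generate a demo CA cert (on a real master, use /etc/kubernetes/pki/ca.crt).
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' -days 1 \
  -keyout "$DIR/ca.key" -out "$DIR/ca.crt" 2>/dev/null
# Hash of the DER-encoded public key, as expected by --discovery-token-ca-cert-hash.
HASH=$(openssl x509 -pubkey -in "$DIR/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```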

Mount extended storage


  • The content of kubeha-gen.sh is reproduced below (it is recommended to download the script locally and edit details such as the email address before running it):
    #!/bin/bash

function check_parm()
{
if [ "${2}" == "" ]; then
echo -n "${1}"
return 1
else
return 0
fi
}

if [ -f ./cluster-info ]; then
source ./cluster-info
fi

check_parm "Enter the IP address of master-01: " ${CP0_IP}
if [ $? -eq 1 ]; then
read CP0_IP
fi
check_parm "Enter the IP address of master-02: " ${CP1_IP}
if [ $? -eq 1 ]; then
read CP1_IP
fi
check_parm "Enter the IP address of master-03: " ${CP2_IP}
if [ $? -eq 1 ]; then
read CP2_IP
fi
check_parm "Enter the VIP: " ${VIP}
if [ $? -eq 1 ]; then
read VIP
fi
check_parm "Enter the Net Interface: " ${NET_IF}
if [ $? -eq 1 ]; then
read NET_IF
fi
check_parm "Enter the cluster CIDR: " ${CIDR}
if [ $? -eq 1 ]; then
read CIDR
fi

echo """
cluster-info:
master-01: ${CP0_IP}
master-02: ${CP1_IP}
master-03: ${CP2_IP}
VIP: ${VIP}
Net Interface: ${NET_IF}
CIDR: ${CIDR}
"""
echo -n 'Please print "yes" to continue or "no" to cancel: '
read AGREE
while [ "${AGREE}" != "yes" ]; do
if [ "${AGREE}" == "no" ]; then
exit 0;
else
echo -n 'Please print "yes" to continue or "no" to cancel: '
read AGREE
fi
done

mkdir -p ~/ikube/tls

IPS=(${CP0_IP} ${CP1_IP} ${CP2_IP})

PRIORITY=(100 50 30)
STATE=("MASTER" "BACKUP" "BACKUP")
HEALTH_CHECK=""
for index in 0 1 2; do
HEALTH_CHECK=${HEALTH_CHECK}"""
real_server ${IPS[$index]} 6443 {
weight 1
SSL_GET {
url {
path /healthz
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
"""
done

for index in 0 1 2; do
ip=${IPS[${index}]}
echo """
global_defs {
router_id LVS_DEVEL
}

vrrp_instance VI_1 {
state ${STATE[${index}]}
interface ${NET_IF}
virtual_router_id 80
priority ${PRIORITY[${index}]}
virtual_ipaddress {
${VIP}
}
}

virtual_server ${VIP} 6443 {
delay_loop 6
lb_algo loadbalance
lb_kind DR
nat_mask 255.255.255.0
persistence_timeout 0
protocol TCP

${HEALTH_CHECK}
}
""" > ~/ikube/keepalived-${index}.conf
scp ~/ikube/keepalived-${index}.conf ${ip}:/etc/keepalived/keepalived.conf

ssh ${ip} "
systemctl stop keepalived
systemctl enable keepalived
systemctl start keepalived
kubeadm reset -f
rm -rf /etc/kubernetes/pki/"
done

echo """
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
controlPlaneEndpoint: "${VIP}:6443"
apiServer:
certSANs:
- ${CP0_IP}
- ${CP1_IP}
- ${CP2_IP}
- ${VIP}
networking:
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
podSubnet: ${CIDR}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
""" > /etc/kubernetes/kubeadm-config.yaml

kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf ${HOME}/.kube/config

kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/calico/rbac.yaml
curl -fsSL https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/calico/calico.yaml | sed "s!8.8.8.8!${CP0_IP}!g" | sed "s!10.244.0.0/16!${CIDR}!g" | kubectl apply -f -

JOIN_CMD=$(kubeadm token create --print-join-command)

for index in 1 2; do
ip=${IPS[${index}]}
ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf $ip:~/.kube/config

ssh ${ip} "${JOIN_CMD} --experimental-control-plane"
done

echo "Cluster create finished."

echo """
[req]
distinguished_name = req_distinguished_name
prompt = yes

[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_value = CN

stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_value = Dalian

localityName = Locality Name (eg, city)
localityName_value = Haidian

organizationName = Organization Name (eg, company)
organizationName_value = Channelsoft

organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_value = R & D Department

commonName = Common Name (eg, your name or your server's hostname)
commonName_value = *.multi.io

emailAddress = Email Address
emailAddress_value = [email protected]
""" > ~/ikube/tls/openssl.cnf
openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key
kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/traefik.yaml
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/metrics.yaml
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/kubernetes-dashboard.yaml

echo "Plugin install finished."
echo "Waiting for all pods into 'Running' status. You can press 'Ctrl + c' to terminate this waiting any time you like."
POD_UNREADY=`kubectl get pods -n kube-system 2>&1|awk '{print $3}'|grep -vE 'Running|STATUS'`
NODE_UNREADY=`kubectl get nodes 2>&1|awk '{print $2}'|grep 'NotReady'`
while [ "${POD_UNREADY}" != "" -o "${NODE_UNREADY}" != "" ]; do
sleep 1
POD_UNREADY=`kubectl get pods -n kube-system 2>&1|awk '{print $3}'|grep -vE 'Running|STATUS'`
NODE_UNREADY=`kubectl get nodes 2>&1|awk '{print $2}'|grep 'NotReady'`
done

echo

kubectl get cs
kubectl get nodes
kubectl get pods -n kube-system

echo """
join command:
`kubeadm token create --print-join-command`"""

