Kubernetes 1.14.1 Intranet Cluster Deployment

Part 1: Host Environment Initialization

All hosts run CentOS 7, and their environments are initialized in batch with a script. Run the script below on every host (put the script on an internal web server, then on each host run curl ..... | bash).

The script performs the following steps in order (some values need to be changed to match your environment):

  1. Disable SELinux
  2. Disable the Firewalld firewall
  3. Set the intranet DNS servers
  4. Switch to the intranet Yum repository
  5. Use the intranet NTP time service
  6. Update the hostname
  7. Disable the rsyslog logging service
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail
echo ">>> Kernel: $(uname -r)"

echo ">>> 1. Disabling SELinux"
cp /etc/sysconfig/selinux{,.orig}
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
setenforce 0 || true

echo ">>> 2. Disabling Firewalld"
systemctl stop firewalld
systemctl disable firewalld

echo ">>> 3. Adjusting DNS Configuration"
cp /etc/resolv.conf{,.orig}
cat << 'EOF' > "/etc/resolv.conf"
domain cloud.company.com
search cloud.company.com company.com
options timeout:2
nameserver 10.1.1.2
nameserver 10.1.1.1
EOF

echo ">>> 4. Adjusting Yum Source to local"
mv /etc/yum.repos.d/CentOS-Base.repo{,.orig}
curl -fsSL http://yum.cloud.company.com/CentOS-Base.repo -o /etc/yum.repos.d/CentOS-Base.repo

echo ">>> 5. Enabling NTP"
yum -y install chrony
cp /etc/chrony.conf{,.orig}
cat << 'EOF' > "/etc/chrony.conf"
server ntp.cloud.company.com iburst
server 10.1.1.2 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF
systemctl enable chronyd
systemctl start chronyd
timedatectl set-ntp true
timedatectl set-local-rtc true

echo ">>> 6. Updating hostname"
HOSTNAME=$(hostname -s)
hostnamectl set-hostname ${HOSTNAME}.cloud.company.com

echo ">>> 7. Disable rsyslog"
systemctl disable rsyslog

sync
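
A quick sanity check after the script finishes (these commands are not part of the original script, just standard tools to confirm the changes took effect):

getenforce                      # "Permissive" now, "Disabled" after a reboot
systemctl is-active firewalld   # expect "inactive"
systemctl is-active chronyd     # expect "active"
chronyc sources                 # the intranet NTP servers should be listed
hostname -f                     # expect <shortname>.cloud.company.com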

第2部分:安装docker环境

Run the following commands on every host. They install docker-ce-18.06.2 and then configure Docker to use the overlay2 storage driver. See CRI installation.

echo ">>> Install Docker"
yum install -y docker-ce-18.06.2.ce
systemctl enable docker

echo ">>> Customizing Docker"
mkdir -p /etc/docker/
cat > /etc/docker/daemon.json <
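
To confirm Docker picked up the configuration (an optional check, not in the original steps):

docker info | grep -i 'storage driver'   # should report: Storage Driver: overlay2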

Part 3: Installing the Kubernetes Core Components

We install Kubernetes with the kubeadm tool; this part follows the official guide Creating a single control-plane cluster with kubeadm.

3.1 Installing the master node

First, install kubelet, kubeadm, and kubectl via yum:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
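
Note that --disableexcludes=kubernetes assumes a yum repository named kubernetes already exists on the host. The repo file below is only a sketch; the baseurl is my assumption of what an internal mirror of the Kubernetes RPM repository could look like, and it would be created before running the install command above:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
# baseurl is an assumed internal mirror of the upstream Kubernetes RPM repository
baseurl=http://yum.cloud.company.com/kubernetes/
enabled=1
gpgcheck=0
EOF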

Then enable the bridge netfilter sysctls so that bridged traffic passes through iptables:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
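
If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded. Loading it explicitly (and persisting it across reboots) is a precaution I would add here; it is not part of the original steps:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system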

Next, prepare the images for the Kubernetes components. The image list can be obtained with the kubeadm config images list command.
Since our intranet already has a Docker registry built with Harbor, I copied the images into that registry, and the master node then pulls the Kubernetes images from the local registry. The script is as follows:

images=(kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1 kube-proxy:v1.14.1 pause:3.1 etcd:3.3.10 coredns:1.3.1)
for imageName in "${images[@]}" ; do
  docker pull harbor.cloud.company.com/mirror/$imageName
  docker tag harbor.cloud.company.com/mirror/$imageName k8s.gcr.io/$imageName
  docker rmi harbor.cloud.company.com/mirror/$imageName
done
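
The steps above assume the images are already in Harbor. The original does not show how they got there; one way to do it, assuming a machine that has both internet access and push permission to the harbor.cloud.company.com/mirror project (both are my assumptions), would be:

images=(kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1 kube-proxy:v1.14.1 pause:3.1 etcd:3.3.10 coredns:1.3.1)
docker login harbor.cloud.company.com            # credentials with push rights to the mirror project
for imageName in "${images[@]}" ; do
  docker pull k8s.gcr.io/$imageName                                     # pull from the public registry
  docker tag k8s.gcr.io/$imageName harbor.cloud.company.com/mirror/$imageName
  docker push harbor.cloud.company.com/mirror/$imageName                # push into the intranet Harbor
done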

At this point the master node has the kubelet, kubeadm, and kubectl tools installed and the images cached locally, so Kubernetes can be deployed on the master node with kubeadm. It only takes one line:

kubeadm init  --pod-network-cidr=10.244.0.0/16

Running it prints output like the following:

I0424 16:01:53.960348   23668 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0424 16:01:53.960403   23668 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master.cloud.company.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.*.*.*]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master.cloud.company.com localhost] and IPs [10.*.*.* 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master.cloud.company.com localhost] and IPs [10.*.*.* 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.503121 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master.cloud.company.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master.cloud.company.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: yci*****************ttd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.*.*.*:6443 --token yci*****************ttd \
    --discovery-token-ca-cert-hash sha256:eba**********************************d5f 
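
Following the instructions in the output above, configure kubectl for your user and check the control plane. The coredns pods will stay Pending until the network plugin from section 3.2 is installed; that is expected:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
kubectl get pods -n kube-system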

3.2 Installing the network plugin

After kubeadm init completes on the master node, a network plugin still needs to be installed. I use the officially recommended flannel as the cluster's network plugin; this is again a single line:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml

If your environment is an isolated intranet like mine, first copy the flannel image into the local registry as well, then distribute it to every node:

docker pull harbor.cloud.company.com/mirror/flannel:v0.11.0-amd64
docker tag harbor.cloud.company.com/mirror/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi harbor.cloud.company.com/mirror/flannel:v0.11.0-amd64
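
In a fully offline network the raw.githubusercontent.com URL above is also unreachable. A workable approach, assuming you fetch kube-flannel.yml on a machine with internet access and copy it to the master beforehand, is to apply the local file:

# on a machine with internet access:
curl -fsSLO https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
# copy kube-flannel.yml to the master, then on the master:
kubectl apply -f ./kube-flannel.yml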

3.3 Installing worker nodes

First, preparing a node is much the same as preparing the master; basically everything up to the first half of section 3.1 still applies. There are only two differences:

  1. Do not run kubeadm init
  2. Fewer images need to be prepared: only kube-proxy, pause, and flannel (flannel is handled as shown after the script below)
images=(kube-proxy:v1.14.1 pause:3.1)
for imageName in "${images[@]}" ; do
  docker pull harbor.cloud.company.com/mirror/$imageName
  docker tag harbor.cloud.company.com/mirror/$imageName k8s.gcr.io/$imageName
  docker rmi harbor.cloud.company.com/mirror/$imageName
done
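
The flannel image is pulled the same way as in section 3.2; the only difference from the loop above is that it is tagged back to quay.io/coreos/flannel rather than k8s.gcr.io:

docker pull harbor.cloud.company.com/mirror/flannel:v0.11.0-amd64
docker tag harbor.cloud.company.com/mirror/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi harbor.cloud.company.com/mirror/flannel:v0.11.0-amd64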

Once a node has the kubelet, kubeadm, and kubectl tools installed, plus the kube-proxy, pause, and flannel images, it can be joined to the cluster. This is again a single command, the one printed at the end of section 3.1:

kubeadm join 10.*.*.*:6443 --token yci*****************ttd \
    --discovery-token-ca-cert-hash sha256:eba**********************************d5f 

Once the command finishes, kubeadm has configured the node and joined it to the cluster.
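
To confirm the node actually joined, check from the master; it may take a short while for the node to become Ready once flannel is running on it:

kubectl get nodes -o wide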
