This article documents and explains the installation of Kubernetes and the deployment of a cluster.
To make installation in an offline environment easy, every package and image referenced below is downloaded first and then installed.
Software | Version |
---|---|
OS | CentOS-7-x86_64-1810 |
Docker | 19.03.8 |
Kubernetes | 1.18.3 |
Hostname | IP | Components |
---|---|---|
k8s-master | 192.168.6.170 | kubelet kube-apiserver kube-controller-manager kube-scheduler kube-proxy |
k8s-node-1 | 192.168.6.171 | kubelet kube-proxy |
k8s-node-2 | 192.168.6.172 | kubelet kube-proxy |
The following preparation must be performed on every machine in the cluster. If you build the cluster from virtual machines, note that each VM must be given a unique MAC address.
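A quick way to verify this, as suggested by the official kubeadm prerequisites, is to compare the MAC address and product_uuid on every node; both must be unique:

```bash
ip link                              # compare MAC addresses across nodes
cat /sys/class/dmi/id/product_uuid   # compare product_uuid across nodes
```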
Stop and disable firewalld:
systemctl stop firewalld
systemctl disable firewalld
Edit the /etc/selinux/config file and change SELINUX to disabled:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
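The config file change only takes effect after a reboot; to also drop SELinux into permissive mode for the current session, you can run:

```bash
setenforce 0   # switch SELinux to permissive until the next reboot
```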
Disable swap by commenting out the swap entry in /etc/fstab:
sed -ri 's/.*swap.*/#&/' /etc/fstab
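The sed command only keeps swap from being mounted at the next boot; to turn it off in the running system as well, you can run:

```bash
swapoff -a   # disable all swap devices immediately
free -h      # verify that the Swap line shows 0B
```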
According to the cluster plan, set the hostname on each host:
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node-1
hostnamectl set-hostname k8s-node-2
Append the following to the /etc/hosts file:
192.168.6.170 k8s-master
192.168.6.171 k8s-node-1
192.168.6.172 k8s-node-2
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
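The two bridge settings require the br_netfilter kernel module. To apply everything immediately, without waiting for the reboot mentioned below, you can run:

```bash
modprobe br_netfilter   # load the bridge netfilter module
sysctl --system         # reload settings from all sysctl configuration files
```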
After all of the settings above are in place, reboot the machines for them to take effect.
Create the repository file /etc/yum.repos.d/kubernetes.repo with the following content:
[kubernetes]
name=Kubernetes Repository
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
Download the kubeadm, kubelet, and kubectl RPMs (together with their dependencies) without installing them:
yum install --downloadonly --downloaddir=./ -y kubeadm-1.18.3 --disableexcludes=kubernetes
yum install --downloadonly --downloaddir=./ -y kubelet-1.18.3 --disableexcludes=kubernetes
yum install --downloadonly --downloaddir=./ -y kubectl-1.18.3 --disableexcludes=kubernetes
The yum install --downloadonly commands above fetch the RPMs together with all of their dependencies. The downloaded packages and versions are:
kubeadm 1.18.3-0
conntrack-tools 1.4.4-7.el7
cri-tools 1.13.0-0
kubectl 1.18.3-0
kubelet 1.18.3-0
kubernetes-cni 0.8.6-0
libnetfilter_cthelper 1.0.0-11.el7
libnetfilter_cttimeout 1.0.0-7.el7
libnetfilter_queue 1.0.2-2.el7_2
socat 1.7.3.2-2.el7
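Since the goal is an offline install, copy the downloaded RPMs to every machine in the cluster and install them there. A sketch (the destination path is illustrative):

```bash
# copy the RPM directory to each node; adjust path and host as needed
scp -r ./rpm root@192.168.6.171:/root/k8s/rpm
scp -r ./rpm root@192.168.6.172:/root/k8s/rpm
```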
[root@localhost k8s]# rpm -ivh *.rpm
Preparing...                            ################## [100%]
Updating / installing...
   1:socat-1.7.3.2-2.el7                ################## [ 10%]
   2:libnetfilter_queue-1.0.2-2.el7_2   ################## [ 20%]
   3:libnetfilter_cttimeout-1.0.0-7.el7 ################## [ 30%]
   4:libnetfilter_cthelper-1.0.0-11.el7 ################## [ 40%]
   5:conntrack-tools-1.4.4-7.el7        ################## [ 50%]
   6:kubelet-1.18.3-0                   ################## [ 60%]
   7:kubernetes-cni-0.8.6-0             ################## [ 70%]
   8:kubectl-1.18.3-0                   ################## [ 80%]
   9:cri-tools-1.13.0-0                 ################## [ 90%]
  10:kubeadm-1.18.3-0                   ################## [100%]
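After the packages are installed, enable the kubelet service so it starts on boot (kubeadm init/join takes care of actually starting it):

```bash
systemctl enable kubelet
```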
The Docker installation itself is omitted here.
Configure a domestic registry mirror (for example the Docker China or Aliyun mirror) to speed up image downloads:
[root@localhost docker]# vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"]
}
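For the daemon.json change to take effect, restart Docker and check that the mirror is active:

```bash
systemctl restart docker
docker info | grep -A1 'Registry Mirrors'   # should list the configured mirror
```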
[root@localhost v1.18.4]# kubeadm config images list
W0620 14:07:32.154847 14850 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.4
k8s.gcr.io/kube-controller-manager:v1.18.4
k8s.gcr.io/kube-scheduler:v1.18.4
k8s.gcr.io/kube-proxy:v1.18.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Since k8s.gcr.io is unreachable from mainland China, pull the images from the Aliyun registry instead.
The kube* image versions are changed to 1.18.3 here, matching the kubeadm version actually installed.
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.3 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.3 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.3 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.3 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
Re-tag the downloaded images so that their names match the k8s.gcr.io names kubeadm expects; a sketch of the commands follows below.
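A minimal re-tagging sketch, assuming exactly the image list pulled above (the docker rmi step just removes the now-redundant Aliyun tags):

```bash
ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.18.3 kube-controller-manager:v1.18.3 \
           kube-scheduler:v1.18.3 kube-proxy:v1.18.3 \
           pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  docker tag $ALIYUN/$img k8s.gcr.io/$img   # give the image its expected name
  docker rmi $ALIYUN/$img                   # drop the mirror tag
done
```

After re-tagging, `docker images` should list: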
REPOSITORY                           TAG       IMAGE ID       SIZE
k8s.gcr.io/kube-proxy                v1.18.3   3439b7546f29   117MB
k8s.gcr.io/kube-apiserver            v1.18.3   7e28efa976bd   173MB
k8s.gcr.io/kube-controller-manager   v1.18.3   da26705ccb4b   162MB
k8s.gcr.io/kube-scheduler            v1.18.3   76216c34ed0c   95.3MB
k8s.gcr.io/pause                     3.2       80d28bedfe5d   683kB
k8s.gcr.io/coredns                   1.6.7     67da37a9a360   43.8MB
k8s.gcr.io/etcd                      3.4.3-0   303ce5db0e90   288MB
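For the offline nodes, the re-tagged images can be exported to a tarball and imported on each machine (the file name is illustrative):

```bash
# on the machine with internet access
docker save -o k8s-v1.18.3-images.tar \
  k8s.gcr.io/kube-apiserver:v1.18.3 k8s.gcr.io/kube-controller-manager:v1.18.3 \
  k8s.gcr.io/kube-scheduler:v1.18.3 k8s.gcr.io/kube-proxy:v1.18.3 \
  k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.7

# on each offline node
docker load -i k8s-v1.18.3-images.tar
```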
We use the official kubeadm tool to create and configure the k8s cluster. First, generate a default configuration file:
kubeadm config print init-defaults > init.default.yaml
W0703 10:50:00.528184 27266 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@k8s-master k8s]# ls
images init.default1.yaml init.default.yaml rpm
[root@k8s-master k8s]# vi init.default1.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.6.170
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
In this file, change advertiseAddress to the master node's physical IP, and set kubernetesVersion to the version actually being installed (v1.18.3 here). Then run the init:
kubeadm init --config=init.default1.yaml
The console output is as follows:
W0703 10:46:38.413591 23954 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.6.170]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.6.170 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.6.170 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0703 10:46:45.105711 23954 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0703 10:46:45.106934 23954 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.505005 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.6.170:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3daaead0025c6c7097ceb6c66cd379f9fdae3d076dddb3a6c21d2df024146dac
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
或
export KUBECONFIG=/etc/kubernetes/admin.conf
Log in to each Node machine and run the following command:
kubeadm join 192.168.6.170:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3daaead0025c6c7097ceb6c66cd379f9fdae3d076dddb3a6c21d2df024146dac
This command comes from the last part of the output printed by kubeadm init on the master.
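The default bootstrap token is valid for 24 hours. If it has expired, a fresh join command can be generated on the master:

```bash
kubeadm token create --print-join-command
```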
Now check the node status from the master. The nodes report NotReady because no Pod network add-on has been deployed yet:
[root@k8s-master kubernetes]# kubectl get node -A
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   66m     v1.18.3
k8s-node-1   NotReady   <none>   2m51s   v1.18.3
k8s-node-2   NotReady   <none>   2m34s   v1.18.3
Download the calico manifest:
curl https://docs.projectcalico.org/v3.14/manifests/calico.yaml -o calico.yaml
See the official guide on Pod network add-ons:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
Pull the calico images referenced in calico.yaml ahead of time:
docker pull calico/node:v3.14.1
docker pull calico/pod2daemon-flexvol:v3.14.1
docker pull calico/cni:v3.14.1
docker pull calico/kube-controllers:v3.14.1
Pull (or import) these images on every machine in the cluster.
Then, on the master, run:
kubectl create -f calico.yaml
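The calico pods take a little while to start; their progress can be watched with:

```bash
kubectl get pods -n kube-system -w   # -w streams status changes until interrupted
```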
Once calico is up, the nodes reach the Ready state:
[root@k8s-master k8s]# kubectl get node -A
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   5h50m   v1.18.3
k8s-node-1   Ready    <none>   4h46m   v1.18.3
k8s-node-2   Ready    <none>   4h46m   v1.18.3
With the network plugin created, all pods should now be in the Running state:
[root@k8s-master rpm]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-76d4774d89-5pk96   1/1     Running   0          26m
kube-system   calico-node-2qvhs                          1/1     Running   0          26m
kube-system   calico-node-hrd48                          1/1     Running   0          26m
kube-system   coredns-66bff467f8-2tkcb                   1/1     Running   0          72m
kube-system   coredns-66bff467f8-h2v5l                   1/1     Running   0          72m
kube-system   etcd-k8s-master                            1/1     Running   0          72m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          72m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          72m
kube-system   kube-proxy-jw7wv                           1/1     Running   0          72m
kube-system   kube-proxy-z2jqr                           1/1     Running   0          67m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          72m
Check the overall cluster status:
[root@k8s-master rpm]# kubectl cluster-info
Kubernetes master is running at https://192.168.6.170:6443
KubeDNS is running at https://192.168.6.170:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
If you need to tear down a cluster created with kubeadm init, use the kubeadm reset command; it must be run on the master and on every node.
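A sketch of a full teardown, run on the master and on every node (the iptables and CNI cleanup steps are the ones kubeadm reset itself reminds you about in its output):

```bash
kubeadm reset -f   # undo what kubeadm init/join did, without a confirmation prompt
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/cni/net.d $HOME/.kube/config
```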
Download the bash-completion package (again as an offline RPM) and install it:
yum install --downloadonly --downloaddir=./ -y bash-completion
rpm -ivh bash-completion-2.1-8.el7.noarch.rpm
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
kubectl commands can now be completed on the command line with the Tab key.
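Optionally, completion can also be wired up for a shorter alias, following the snippet from the kubectl documentation:

```bash
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc   # Tab completion for the alias
```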
This completes the creation of the Kubernetes cluster. We can now deploy applications into the K8S cluster environment for study and experimentation.