I. Introduction to Kubernetes
Kubernetes (K8S for short) is an open-source container cluster management system that automates the deployment, scaling, and maintenance of container clusters. It is both a container orchestration tool and a leading distributed-architecture solution built on container technology. On top of Docker, it provides containerized applications with deployment and runtime management, resource scheduling, service discovery, and dynamic scaling, making large-scale container clusters far easier to manage.
A K8S cluster contains two kinds of nodes: master (management) nodes and worker nodes. Master nodes manage the cluster: they coordinate information exchange between nodes, schedule workloads, and manage the lifecycle of containers, Pods, Namespaces, PersistentVolumes, and other resources. Worker nodes supply compute resources for containers and Pods; all Pods and containers run on them, and each worker's kubelet service communicates with the masters to manage container lifecycles and with the other nodes in the cluster.
II. Environment Preparation
Kubernetes runs on physical servers or on virtual machines. This deployment uses Alibaba Cloud ECS instances; the hardware configuration is shown in the following table:
Note: perform the following steps on all nodes.
1. Disable the firewall, SELinux, and swap.
systemctl stop firewalld
systemctl disable firewalld
setenforce 0 # take effect immediately, without a reboot
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config # keep SELinux off after a reboot
swapoff -a # take effect immediately, without a reboot
sed -i '/ swap / s/^/#/' /etc/fstab # keep swap off after a reboot
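A quick sanity check that all three are off (a minimal sketch; expected values noted in the comments):
systemctl is-active firewalld # expect: inactive
getenforce # expect: Permissive (Disabled after a reboot)
free -m | grep -i swap # expect: 0 total, 0 used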
2. Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
modprobe br_netfilter # load the module first, or applying k8s.conf will fail with a load error
sysctl -p /etc/sysctl.d/k8s.conf # apply the configuration file
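To verify that the module is loaded and the parameters took effect:
lsmod | grep br_netfilter # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward # expect both values = 1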
3. Configure a domestic yum mirror
mkdir -p /etc/yum.repos.d/bak
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
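Rebuilding the yum cache confirms the new mirror is usable:
yum clean all
yum makecache
yum repolist # the aliyun base/extras/updates repositories should appear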
4. Configure a domestic Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
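To confirm the repository is reachable, and to see which versions are available should a specific one need to be pinned:
yum list kubeadm --showduplicates --disableexcludes=kubernetes | tail -n 5 # newest versions last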
5. Synchronize host time
yum install -y ntp
systemctl enable ntpd && systemctl start ntpd
cat <<EOF > /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
server ntp.aliyun.com iburst
#server 127.127.1.0 iburst local clock
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
EOF
systemctl restart ntpd
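Once ntpd restarts, synchronization can be checked; in ntpq output an asterisk marks the peer currently being synced against:
ntpq -p # expect an asterisk next to the aliyun server once sync completes
timedatectl | grep -i ntp # expect: NTP synchronized: yes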
III. Software Installation
Note: perform the following steps on all nodes.
1. Install Docker
# Step 1: Install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: Add the Docker repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: Refresh the cache and install Docker CE
yum makecache fast
yum -y install docker-ce-18.03.1.ce-1.el7.centos
# Step 4: Start the Docker service
systemctl start docker
# Step 5: Enable Docker at boot
systemctl enable docker
The docker service provides the compute resources for running containers and is the base platform on which all containers run.
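As the kubeadm preflight output below will warn, this Docker version defaults to the cgroupfs cgroup driver while systemd is the recommended one. The walkthrough below proceeds with the default and only receives a warning, but the driver can optionally be switched before initializing the cluster; a minimal sketch of the usual daemon.json change:
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup # expect: Cgroup Driver: systemd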
2. Install kubeadm, kubelet, and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet
kubelet communicates with the rest of the cluster and manages the lifecycle of the Pods and containers on its own node. kubeadm is the automated deployment tool for Kubernetes; it lowers the barrier to deployment and improves efficiency. kubectl is the command-line tool for managing the cluster.
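A quick check that all three components installed at matching versions on every node:
kubeadm version -o short # v1.18.8 in this walkthrough
kubectl version --client --short
rpm -q kubelet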
IV. Deploying the Master Nodes
Note: perform the following steps on the master node.
Generate the default initialization configuration file:
kubeadm config print init-defaults > init.default.yaml
Edit the initialization configuration file, setting the following fields (a sketch of where they sit in the file follows the list):
advertiseAddress: 172.22.207.108
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: "172.22.207.108:6443"
podSubnet: 192.168.0.0/16
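For orientation, advertiseAddress lives in the InitConfiguration section of init.default.yaml and the other three fields in ClusterConfiguration; an abridged sketch of the edited file (kubeadm config API v1beta2 for this release):
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.22.207.108 # this master's IP
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers # domestic mirror
controlPlaneEndpoint: "172.22.207.108:6443" # shared endpoint for all masters
networking:
  podSubnet: 192.168.0.0/16 # must match the Calico pool CIDR set later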
List and pull the images the K8S cluster needs:
kubeadm config images list --config=init.default.yaml
kubeadm config images pull --config=init.default.yaml
Run kubeadm init to install the master:
kubeadm init --config=init.default.yaml
W0827 23:08:14.128814 2282 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [iz2zeb5993exopp7ser0cmz kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.22.207.108]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [iz2zeb5993exopp7ser0cmz localhost] and IPs [172.22.207.108 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [iz2zeb5993exopp7ser0cmz localhost] and IPs [172.22.207.108 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0827 23:08:17.513849 2282 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0827 23:08:17.515440 2282 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.501502 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node iz2zeb5993exopp7ser0cmz as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node iz2zeb5993exopp7ser0cmz as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.22.207.108:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:8ee560a10e7e6d5476f7a2962dabb387088523a56799461bb444a4b09dc4eaad
Copy the admin config file into the home directory:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Check the node:
[root@iZ2zeb5993exopp7ser0cmZ ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
iz2zeb5993exopp7ser0cmz NotReady master 119s v1.18.8
[root@iZ2zeb5993exopp7ser0cmZ ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-mbwcx 0/1 Pending 0 2m17s
coredns-7ff77c879f-pxtj7 0/1 Pending 0 2m17s
etcd-iz2zeb5993exopp7ser0cmz 1/1 Running 0 2m27s
kube-apiserver-iz2zeb5993exopp7ser0cmz 1/1 Running 0 2m27s
kube-controller-manager-iz2zeb5993exopp7ser0cmz 1/1 Running 0 2m27s
kube-proxy-srjc5 1/1 Running 0 2m17s
kube-scheduler-iz2zeb5993exopp7ser0cmz 1/1 Running 0 2m27s
The node shows NotReady because no network plugin has been deployed yet.
Install the Calico network plugin:
curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -o calico.yaml
Edit calico.yaml. After the change:
- name: CALICO_IPV4POOL_IPIP
value: "Never"
- name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/16"
Create the Calico resources (an Alibaba Cloud registry mirror/accelerator is recommended to speed up the image pulls):
kubectl create -f calico.yaml
The node is now Ready:
[root@iZ2zeb5993exopp7ser0cmZ ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
iz2zeb5993exopp7ser0cmz Ready master 21m v1.18.8
[root@iZ2zeb5993exopp7ser0cmZ ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6f8bf57ff-tvmnx 1/1 Running 0 43s
calico-node-p29c8 1/1 Running 0 43s
coredns-7ff77c879f-mbwcx 1/1 Running 0 21m
coredns-7ff77c879f-pxtj7 1/1 Running 0 21m
etcd-iz2zeb5993exopp7ser0cmz 1/1 Running 0 21m
kube-apiserver-iz2zeb5993exopp7ser0cmz 1/1 Running 0 21m
kube-controller-manager-iz2zeb5993exopp7ser0cmz 1/1 Running 0 21m
kube-proxy-srjc5 1/1 Running 0 21m
kube-scheduler-iz2zeb5993exopp7ser0cmz 1/1 Running 0 21m
Package the certificates:
cd /etc/kubernetes && tar cvzf k8s-key.tgz pki/ca.* pki/sa.* pki/front-proxy-ca.* pki/etcd/ca.*
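The archive then has to be copied to each additional master before it joins; a sketch using scp (the two IPs are hypothetical placeholders for the second and third masters):
for host in 172.22.207.106 172.22.207.107; do # placeholder master IPs
  scp /etc/kubernetes/k8s-key.tgz root@${host}:/etc/kubernetes/
done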
Deploy the second master node (the third is done the same way).
Extract the certificates:
tar -zxvf k8s-key.tgz -C /etc/kubernetes/
Join it as a control-plane node:
kubeadm join 172.22.207.108:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:8ee560a10e7e6d5476f7a2962dabb387088523a56799461bb444a4b09dc4eaad --control-plane
Join worker nodes (no certificates need to be copied):
kubeadm join 172.22.207.108:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:8ee560a10e7e6d5476f7a2962dabb387088523a56799461bb444a4b09dc4eaad
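The bootstrap token above is valid for 24 hours by default; if it expires before a node joins, a fresh join command can be printed on an existing master:
kubeadm token create --print-join-command # run on an existing master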
Check the node status:
[root@iZ2zeb5993exopp7ser0cmZ kubernetes]# kubectl get node
NAME STATUS ROLES AGE VERSION
iz2zeb5993exopp7ser0cjz Ready master 3m42s v1.18.8
iz2zeb5993exopp7ser0ckz Ready <none> 77s v1.18.8
iz2zeb5993exopp7ser0clz Ready master 2m28s v1.18.8
iz2zeb5993exopp7ser0cmz Ready master 55m v1.18.8
[root@iZ2zeb5993exopp7ser0cmZ kubernetes]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6f8bf57ff-tvmnx 1/1 Running 0 34m
calico-node-p29c8 1/1 Running 0 34m
calico-node-pwvpg 1/1 Running 0 2m29s
calico-node-xr67w 1/1 Running 0 78s
calico-node-zwdkf 1/1 Running 0 3m43s
coredns-7ff77c879f-mbwcx 1/1 Running 0 55m
coredns-7ff77c879f-pxtj7 1/1 Running 0 55m
etcd-iz2zeb5993exopp7ser0cjz 1/1 Running 0 3m41s
etcd-iz2zeb5993exopp7ser0clz 1/1 Running 0 2m27s
etcd-iz2zeb5993exopp7ser0cmz 1/1 Running 0 55m
kube-apiserver-iz2zeb5993exopp7ser0cjz 1/1 Running 0 3m42s
kube-apiserver-iz2zeb5993exopp7ser0clz 1/1 Running 0 2m28s
kube-apiserver-iz2zeb5993exopp7ser0cmz 1/1 Running 0 55m
kube-controller-manager-iz2zeb5993exopp7ser0cjz 1/1 Running 0 3m41s
kube-controller-manager-iz2zeb5993exopp7ser0clz 1/1 Running 0 2m27s
kube-controller-manager-iz2zeb5993exopp7ser0cmz 1/1 Running 0 55m
kube-proxy-4dpqk 1/1 Running 0 78s
kube-proxy-srjc5 1/1 Running 0 55m
kube-proxy-tkdpt 1/1 Running 0 3m43s
kube-proxy-zzb6p 1/1 Running 0 2m29s
kube-scheduler-iz2zeb5993exopp7ser0cjz 1/1 Running 0 3m41s
kube-scheduler-iz2zeb5993exopp7ser0clz 1/1 Running 0 2m27s
kube-scheduler-iz2zeb5993exopp7ser0cmz 1/1 Running 0 55m