Nodes used to deploy a k8s cluster fall into the following three roles by purpose:
A single node can hold multiple roles; for example, the master node can double as the init node. Since slave nodes are optional, a non-HA cluster can be deployed on as little as a single machine, with a baseline configuration of no less than 2 CPUs and 4 GB of memory.
To demonstrate adding a slave node, this guide deploys one master plus one slave, with the master and init roles sharing one machine. The node plan is as follows:
Hostname | Node IP | Role | Deployed components
---|---|---|---
k8s-init | 10.0.129.84 | init | registry, httpd |
k8s-slave | 10.0.128.240 | slave | kubectl, kubeadm, kubelet, kube-proxy, flannel |
k8s-master | 10.0.129.84 | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel |
Component | Version | Notes
---|---|---
CentOS | 7.5.1804 | |
Kernel | Linux 3.10.0-862.el7.x86_64 | |
etcd | 3.2.24 | Deployed as a container; data is stored on a local host path by default
coredns | 1.2.6 | |
kubeadm | v1.13.3 | |
kubectl | v1.13.3 | |
kubelet | v1.13.3 | |
kube-proxy | v1.13.3 | |
flannel | v0.11.0 | Uses vxlan as the backend
httpd | v2.4.6 | Deployed on the init node; serves on port 80 by default (changed to 60081 in this guide)
registry | v2.3.1 | Deployed on the init node; serves on port 60080 by default
Nodes to operate on: all nodes (k8s-init, k8s-master, k8s-slave)
- Set the hostname. The hostname may contain only lowercase letters, digits, "." and "-", and must start and end with a lowercase letter or digit.
# On the master node
$ hostnamectl set-hostname k8s-master # set the master node's hostname
# On the slave node
$ hostnamectl set-hostname k8s-slave # set the slave node's hostname
- Configure /etc/hosts so the nodes can resolve each other (IPs and hostnames from the node plan above):
$ cat >> /etc/hosts <<EOF
10.0.129.84 k8s-init k8s-master
10.0.128.240 k8s-slave
EOF
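The hostname rules above can be checked before running hostnamectl. Below is a small sketch, where `valid_hostname` is a hypothetical helper name and the regex is derived from the rules stated in this section:

```shell
# Hypothetical helper: succeeds if the name satisfies the hostname rules above
# (lowercase letters, digits, "." and "-"; must start and end with a lowercase letter or digit)
valid_hostname() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$'
}

valid_hostname k8s-master && echo "k8s-master: ok"
valid_hostname K8S_Master || echo "K8S_Master: invalid"
```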
Nodes to operate on: all master and slave nodes (k8s-master, k8s-slave)
The steps below in this chapter use k8s-master as the example; the other nodes need the same operations (substitute the real ip and hostname of each machine).
- Open security-group ports. If there are no security-group restrictions between nodes (machines on the internal network can reach each other freely), skip this step; otherwise, make sure at least the following ports are reachable:
  - k8s-init node: TCP 7443, 60080, 60081; all UDP ports open
  - k8s-master node: TCP 6443, 2379, 2380; all UDP ports open
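Reachability of those ports can be spot-checked from a peer node with bash's built-in /dev/tcp. A sketch, where `port_open` is a hypothetical helper and the host and port values are examples taken from the plan above:

```shell
# Hypothetical helper: succeeds if a TCP connection to host:port opens within 2s
port_open() {
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

# e.g. from k8s-slave, check the master's apiserver and etcd ports
for port in 6443 2379 2380; do
  port_open 10.0.129.84 "$port" && echo "$port open" || echo "$port closed"
done
```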
Set the iptables FORWARD chain policy to ACCEPT
$ iptables -P FORWARD ACCEPT
$ swapoff -a # disable swap immediately
# prevent the swap partition from being auto-mounted at boot
$ sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
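The sed expression above comments out any fstab line containing ` swap `. Its effect can be previewed on a throwaway copy first; a sketch using a hypothetical /tmp path and sample fstab content:

```shell
# Build a sample fstab (hypothetical path, not the real /etc/fstab)
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same expression as above: prefix matching lines with '#'
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

grep swap /tmp/fstab.demo   # the swap line is now commented out
```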
$ sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
$ setenforce 0
$ systemctl disable firewalld && systemctl stop firewalld
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
EOF
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
# load the ipvs kernel modules used by kube-proxy (standard module list for CentOS 7)
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
Nodes to operate on: the k8s-init node
# Copy the installer package to the /opt directory of the k8s-init node
$ scp k8s-installer.tar.gz [email protected]:/opt
# Extract and inspect the installer package
$ tar -zxf /opt/k8s-installer.tar.gz -C /opt
$ ls -lh /opt/k8s-installer # list the package; it contains the following 4 items
total 337M
drwxr-xr-x 3 root root 4.0K Jun 16 21:00 docker-ce
-rw-r--r-- 1 root root 13K Jun 16 14:00 kube-flannel.yml
drwxr-xr-x 3 root root 4.0K Jun 15 15:19 registry
-rw------- 1 root root 337M Jun 16 10:24 registry-image.tar
Nodes to operate on: k8s-init
$ cat <<EOF > /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///opt/k8s-installer/docker-ce
gpgcheck=0
enabled=1
EOF
$ yum clean all && yum makecache
$ yum install -y httpd --disablerepo=* --enablerepo=local
httpd listens on port 80 by default; to avoid a port conflict, change it to 60081:
$ sed -i 's/Listen 80/Listen 60081/g' /etc/httpd/conf/httpd.conf
Copy the installer package into the service directory, which defaults to /var/www/html:
$ cp -r /opt/k8s-installer/docker-ce/ /var/www/html/
$ systemctl enable httpd && systemctl start httpd
Nodes to operate on: all nodes (k8s-init, k8s-master, k8s-slave)
- Configure the yum repo. If port 60081 was changed above, replace it with the port httpd actually uses.
$ cat <<EOF > /etc/yum.repos.d/local-http.repo
[local-http]
name=local-http
baseurl=http://10.0.129.84:60081/docker-ce
gpgcheck=0
enabled=1
EOF
$ yum clean all && yum makecache
$ mkdir /etc/docker
$ cat <<EOF > /etc/docker/daemon.json
{
"insecure-registries": [
"10.0.129.84:60080"
],
"storage-driver": "overlay2"
}
EOF
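A malformed daemon.json will keep docker from starting, so it is worth validating the JSON before installing docker. A sketch against a throwaway copy (the /tmp path is hypothetical; on CentOS 7, which ships python 2, use `python` in place of `python3`):

```shell
# Throwaway copy of the daemon.json content above (hypothetical path)
cat > /tmp/daemon.json.demo <<EOF
{
  "insecure-registries": [
    "10.0.129.84:60080"
  ],
  "storage-driver": "overlay2"
}
EOF

# json.tool exits non-zero on invalid JSON
python3 -m json.tool /tmp/daemon.json.demo > /dev/null && echo "valid JSON"
```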
$ yum install -y docker-ce docker-ce-cli containerd.io --disablerepo=* --enablerepo=local-http
$ systemctl enable docker && systemctl start docker
This registry stores the images of the components needed for the k8s deployment, such as kube-apiserver, kube-controller-manager, kube-scheduler, etcd, flannel, and coredns. It is deployed via docker run and by default exposes port 60080 on the machine.
Nodes to operate on: only the k8s-init node
- Load the images into local docker
$ docker load -i /opt/k8s-installer/registry-image.tar
$ docker images # verify the registry image was loaded
REPOSITORY TAG IMAGE ID CREATED SIZE
index.alauda.cn/alaudaorg/distribution latest 2aee66f2203d 2 years ago 347MB
Port 60080 is the registry's external service port. If you change it, also update the port configured under insecure-registries in /etc/docker/daemon.json on every node.
$ docker run -d --restart=always --name pkg-registry -p 60080:5000 -v /opt/k8s-installer/registry/:/var/lib/registry index.alauda.cn/alaudaorg/distribution:latest
Nodes to operate on: all master and slave nodes (k8s-master, k8s-slave)
$ yum install -y kubeadm kubectl kubelet --disablerepo=* --enablerepo=local-http
Nodes to operate on: all master and slave nodes (k8s-master, k8s-slave)
- Enable kubelet at boot
$ systemctl enable kubelet
- Write the kubelet systemd unit /etc/systemd/system/kubelet.service. Note: set --pod-infra-container-image to the actual image registry address (by default, the k8s-init machine's ip:60080). The quoted 'EOF' below keeps the shell from expanding the $KUBELET_* variables in the unit file.
$ cat <<'EOF' > /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
[Service]
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_INFRA_CONTAINER_IMAGE=--pod-infra-container-image=10.0.129.84:60080/k8s/pause:3.1"
ExecStart=/usr/bin/kubelet $KUBELET_SYSTEM_PODS_ARGS $KUBELET_INFRA_CONTAINER_IMAGE
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
Nodes to operate on: only the master node (k8s-master)
Two fields need to be changed:
- advertiseAddress: set to the internal ip of k8s-master
- imageRepository: set to the internal ip of k8s-init
$ cat <<EOF > /opt/kubeadm.conf
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.0.129.84
bindPort: 6443
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: 10.0.129.84:60080/k8s
kind: ClusterConfiguration
kubernetesVersion: v1.13.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
scheduler: {}
EOF
Nodes to operate on: only the master node (k8s-master)
# List the images that will be used; if all is well, you will get the following list
$ kubeadm config images list --config /opt/kubeadm.conf
10.0.129.84:60080/k8s/kube-apiserver:v1.13.3
10.0.129.84:60080/k8s/kube-controller-manager:v1.13.3
10.0.129.84:60080/k8s/kube-scheduler:v1.13.3
10.0.129.84:60080/k8s/kube-proxy:v1.13.3
10.0.129.84:60080/k8s/pause:3.1
10.0.129.84:60080/k8s/etcd:3.2.24
10.0.129.84:60080/k8s/coredns:1.2.6
# Pre-pull the images to the local machine
$ kubeadm config images pull --config /opt/kubeadm.conf
Nodes to operate on: only the master node (k8s-master)
$ kubeadm init --config /opt/kubeadm.conf
If initialization succeeds, it ends with a message like:
...
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.0.129.84:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:6bb7e2646f1f846efddf2525c012505b76831ff9453329d0203d010814783a51
Next, follow the instructions in the message above to configure kubectl client authentication:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
At this point, kubectl get nodes will show the node as NotReady, because the network plugin has not been deployed yet.
6. Add the slave node to the cluster
Nodes to operate on: all slave nodes (k8s-slave)
On each slave node, run the following command. It is printed in the kubeadm init success message; replace it with the command your own init actually printed.
$ kubeadm join 10.0.129.84:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:6bb7e2646f1f846efddf2525c012505b76831ff9453329d0203d010814783a51
Nodes to operate on: only the master node (k8s-master)
- Copy the kube-flannel.yml file into the /opt directory of the master node
$ cp /opt/k8s-installer/kube-flannel.yml /opt
⚠️ Note: if k8s-master and k8s-init are not the same machine, copy kube-flannel.yml from the k8s-init node to the master node's /opt directory remotely:
$ scp [email protected]:/opt/k8s-installer/kube-flannel.yml /opt
$ sed -i "s#quay.io/coreos#10.0.129.84:60080/k8s#g" /opt/kube-flannel.yml
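The sed above rewrites every quay.io/coreos image reference in kube-flannel.yml to point at the local registry. A quick preview of the substitution on a sample line (hypothetical /tmp file; the flannel image tag is illustrative):

```shell
echo 'image: quay.io/coreos/flannel:v0.11.0-amd64' > /tmp/flannel-image.demo
# Same substitution as above, run against the sample file
sed -i "s#quay.io/coreos#10.0.129.84:60080/k8s#g" /tmp/flannel-image.demo
cat /tmp/flannel-image.demo   # image: 10.0.129.84:60080/k8s/flannel:v0.11.0-amd64
```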
If podSubnet in the kubeadm init configuration was set to something other than 10.244.0.0/16, change the corresponding section of kube-flannel.yml shown below to match it; otherwise flannel will fail to start.
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
$ kubectl create -f /opt/kube-flannel.yml
Nodes to operate on: k8s-master
After deployment, the master node cannot schedule business pods by default. To let the master participate in pod scheduling, remove its taint:
$ kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-
Nodes to operate on: the master node (k8s-master)
$ kubectl get nodes # check whether all cluster nodes are Ready
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 22h v1.13.3
k8s-slave Ready <none> 22h v1.13.3
Create a test nginx service; replace 10.0.129.84 with the actual ip of the k8s-init node.
$ kubectl run test-nginx --image=10.0.129.84:60080/k8s/nginx
Check that the pod was created successfully, then access the pod ip to verify the service works:
$ kubectl get po -o wide |grep test-nginx
test-nginx-7d65ddddc9-lcg9z 1/1 Running 0 12s 10.244.1.3 k8s-slave
$ curl 10.244.1.3 # verify the service is reachable
...
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to
nginx.org.
Commercial support is available at
nginx.com.
Thank you for using nginx.
...