K8S Cluster Environment Setup
Kubernetes is extremely popular these days, so I am recording my K8S learning notes here for future reference.
Environment Preparation
Server IP | Role | Hostname |
---|---|---|
192.168.2.152 | master | master |
192.168.2.182 | node1 | node1 |
Server specifications:
master: CentOS 7 virtual machine, 6 CPU cores, 4 GB RAM
node1: CentOS 7 virtual machine, 6 CPU cores, 5 GB RAM
Before installing the cluster, complete the following preparation steps.
1: Set the hostnames
[root@master ~]# hostnamectl set-hostname master
[root@node1 ~]# hostnamectl set-hostname node1
2: Configure /etc/hosts
[root@master ~]# vi /etc/hosts
192.168.2.152 master
[root@node1 ~]# vi /etc/hosts
192.168.2.182 node1
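For the two machines to resolve each other by name, both entries should normally be present in /etc/hosts on both servers, for example:
192.168.2.152 master
192.168.2.182 node1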
3: Synchronize the clocks
[root@master ~]# ntpdate cn.pool.ntp.org
[root@node1 ~]# ntpdate cn.pool.ntp.org
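ntpdate only adjusts the clock once. To keep the clocks synchronized across reboots you could instead enable chronyd, which ships with CentOS 7 (optional, a minimal alternative):
[root@master ~]# systemctl enable chronyd && systemctl start chronyd
[root@node1 ~]# systemctl enable chronyd && systemctl start chronyd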
4: Disable the firewall (or open only the required ports)
[root@master ~]# systemctl disable firewalld
[root@master ~]# systemctl stop firewalld
[root@node1 ~]# systemctl disable firewalld
[root@node1 ~]# systemctl stop firewalld
5: Disable SELinux so containers can access the host filesystem
[root@master ~]# setenforce 0
[root@node1 ~]# setenforce 0
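setenforce 0 only lasts until the next reboot. To make the change permanent, also set SELINUX=permissive in /etc/selinux/config, for example:
[root@master ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
[root@node1 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config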
With the preparation done, it is time for the main part.
6: Because of the network situation in China, configure a domestic yum mirror for Kubernetes
[root@master ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
7: Install kubeadm and related tools
[root@master ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
8: Install docker-ce
[root@master ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum install -y docker-ce
9: Start the docker and kubelet services and enable them at boot
[root@master ~]# systemctl enable docker && systemctl start docker
[root@master ~]# systemctl enable kubelet && systemctl start kubelet
10: Check that the docker and kubelet services started successfully
[root@master ~]# systemctl status docker
[root@master ~]# systemctl status kubelet
11: Generate the default initialization configuration with kubeadm
[root@master ~]# kubeadm config print init-defaults > init.default.yaml
[root@master ~]# cp init.default.yaml init-config.yaml
[root@master ~]# vi init-config.yaml
advertiseAddress: 192.168.2.152
imageRepository: docker.io/dustise
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  podSubnet: "192.168.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Notes: advertiseAddress must be changed to 192.168.2.152, the master's IP address; imageRepository is set to a registry reachable from China, here docker.io/dustise; podSubnet is changed to 192.168.0.0/16.
12: Modify the docker configuration so that images can be pulled from a domestic mirror
[root@master ~]# echo '{"registry-mirrors":["https://registry.docker-cn.com"]}' > /etc/docker/daemon.json
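Changes to /etc/docker/daemon.json only take effect after docker is restarted:
[root@master ~]# systemctl restart docker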
13: Pull the required images
[root@master ~]# kubeadm config images pull --config=init-config.yaml
Here init-config.yaml is the configuration file produced in step 11.
14: Install the master
[root@master ~]# kubeadm init --config=init-config.yaml
14.1: If the following warning appears, run the corresponding commands
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
Run:
[root@master ~]# vi /etc/docker/daemon.json
{"registry-mirrors":["https://registry.docker-cn.com"], "exec-opts": ["native.cgroupdriver=systemd"]}
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
14.2: If the following warning appears, run the corresponding commands
[WARNING Hostname]: hostname "master" could not be reached
[WARNING Hostname]: hostname "master": lookup master on 192.168.2.1:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
Run:
[root@master ~]# vim /etc/hosts
192.168.2.152 master
14.3: If the following error appears, run the corresponding command
[ERROR Swap]: running with swap on is not supported. Please disable swap
Run:
[root@master ~]# swapoff -a
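swapoff -a only disables swap until the next reboot; to keep it disabled, also comment out the swap entry in /etc/fstab, for example:
[root@master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab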
14.4: If the following error appears, run the corresponding command
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
Run:
[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
14.5: If the following error appears, run the corresponding command
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
Run:
[root@master ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
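Writing to /proc only lasts until reboot. A common way to persist both kernel settings is a sysctl drop-in file (a minimal sketch):
[root@master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master ~]# sysctl --system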
15: If the following output appears, the installation succeeded. Save this information, especially the kubeadm join command, for later use.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.152:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:ca762aaa8e777ee017c071e880cd442db3be677ced2368b2ff991744d949768c
16: As prompted, set up the kubeconfig so that kubectl can manage the cluster
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
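Since everything here runs as root, an alternative (optional) is to point kubectl at the admin kubeconfig through an environment variable instead of copying it:
[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf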
17: Verify the installation
[root@master ~]# kubectl get -n kube-system configmap
NAME DATA AGE
coredns 1 51m
extension-apiserver-authentication 6 52m
kube-proxy 2 51m
kubeadm-config 2 51m
kubelet-config-1.14 1 51m
At this point the master node is installed; the network plugin is covered below. Next, install node1.
18: Install node1
Repeat steps 6 through 9 on the node1 server.
19: Use kubeadm to create the join configuration file
[root@node1 ~]# kubeadm config print join-defaults > join-config.yaml
[root@node1 ~]# vi join-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.2.152:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node1
Here 192.168.2.152:6443 is the master's IP address and API server port; token is the token created on the master and is valid for 24 hours. If it has expired, create a new one on the master:
[root@master ~]# kubeadm token create
20: Run kubeadm join to add the node
[root@node1 ~]# kubeadm join --config=join-config.yaml
20.1: If you see the following error:
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://192.168.2.152:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: x509: certificate has expired or is not yet valid
Check whether the current token has expired. If it has not, repeat step 3 (time synchronization), since this error is usually caused by clock skew. Otherwise, create a new token:
[root@master ~]# kubeadm token create
To check whether a token has expired, use:
[root@master ~]# kubeadm token list
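With a fresh token you also need the matching discovery information; kubeadm can print a complete, ready-to-use join command in one step, which is a convenient alternative to editing join-config.yaml:
[root@master ~]# kubeadm token create --print-join-command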
21: Check the cluster nodes
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 112m v1.14.2
node1 NotReady 10s v1.14.2
Both nodes show NotReady because no network plugin has been installed yet.
22: Install a network plugin (see: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network)
Install the weave plugin
[root@master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
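You can watch the weave-net pods start up before moving on; this assumes the daemonset carries its usual name=weave-net label (press Ctrl+C to stop watching):
[root@master ~]# kubectl get pods -n kube-system -l name=weave-net -w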
23: Wait a moment, then check everything in the cluster
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6897bd7b5-bszk4 1/1 Running 0 3h4m
kube-system coredns-6897bd7b5-qxjdc 1/1 Running 0 3h4m
kube-system etcd-master 1/1 Running 0 3h3m
kube-system kube-apiserver-master 1/1 Running 0 3h3m
kube-system kube-controller-manager-master 1/1 Running 0 3h3m
kube-system kube-proxy-8255f 1/1 Running 0 3h4m
kube-system kube-proxy-9chv4 1/1 Running 0 72m
kube-system kube-scheduler-master 1/1 Running 0 3h3m
kube-system weave-net-89ln6 2/2 Running 0 63m
kube-system weave-net-wr7pb 2/2 Running 0 63m
This completes the cluster setup. The next post in this series, K8S Cluster Series Part 2: A Demo, will deploy a demo service as a small test.