Hardware preparation
[The /etc/hosts content is identical on both machines]
[root@kuber-node1 /]# cat /etc/hosts
127.0.0.1 localhost
10.26.3.182 kuber-node1
10.26.3.184 kuber-master
Preliminary setup
1. Disable the firewall
2. Disable SELinux
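On CentOS 7, steps 1 and 2 are typically done as follows (setenforce switches SELinux to permissive mode immediately; the sed makes the change stick across reboots):
# Step 1: stop and disable firewalld
systemctl stop firewalld && systemctl disable firewalld
# Step 2: set SELinux to permissive now and on reboot
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config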
3. Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run the following commands to apply the changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
4. Prerequisites for enabling IPVS in kube-proxy [run on both master and node]
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
5. Install Docker [run on both master and node]
Add the Docker yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum install -y --setopt=obsoletes=0 docker-ce-18.06.1.ce-3.el7
systemctl start docker
systemctl enable docker
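Optionally, set Docker's cgroup driver to systemd, which is what kubeadm recommends (otherwise the join step later warns about the cgroupfs driver, as seen in the logs further below). A minimal daemon.json sketch, assuming no other daemon options are in use:
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker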
Installation
Deploy Kubernetes with kubeadm [run on both master and node]
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm, and kubectl [run on both master and node]
yum makecache fast
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet.service
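kubeadm's preflight checks also require swap to be disabled; if your hosts have swap on, a common way to turn it off now and across reboots (the sed pattern assumes a standard /etc/fstab swap entry):
swapoff -a
# Comment out any swap entry so it stays off after reboot
sed -i '/ swap / s/^/#/' /etc/fstab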
Master node initialization
Initialization needs to pull images from k8s.gcr.io, which is normally unreachable without a proxy, so you can use the script below to pull the images from an Aliyun mirror instead.
echo ""
echo "=========================================================="
echo "Pull Kubernetes v1.14.1 Images from aliyuncs.com ......"
echo "=========================================================="
echo ""
MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings
## Pull the images
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.14.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.14.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.14.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.14.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10
docker pull ${MY_REGISTRY}/k8s-gcr-io-pause:3.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1
## Retag them to the k8s.gcr.io names
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
echo ""
echo "=========================================================="
echo "Pull Kubernetes v1.14.1 Images FINISHED."
echo "into registry.cn-hangzhou.aliyuncs.com/openthings, "
echo " by openthings@https://my.oschina.net/u/2306127."
echo "=========================================================="
echo ""
Initialize the master node (--pod-network-cidr=10.244.0.0/16 matches the default network of flannel, which we install later):
kubeadm init --kubernetes-version=v1.14.1 --apiserver-advertise-address=10.26.3.184 --pod-network-cidr=10.244.0.0/16
Cluster initialization prints output like the following; save it to a file.
……………………………………
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.26.3.218 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.26.3.218 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.26.3.218]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
………………………………
Your Kubernetes control-plane has initialized successfully!
**To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at**:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.26.3.184:6443
……………………………………………………
Note the bold part of the log above: it tells you what to do before you can use the cluster. Since we are working as root here, run the following commands in full:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=~/.kube/config" >> ~/.bashrc
source ~/.bashrc
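Optional: enable kubectl bash completion to make the checks below easier to type (requires the bash-completion package):
yum install -y bash-completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc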
After running those commands, check the cluster status:
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
If every component reports Healthy, the control plane is fine.
Install the flannel network
You can use the kube-flannel.yml from the following Baidu cloud drive:
链接:https://pan.baidu.com/s/1qqNuWOEeNu_k3HDacNJqcw
Extraction code: b0dh
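If the machine can reach GitHub directly, the same manifest can also be fetched straight from the flannel repository (this URL reflects flannel's documentation layout at the time of this version; verify it still resolves before relying on it):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml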
After downloading kube-flannel.yml, let's look at the current pod situation.
You will find the coredns pods are Pending, because the flannel network is not installed yet.
Install the network with the following command:
kubectl apply -f kube-flannel.yml
Check the pod status again.
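A typical way to run these checks (kube-system is the namespace where coredns and flannel live):
kubectl get pods -n kube-system -o wide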
In this test environment, the master also serves as a worker node and takes workload.
[root@master k8s]# kubectl describe node master | grep Taint
Taints: node-role.kubernetes.io/master:NoSchedule
Remove the taint:
kubectl taint nodes master node-role.kubernetes.io/master-
node "master" untainted
The master can now take workload as a worker node too.
Add a worker node to the cluster
Now let's add kuber-node1.
On kuber-node1, run the following command:
[root@kuber-node1 ~]# kubeadm join 10.26.3.184:6443 --token xzdulx.2816y0sgsjct7hpx \
> --discovery-token-ca-cert-hash sha256:b4dfcd585813b37a0e00bb4c06700936bd1937a2174df13564f1d33d2a0d99f3
It prints a stream of logs:
……………………
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
- Certificate signing request was sent to apiserver and a response was received.
- The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
…………………………………………………………………………………………
Back on the master node, run:
[root@master k8s]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
kuber-node1   NotReady   <none>   15m     v1.14.1
master        Ready      master   3h14m   v1.14.1
Fifteen minutes have passed and kuber-node1 is still NotReady. Something is wrong.
After kubeadm join, the node needs to pull images from the registry, and this node has no proxy configured. So we reuse the script mentioned above to pull the images locally from the Aliyun mirror first: copy the script over, make it executable, and run the download.
The whole process takes about 10 minutes.
Check the running containers on the node.
If you see the following containers running on the node, everything is normal.
[root@kuber-node1 k8s]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e2c8291c97d7 ff281650a721 "/opt/bin/flanneld -…" 2 minutes ago Up 2 minutes k8s_kube-flannel_kube-flannel-ds-amd64-2x92f_kube-system_aa1a36fc-600b-11e9-a4c2-000c29f28dd5_0
efc79ff45ebc 20a2d7035165 "/usr/local/bin/kube…" 5 minutes ago Up 5 minutes k8s_kube-proxy_kube-proxy-hxlgp_kube-system_aa1a43d3-600b-11e9-a4c2-000c29f28dd5_0
031f4419d1b2 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-proxy-hxlgp_kube-system_aa1a43d3-600b-11e9-a4c2-000c29f28dd5_0
ed0900b33572 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-flannel-ds-amd64-2x92f_kube-system_aa1a36fc-600b-11e9-a4c2-000c29f28dd5_0
About 10 minutes after the containers start, go back to the master and check node status again:
[root@master k8s]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
kuber-node1   Ready    <none>   28m     v1.14.1
master        Ready    master   3h26m   v1.14.1
That's more like it.
Remove a node from the cluster
To remove the node kuber-node1 from the cluster, run the following commands.
On the master node:
kubectl drain kuber-node1 --delete-local-data --force --ignore-daemonsets
kubectl delete node kuber-node1
On kuber-node1:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
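kubeadm reset itself warns that it does not clean up iptables or IPVS rules; to wipe those as well (optional, and ipvsadm must be installed for the last command):
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear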
Test the cluster
On the master node, run the following commands:
[root@master k8s]# kubectl create deployment nginx --image=nginx:alpine
deployment.apps/nginx created
[root@master k8s]# kubectl scale deployment nginx --replicas=3
deployment.extensions/nginx scaled
This creates 3 pods.
Check the pod status with:
[root@master k8s]# kubectl get pod -o wide
NAME                    READY   STATUS              RESTARTS   AGE   IP       NODE          NOMINATED NODE   READINESS GATES
nginx-77595c695-4t7b2   0/1     ContainerCreating   0          7s    <none>   kuber-node1   <none>           <none>
nginx-77595c695-qd248   0/1     ContainerCreating   0          7s    <none>   master        <none>           <none>
nginx-77595c695-t2jdq   0/1     ContainerCreating   0          14s   <none>   kuber-node1   <none>           <none>
Pods were scheduled to both master and kuber-node1; their status is ContainerCreating because the nginx image is still being pulled. After a while, check the pod status again:
[root@master k8s]# kubectl get pod -o wide
NAME                    READY   STATUS              RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
nginx-77595c695-4t7b2   1/1     Running             0          4m23s   10.244.1.4   kuber-node1   <none>           <none>
nginx-77595c695-qd248   0/1     ContainerCreating   0          4m23s   <none>       master        <none>           <none>
nginx-77595c695-t2jdq   1/1     Running
The 2 pods placed on kuber-node1 are Running, so they have succeeded. The one on the master is still being created; we can inspect its progress:
[root@master k8s]# kubectl describe pod nginx-77595c695-qd248
……………………………………
Restart Count: 0
Environment:
Mounts:
………………………………………………………………
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m3s default-scheduler Successfully assigned default/nginx-77595c695-qd248 to master
Normal Pulling 5m3s kubelet, master Pulling image "nginx:alpine"
The last line shows the image is still being pulled, so we leave it alone for now.
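To further verify that the pods actually serve traffic, you could expose the deployment as a NodePort service and curl it once all pods are Running (the NodePort is assigned dynamically, so check it with kubectl get svc; the placeholder below must be replaced with the real port):
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx    # note the assigned NodePort, e.g. 80:3xxxx/TCP
curl http://10.26.3.184:<NodePort>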
Add another new node
Preparation before adding: the new node must complete all of the node setup covered above, e.g. the hostnames added to /etc/hosts on the master, the other nodes, and itself; the configuration files; the image-pull script; and so on.
1. If a long time passes before the new node is added, the join token from the master will have expired (kubeadm tokens are valid for 24 hours by default). Get a new one with:
[root@master k8s]# kubeadm token create --print-join-command
kubeadm join 10.26.3.184:6443 --token 25bapi.ip4h9l4xipogzqu7 --discovery-token-ca-cert-hash sha256:b4dfcd585813b37a0e00bb4c06700936bd1937a2174df13564f1d33d2a0d99f3
2. On the node to be added to the cluster, run the command printed above:
[root@kuber-node2 ~]# kubeadm join 10.26.3.184:6443 --token 25bapi.ip4h9l4xipogzqu7 --discovery-token-ca-cert-hash sha256:b4dfcd585813b37a0e00bb4c06700936bd1937a2174df13564f1d33d2a0d99f3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
3. After a while, check the nodes on the master:
[root@master k8s]# kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
kuber-node1   Ready    <none>   9d    v1.14.1
kuber-node2   Ready    <none>   10m   v1.14.1
master        Ready    master   9d    v1.14.1
kuber-node2 is the newly added node.