For small experiments I normally just use minikube, but a recent EFK deployment needs multiple nodes, so this post records the process of installing a multi-node Kubernetes cluster.
kubeadm is the official tool for quickly bootstrapping a Kubernetes cluster; here it is used to deploy a cluster with 1 master node and 2 worker nodes. This post follows https://blog.csdn.net/networken/article/details/84991940. Because my machines can reach the official sources directly, many of the steps below use the upstream addresses rather than the Aliyun mirrors; if yours cannot, follow the mirror addresses in the linked post instead.
Hostname | IP | Role | OS | Components | Spec
--- | --- | --- | --- | --- | ---
node1 | 10.53.5.94 | master | Ubuntu 16.04.1 | kube-apiserver kube-controller-manager kube-scheduler kube-proxy etcd coredns calico | 8 CPU cores, 16 GB RAM
node2 | 10.53.6.185 | node | Ubuntu 16.04.1 | kube-proxy calico | 8 CPU cores, 16 GB RAM
node3 | 10.53.7.37 | node | Ubuntu 16.04.1 | kube-proxy calico | 8 CPU cores, 16 GB RAM
Note: unless stated otherwise, the steps below must be performed on every host.
1. Basic setup
# set the hostname on each node (run only the matching line on each host)
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3
# add hosts entries so the nodes can resolve each other by name
cat >> /etc/hosts << EOF
10.53.5.94 node1
10.53.6.185 node2
10.53.7.37 node3
EOF
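Since a typo in /etc/hosts is painful to debug later, the append can be rehearsed in a scratch file first. A minimal sketch, where the mktemp path is a throwaway stand-in for the real /etc/hosts:

```shell
# rehearse the append against a scratch file instead of /etc/hosts
tmp_hosts=$(mktemp)
cat >> "$tmp_hosts" << EOF
10.53.5.94 node1
10.53.6.185 node2
10.53.7.37 node3
EOF
# each hostname should appear exactly once at end of line
count=$(grep -cE ' node[123]$' "$tmp_hosts")
echo "$count entries written"
```

Once the output looks right, point the same heredoc at the real file.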
# disable swap (kubelet refuses to run with swap enabled)
sed -i '/swap/d' /etc/fstab
swapoff -a
# confirm no swap is still active (the output should be empty)
swapon --show
# make sure time is synchronized across the nodes
sudo apt install -y chrony
systemctl enable --now chronyd
chronyc sources && timedatectl
2. Load the ipvs modules
kube-proxy supports two proxy modes, iptables and ipvs. To use ipvs mode, the required ipvs kernel modules must be loaded and the ipset tool installed before the cluster is initialized. Note that on Linux kernels 4.19 and later, nf_conntrack replaces nf_conntrack_ipv4. The standard module set for a pre-4.19 kernel (such as Ubuntu 16.04's) is:
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
# load the modules immediately and install the userspace tools
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do sudo modprobe $mod; done
sudo apt-get install -y ipset ipvsadm
3. Install Docker
Follow the official container-runtime guide: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
$ sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common gnupg2
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update && sudo apt-get install -y containerd.io=1.2.10-3 docker-ce=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) docker-ce-cli=5:19.03.4~3-0~ubuntu-$(lsb_release -cs)
$ sudo touch /etc/docker/daemon.json
$ sudo vi /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
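A malformed daemon.json will stop Docker from starting, so the file is worth validating before the restart. A minimal sketch that checks an identical copy written to a scratch path (mktemp) rather than the real /etc/docker/daemon.json:

```shell
# write the same content to a scratch file and validate it as strict JSON
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# python's json module rejects anything that is not well-formed JSON
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json OK"
```

Only after this prints OK does it make sense to restart the Docker daemon.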
4. Install kubeadm, kubelet, and kubectl
Official guide: https://kubernetes.io/docs/setup/independent/install-kubeadm/
$ sudo apt-get install -y iptables arptables ebtables
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# create the file with this single line as its content: deb https://apt.kubernetes.io/ kubernetes-xenial main
$ sudo vi /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
# configure the kernel parameters required for bridged traffic
$ sudo vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
$ sudo sysctl --system
1. Initialize the master node
官方参考:https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Run the following command on the master node to initialize it:
$ sudo kubeadm init --apiserver-advertise-address=10.53.5.94 --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=192.168.0.0/16
Flag notes: --apiserver-advertise-address is the address the API server advertises to the rest of the cluster; --image-repository pulls the control-plane images from the Aliyun mirror; --pod-network-cidr sets the pod network range and must match the network plugin deployed later (192.168.0.0/16 is calico's default).
Note: initialization takes a few minutes and begins with two warnings; they only mean kubeadm has no validator for those two config sections and can be safely ignored:
W0310 10:22:10.392906 23530 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0310 10:22:10.393012 23530 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
......
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.53.5.94:6443 --token 1wtkoi.6pp22vg8wyq1xm7v \
--discovery-token-ca-cert-hash sha256:e50885161cd99e82c19634b6be816be21e4e4525b8a9b19e923b5038b9291173
2. Configure kubectl
kubectl is the command-line tool for managing a Kubernetes cluster. Once the master has initialized, run the commands printed at the end of the init output so kubectl can reach it:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# verify (coredns stays Pending until a network plugin is deployed)
$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-9d85f5447-lmjbg 0/1 Pending 0 5m5s
kube-system coredns-9d85f5447-xjmhg 0/1 Pending 0 5m5s
kube-system etcd-node1 1/1 Running 0 5m21s
kube-system kube-apiserver-node1 1/1 Running 0 5m21s
kube-system kube-controller-manager-node1 1/1 Running 0 5m20s
kube-system kube-proxy-v2jw4 1/1 Running 0 5m5s
kube-system kube-scheduler-node1 1/1 Running 0 5m20s
3. Deploy a network plugin
Reference: https://github.com/containernetworking/cni
A pod network plugin is required so that pods can communicate with each other, and it must be deployed before any application; CoreDNS will not start until a network plugin is installed.
Official references:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
https://docs.projectcalico.org/v3.10/getting-started/kubernetes/
For calico to work, pass --pod-network-cidr=192.168.0.0/16 to kubeadm init, or edit calico.yaml to match your pod network.
kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
Note: to use the flannel plugin instead, kubeadm init must be given --pod-network-cidr=10.244.0.0/16.
4. Verify the network plugin
After the pod network is installed, confirm that coredns and all the other pods are Running and that the master node reports Ready:
$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-9d85f5447-lmjbg 1/1 Running 0 5m5s
kube-system coredns-9d85f5447-xjmhg 1/1 Running 0 5m5s
kube-system etcd-node1 1/1 Running 0 5m21s
kube-system kube-apiserver-node1 1/1 Running 0 5m21s
kube-system kube-controller-manager-node1 1/1 Running 0 5m20s
kube-system kube-proxy-v2jw4 1/1 Running 0 5m5s
kube-system kube-scheduler-node1 1/1 Running 0 5m20s
At this point the Kubernetes master node is fully deployed. If a single-node Kubernetes is all you need, it is ready to use now.
Run the command below on both worker nodes to register them with the cluster:
$ kubeadm join 10.53.5.94:6443 --token 1wtkoi.6pp22vg8wyq1xm7v \
--discovery-token-ca-cert-hash sha256:e50885161cd99e82c19634b6be816be21e4e4525b8a9b19e923b5038b9291173
# if the join command from kubeadm init was not saved, regenerate it with:
$ kubeadm token create --print-join-command
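The sha256 value in --discovery-token-ca-cert-hash is a digest of the cluster CA's DER-encoded public key, so it can also be recomputed on the master from /etc/kubernetes/pki/ca.crt at any time. A sketch of the standard openssl pipeline, run here against a throwaway self-signed certificate instead of the real CA:

```shell
# throwaway cert standing in for /etc/kubernetes/pki/ca.crt
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
# sha256 over the DER-encoded public key extracted from the cert
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```

Against the real ca.crt, the printed value matches the hash in the join command above.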
To switch kube-proxy to ipvs mode, modify its ConfigMap: find the mode field in config.conf, change it to mode: "ipvs", and save:
kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/g' | kubectl replace -f -
# or edit it by hand, then confirm the change
kubectl -n kube-system edit cm kube-proxy
kubectl -n kube-system get cm kube-proxy -o yaml | grep mode
mode: "ipvs"
# restart the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pods -l k8s-app=kube-proxy
# confirm that ipvs mode is active
$ kubectl -n kube-system logs -f -l k8s-app=kube-proxy | grep ipvs
I1026 04:11:46.474911 1 server_others.go:176] Using ipvs Proxier.
I1026 04:11:42.842141 1 server_others.go:176] Using ipvs Proxier.
I1026 04:11:46.198116 1 server_others.go:176] Using ipvs Proxier.
The Using ipvs Proxier lines in the logs confirm that ipvs mode is enabled.
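The sed substitution used above can be rehearsed on a local snippet before piping the live ConfigMap through it; a minimal sketch with a two-line stand-in for config.conf:

```shell
# stand-in for the kube-proxy config.conf section
cm=$(mktemp)
cat > "$cm" << 'EOF'
kind: KubeProxyConfiguration
mode: ""
EOF
# same substitution as the kubectl | sed | kubectl replace pipeline
patched=$(sed 's/mode: ""/mode: "ipvs"/g' "$cm" | grep '^mode')
echo "$patched"
```

This prints mode: "ipvs", which is exactly what the live pipeline writes back into the ConfigMap.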
1. Download the desired helm version from https://github.com/helm/helm/releases and copy it to the server:
tar zxvf helm-xxxxx-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin
helm version # check the helm client version
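The unpack-and-install steps can be rehearsed end to end with a dummy archive that mimics the helm tarball layout (a linux-amd64/ directory containing the helm binary). Everything below happens in a throwaway temp directory, and the fake binary just echoes a fixed string:

```shell
dir=$(mktemp -d)
cd "$dir"
# build a dummy tarball with the same linux-amd64/helm layout
mkdir linux-amd64
printf '#!/bin/sh\necho "demo helm binary"\n' > linux-amd64/helm
chmod +x linux-amd64/helm
tar zcf helm-demo-linux-amd64.tar.gz linux-amd64
rm -rf linux-amd64
# same unpack + install steps as above, into a scratch bin dir
tar zxf helm-demo-linux-amd64.tar.gz
mkdir -p "$dir/bin"
mv linux-amd64/helm "$dir/bin/"
out=$("$dir/bin/helm")
echo "$out"
```

With the real archive, the moved binary lands in /usr/local/bin and helm version works the same way.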
2. Create rbac-config.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
3. Then run kubectl create -f rbac-config.yaml
4. Once the steps above succeed, install tiller, matching the helm client version:
# Google registry
helm init --service-account tiller --upgrade -i gcr.io/kubernetes-helm/tiller:v2.11.0
# Aliyun registry
helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.11.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
The --stable-repo-url flag sets the repository that charts are pulled from; if unset, the official charts repository is used.
Note: on Kubernetes v1.16.0 and later you may hit Error: error installing: the server could not find the requested resource. This is because extensions/v1beta1 has been replaced by apps/v1 and tiller's manifest has not caught up; the 2.15 and 3.x releases should no longer have this problem, the ecosystem is just slow to adapt.
The workaround is to install with the following command:
helm init -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3 --stable-repo-url http://mirror.azure.cn/kubernetes/charts/ --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
To undo what kubeadm set up, first drain the node and make sure it is empty, then tear it down.
1. On the master node, run (substituting the node's name):
$ kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
$ kubectl delete node <node name>
2. Then, on the node being removed, reset kubeadm's installed state:
$ kubeadm reset -f
3. Run the cleanup commands on every node:
$ kubeadm reset
$ sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
$ sudo apt-get autoremove
$ sudo rm -rf ~/.kube
$ sudo rm -rf /etc/kubernetes
Because the installed 1.17.3 is newer than the API versions my es project depends on, I downgraded, pinning version 1.15.3-00:
$ sudo apt-get install -y kubelet=1.15.3-00 kubectl=1.15.3-00 kubeadm=1.15.3-00
Then repeat sections three through seven above, minding the versions; the docker version check will print a warning, but it does not cause problems.
Reference 1: https://blog.csdn.net/networken/article/details/84991940