I've recently been reviewing Kubernetes and decided to build a simple k8s cluster by hand (this guide uses Kubernetes v1.23.3). It took quite a while and I stepped into plenty of pitfalls, but the environment did come up in the end. Follow the steps below and you should be able to reproduce it.
Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld
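To double-check that firewalld is really off (a quick sanity check, not strictly required):
# both should report inactive/disabled
systemctl is-active firewalld
systemctl is-enabled firewalld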
Turn off SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
getenforce
Set a hostname on each of the three machines (run the matching line on each one):
hostnamectl set-hostname worker
hostnamectl set-hostname master
hostnamectl set-hostname console
Then exit and log back in to see the new hostname take effect.
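If the machines can't resolve each other by hostname, it can also help to add the mappings to /etc/hosts on every node. A minimal sketch: the master and console IPs below are the ones used later in this guide, while the worker IP is a placeholder you should replace with your own:
# the worker IP below is a placeholder; substitute your worker's real address
cat >> /etc/hosts <<EOF
192.168.127.147 master
192.168.127.146 worker
192.168.127.148 console
EOF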
From step 4 onward, unless noted otherwise, the commands are meant to be run on both the worker and master nodes.
4. Install and configure Docker
Install on the worker and master nodes:
yum install -y yum-utils
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin
Start Docker:
systemctl start docker
systemctl enable docker
Test that Docker is working:
docker run hello-world
After the steps above, we still need to adjust Docker's configuration: set the cgroup driver to systemd in /etc/docker/daemon.json, then save and restart Docker (if the file doesn't exist, just create it at that path).
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
kubeadm also requires bridged traffic to be visible to iptables, so load the br_netfilter module and apply the matching sysctls:
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Next, turn off swap, which kubelet does not support by default:
sudo swapoff -a
sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
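To confirm swap is fully off (swapon --show should print nothing, and the Swap line in free should be all zeros):
swapon --show
free -h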
# Add the Kubernetes yum repo (the Aliyun mirror is used here; any Kubernetes repo works)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
# Install kubeadm, kubelet and kubectl, pinned to the cluster version
yum install -y kubeadm-1.23.3 kubelet-1.23.3 kubectl-1.23.3
systemctl enable kubelet
After the installation finishes, we can check that the versions are what we expect:
kubeadm version
kubectl version --short
Then, to keep the k8s environment stable, it's best to lock the versions of these three packages using yum-plugin-versionlock:
# Install the plugin
yum install -y yum-plugin-versionlock
# Lock the packages
yum versionlock add kubeadm kubectl kubelet
# Show the lock list
yum versionlock list
Here we can see which images the k8s cluster components need:
kubeadm config images list --kubernetes-version v1.23.3
Run the shell script below on both the worker and the master; it pulls each image from the Aliyun mirror, re-tags it with its official k8s.gcr.io name, and then removes the mirror tag:
repo=registry.aliyuncs.com/google_containers
for name in `kubeadm config images list --kubernetes-version v1.23.3`; do
    # strip the k8s.gcr.io/ (and coredns/) prefix to get the mirror image name
    src_name=${name#k8s.gcr.io/}
    src_name=${src_name#coredns/}
    docker pull $repo/$src_name
    docker tag $repo/$src_name $name
    docker rmi $repo/$src_name
done
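You can verify that all the re-tagged images are in place:
docker images | grep k8s.gcr.io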
Next, initialize the control plane. This step runs on the master node only:
kubeadm init \
--apiserver-advertise-address=192.168.127.147 \
--kubernetes-version v1.23.3 \
--pod-network-cidr=10.10.0.0/16
--pod-network-cidr sets the IP range for Pods inside the cluster
--apiserver-advertise-address sets the apiserver's address, pointing at the master VM's IP
--kubernetes-version sets the Kubernetes version
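If initialization fails partway through, you can wipe the half-initialized state with kubeadm reset and then rerun kubeadm init:
kubeadm reset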
Once that completes, follow the instructions printed in the output:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Since we're running as root, this simplifies to:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
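kubeadm's output also offers a root-only alternative; instead of copying the file, just point KUBECONFIG at the admin config (per shell, or in your profile):
export KUBECONFIG=/etc/kubernetes/admin.conf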
The output also contains a kubeadm join line. Worker nodes use it to join the cluster, and the token inside is only valid for 24 hours; once it expires, a new one has to be generated.
kubeadm join 192.168.127.147:6443 --token nb8ydn.uyu1elj2146qhjg5 --discovery-token-ca-cert-hash sha256:1257ab57475d4d643ac81864827e8465e41d77e9abf38cd91e06292bd38a46c8
If it has expired, regenerate the join command with:
kubeadm token create --print-join-command
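To see the bootstrap tokens that currently exist and when they expire, you can also run:
kubeadm token list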
The last step is installing the Flannel network plugin so that networking inside the cluster works properly.
You can find kube-flannel.yml in Flannel's GitHub repo (https://github.com/flannel-io/flannel/); change the Network field in net-conf.json to the Pod CIDR we set above (10.10.0.0/16). The modified file is included below.
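If you prefer to fetch and patch the manifest from the command line, something like the following should work, assuming the file still lives at this raw URL and still defaults to 10.244.0.0/16:
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# swap Flannel's default Pod network for the CIDR we passed to kubeadm init
sed -i 's#10.244.0.0/16#10.10.0.0/16#' kube-flannel.yml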
kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.10.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
Then install it into the cluster:
# Install the Flannel network plugin
kubectl apply -f kube-flannel.yml
# Watch with kubectl get node -w; after a little while the master flips from NotReady to Ready
[root@master k8s-install]# kubectl get node -w
NAME     STATUS   ROLES                  AGE    VERSION
master   Ready    control-plane,master   109m   v1.23.3
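You can also check that the Flannel DaemonSet pod itself came up (the namespace matches the manifest above):
kubectl get pod -n kube-flannel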
To save resources on the worker node, the only images we need there are kube-proxy, coredns, and pause; the rest can simply be deleted.
# Review the downloaded images
docker images
# Delete an image you no longer need
docker rmi <image>
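For example, the control-plane-only images can be dropped on the worker like this (tags are the ones kubeadm config images list prints for v1.23.3; double-check against your own docker images output):
docker rmi k8s.gcr.io/kube-apiserver:v1.23.3 \
    k8s.gcr.io/kube-controller-manager:v1.23.3 \
    k8s.gcr.io/kube-scheduler:v1.23.3 \
    k8s.gcr.io/etcd:3.5.1-0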
Then run the kubeadm join command from earlier:
kubeadm join 192.168.127.147:6443 --token nb8ydn.uyu1elj2146qhjg5 --discovery-token-ca-cert-hash sha256:1257ab57475d4d643ac81864827e8465e41d77e9abf38cd91e06292bd38a46c8
On the master node, we can use kubectl get node to see how many nodes are now in the cluster:
[root@master k8s-install]# kubectl get node
NAME     STATUS   ROLES                  AGE    VERSION
master   Ready    control-plane,master   109m   v1.23.3
worker   Ready    <none>                 70m    v1.23.3
Deploying the console node is even simpler: it only needs kubectl and a copy of the kubeconfig file, both of which can be copied from the master node with scp.
scp `which kubectl` [email protected]:~/
# create the .kube directory on the console node first: mkdir -p $HOME/.kube
scp ~/.kube/config [email protected]:~/.kube
Then, on the console node, put kubectl somewhere on the PATH, e.g. /usr/bin:
cp kubectl /usr/bin/
# Check the cluster status
[root@console ~]# kubectl get node
NAME     STATUS   ROLES                  AGE    VERSION
master   Ready    control-plane,master   117m   v1.23.3
worker   Ready    <none>                 78m    v1.23.3
At this point, congratulations~ our k8s cluster is fully set up!
If you liked this post, consider bookmarking it~