Installing Kubernetes with kubeadm, part 1: preparing the installation environment
Installing Kubernetes with kubeadm, part 2: installing Docker, kubeadm, kubelet, and kubectl
Run the command ip addr.
In this environment the 192.168.100.0/24 and 172.17.0.0/16 networks are already in use, so we pick a range that conflicts with neither, 172.20.0.0/16 (any other free range works too), as the value of the init command's pod-network-cidr parameter.
The chosen range must not overlap with any address already in use, or later steps are guaranteed to fail.
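Before committing to a range, it is worth double-checking that it really is free. A minimal sketch, assuming the iproute2 tools are available (as on most modern distributions):

```shell
# List the IPv4 addresses and routed networks already in use; the range you
# plan to pass as --pod-network-cidr (here 172.20.0.0/16) must not appear
# in, or overlap with, anything in either listing.
ip -4 addr show | grep -w inet
ip route | grep -v '^default'
```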
Some of the Kubernetes images cannot be reached from inside China, so they have to be fetched another way. Here we pull them from the Aliyun mirror (registry.cn-hangzhou.aliyuncs.com/google_containers) and then rename them with docker tag.
[root@master ~]# kubeadm config images list
W0517 05:00:48.849105 57199 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
[root@master ~]#
As shown above, running kubeadm config images list prints the images the cluster needs.
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
Rename them with docker tag, then remove the mirror-named originals with docker rmi:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2 k8s.gcr.io/kube-apiserver:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2 k8s.gcr.io/kube-controller-manager:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2 k8s.gcr.io/kube-scheduler:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2 k8s.gcr.io/kube-proxy:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
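The fourteen commands above can be collapsed into a single loop. This is just a convenience sketch of the same pull/tag/rmi sequence; the image list is assumed to match the output of kubeadm config images list for v1.18.2:

```shell
#!/bin/sh
# Pull each image from the Aliyun mirror, retag it under the k8s.gcr.io name
# that kubeadm expects, then drop the mirror-named tag.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.18.2 kube-controller-manager:v1.18.2 \
           kube-scheduler:v1.18.2 kube-proxy:v1.18.2 \
           pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
    docker pull "$MIRROR/$img"
    docker tag  "$MIRROR/$img" "k8s.gcr.io/$img"
    docker rmi  "$MIRROR/$img"
done
```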
Run on the master node:
kubeadm init --pod-network-cidr=172.20.0.0/16
When init finishes, it prints the commands for configuring kubectl access for the current user, plus a kubeadm join command for the worker nodes. Configure kubectl first:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Save the kubeadm join command that init printed; it will be run on the worker nodes later:
kubeadm join 192.168.100.100:6443 --token qttsee.jh2i26zofvr5g7qu \
    --discovery-token-ca-cert-hash sha256:a622dae274da5c61b6f926c2f2b0aa063dcd438ca4d1b88f026b12a5151c6e0c
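The token embedded in the join command expires after 24 hours by default. If it expires or the command is lost, it can be regenerated on the master at any time; the second snippet is the standard openssl recipe for recovering the discovery hash from the cluster CA certificate:

```shell
# Print a complete, fresh join command (new token included):
kubeadm token create --print-join-command

# Or recompute just the discovery hash from the cluster CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```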
Once this is done, check the node status with kubectl get nodes:
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   4m10s   v1.18.2
[root@master ~]#
As shown, the master node is NotReady because no network plugin has been installed yet. We use calico as the Kubernetes network plugin; there are other options as well, see the official documentation.
Run on the master node:
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml
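Instead of polling by hand, kubectl can block until the pods come up. A sketch, assuming every kube-system pod is expected to become Ready within five minutes:

```shell
# Wait until every pod in kube-system (calico included) reports Ready.
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s
```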
Wait a moment, then run:
[root@master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-789f6df884-svwnr   1/1     Running   0          5m15s
calico-node-v5lxx                          1/1     Running   0          5m15s
coredns-66bff467f8-6czgr                   1/1     Running   0          14m
coredns-66bff467f8-wf2t2                   1/1     Running   0          14m
etcd-master                                1/1     Running   0          14m
kube-apiserver-master                      1/1     Running   0          14m
kube-controller-manager-master             1/1     Running   0          14m
kube-proxy-zxpxc                           1/1     Running   0          14m
kube-scheduler-master                      1/1     Running   0          14m
[root@master ~]#
All the calico pods are now Running and Ready. Run kubectl get nodes again:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   16m   v1.18.2
[root@master ~]#
The master node is now Ready.
Run the kubeadm join command obtained above on the node1 and node2 VMs to add both nodes to the cluster.
Then run again:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   99m   v1.18.2
node1    Ready    <none>   12s   v1.18.2
node2    Ready    <none>   74m   v1.18.2
[root@master ~]#
node1 and node2 have been added to the cluster; a node that has only just joined may show NotReady for a short while before turning Ready.
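The ROLES column shows <none> for the workers because kubeadm only labels the control-plane node. The label is purely cosmetic, but if you want the column filled in, it can be added by hand (node names node1 and node2 assumed from this cluster):

```shell
# kubectl derives ROLES from labels with the node-role.kubernetes.io/ prefix.
kubectl label node node1 node-role.kubernetes.io/worker=
kubectl label node node2 node-role.kubernetes.io/worker=
```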
At this point the basic Kubernetes components are installed, and this is already a complete Kubernetes cluster. What comes later, such as kubernetes-dashboard, Prometheus, and Grafana, is essentially just applications deployed on top of the cluster. Their YAML manifests are fairly complex, and fully understanding them requires some Kubernetes fundamentals.