This document records the deployment steps for an experimental Kubernetes cluster.

The procedure is as follows:

Deploy the Kubernetes Master

【Note: this step is performed only on the master machine. The Pod network CIDR is 10.244.0.0/16, and the apiserver address points at the master machine's IP. By default kubeadm pulls its images from the official k8s.gcr.io registry, which is not reachable from mainland China, so --image-repository is used to point at the Alibaba Cloud mirror registry.】
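【For reference, the same settings can also be written into a kubeadm configuration file and passed with kubeadm init --config instead of individual flags. The sketch below assumes the kubeadm.k8s.io/v1beta2 config API that ships with v1.15; the file name kubeadm-config.yaml is arbitrary.】

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.20.195      # --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0              # --kubernetes-version
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.1.0.0/16            # --service-cidr
  podSubnet: 10.244.0.0/16              # --pod-network-cidr
EOF
# kubeadm init --config kubeadm-config.yaml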
[root@localhost master]# kubeadm init \
--apiserver-advertise-address=192.168.20.195 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
【The output is as follows:】
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.20.195 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.20.195 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.20.195]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.017006 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 082on1.73j8mktzjrwyn2ag
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[kubelet-check] Initial timeout of 40s passed.
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.20.195:6443 --token dl2o54.j4oluk83qz1oq3jt \
--discovery-token-ca-cert-hash sha256:dcaa27862e75c5fef5d0c30bb0bfc2f97b1bec1211f2813e1e27975e63555131
[root@localhost master]#
【Record the kubeadm join information here, and set up access to the Kubernetes cluster as instructed by the output:
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.20.195:6443 --token dl2o54.j4oluk83qz1oq3jt \
--discovery-token-ca-cert-hash sha256:dcaa27862e75c5fef5d0c30bb0bfc2f97b1bec1211f2813e1e27975e63555131 】
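【The bootstrap token printed above is only valid for 24 hours by default. If it expires before a node has joined, a fresh join command can be generated on the master at any time with standard kubeadm/openssl commands:】

# Print a fresh "kubeadm join ..." line with a new token
kubeadm token create --print-join-command
# Recompute the CA certificate hash if it is ever needed separately
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'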
[root@localhost master]# mkdir -p $HOME/.kube
[root@localhost master]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@localhost master]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@localhost master]#
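【The preflight check above warned that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended. The cluster works either way, but one common way to switch the driver (a sketch; it assumes Docker is configured through /etc/docker/daemon.json and that any existing settings in that file are merged rather than overwritten) is:】

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
# Ideally this is done on every node before running kubeadm init / kubeadm join.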
##########################################################

Add the Kubernetes Nodes

【This step is performed on the work0 machine and on the work1 machine respectively.】
[root@kubernetes-node00 work0]# kubeadm join 192.168.20.195:6443 --token dl2o54.j4oluk83qz1oq3jt \
--discovery-token-ca-cert-hash sha256:dcaa27862e75c5fef5d0c30bb0bfc2f97b1bec1211f2813e1e27975e63555131
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@kubernetes-node00 work0]#


[root@kubernetes-node01 work1]# kubeadm join 192.168.20.195:6443 --token dl2o54.j4oluk83qz1oq3jt \
--discovery-token-ca-cert-hash sha256:dcaa27862e75c5fef5d0c30bb0bfc2f97b1bec1211f2813e1e27975e63555131
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@kubernetes-node01 work1]#
########################################################

Install the flannel network plugin

【Weave could be installed instead of flannel, if preferred.】
【The following steps are performed only on the master machine.】
[root@kubernetes-master master]# yum install -y flannel
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
base | 3.6 kB 00:00:00
extras | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
Resolving Dependencies
.............................................................................................................
Complete!
[root@kubernetes-master master]# systemctl daemon-reload && systemctl enable flanneld && systemctl start flanneld
[root@kubernetes-master master]# systemctl status flanneld
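【The status output is not reproduced here. If flanneld fails to start, its recent logs can be inspected with standard systemd tooling:】

journalctl -u flanneld -n 20 --no-pager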
[root@kubernetes-master master]# docker image ls
REPOSITORY                                                         TAG             IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy                 v1.15.0         d235b23c3570   10 months ago   82.4MB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.15.0         201c7a840312   10 months ago   207MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.15.0         8328bb49b652   10 months ago   159MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.15.0         2d3813851e87   10 months ago   81.1MB
registry.aliyuncs.com/google_containers/coredns                    1.3.1           eb516548c180   15 months ago   40.3MB
registry.aliyuncs.com/google_containers/etcd                       3.3.10          2c4adeb21b4f   16 months ago   258MB
registry.aliyuncs.com/google_containers/pause                      3.1             da86e6ba6ca1   2 years ago     742kB
[root@kubernetes-master master]# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
--2020-04-15 02:03:00-- https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.76.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.76.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10599 (10K) [text/plain]
Saving to: ‘kube-flannel.yml’

100%[=========================================================================================================>] 10,599 12.1KB/s in 0.9s

2020-04-15 02:03:02 (12.1 KB/s) - ‘kube-flannel.yml’ saved [10599/10599]

[root@kubernetes-master master]# ls
Desktop Documents Downloads kube-flannel.yml Music Pictures Public ssl Templates Videos
[root@kubernetes-master master]# vim kube-flannel.yml
[root@kubernetes-master master]# cat -n kube-flannel.yml|grep lizhenliang/flannel:v0.11.0-amd64
106 image: lizhenliang/flannel:v0.11.0-amd64
120 image: lizhenliang/flannel:v0.11.0-amd64
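【The vim step above swaps the flannel image in kube-flannel.yml for the lizhenliang mirror, because the upstream image is hosted on quay.io and can be slow or unreachable from mainland China. Assuming the upstream reference in this revision of the manifest is quay.io/coreos/flannel:v0.11.0-amd64, the same edit can be done non-interactively:】

sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yml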
[root@kubernetes-master master]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@kubernetes-master master]#
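【Before checking processes, the flannel rollout can also be verified through the API. The selector below assumes the pods carry the app=flannel label, as they do in the upstream kube-flannel.yml:】

kubectl -n kube-system get daemonset -l app=flannel
kubectl -n kube-system get pods -l app=flannel -o wide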
[root@kubernetes-master master]# ps -ef|grep flannel
root 19710 19692 0 02:05 ? 00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 24635 1 0 02:16 ? 00:00:00 /usr/bin/flanneld -etcd-endpoints=http://127.0.0.1:2379 -etcd-prefix=/atomic.io/network
root 24883 11529 0 02:16 pts/0 00:00:00 grep --color=auto flannel
[root@kubernetes-master master]# kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
kubernetes-master   Ready    master   26m   v1.15.0
kubernetes-node00   Ready    <none>   21m   v1.15.0
kubernetes-node01   Ready    <none>   21m   v1.15.0
[root@kubernetes-master master]# kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@kubernetes-master master]#
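【In the node list above the two workers show no ROLES value. That is purely cosmetic, but a role can be added if desired, since kubectl derives the column from the node-role.kubernetes.io/<role> label:】

kubectl label node kubernetes-node00 node-role.kubernetes.io/worker=
kubectl label node kubernetes-node01 node-role.kubernetes.io/worker=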
[root@kubernetes-master master]# docker image ls
REPOSITORY                                                         TAG             IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy                 v1.15.0         d235b23c3570   10 months ago   82.4MB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.15.0         201c7a840312   10 months ago   207MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.15.0         8328bb49b652   10 months ago   159MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.15.0         2d3813851e87   10 months ago   81.1MB
lizhenliang/flannel                                                v0.11.0-amd64   ff281650a721   14 months ago   52.6MB
registry.aliyuncs.com/google_containers/coredns                    1.3.1           eb516548c180   15 months ago   40.3MB
registry.aliyuncs.com/google_containers/etcd                       3.3.10          2c4adeb21b4f   16 months ago   258MB
registry.aliyuncs.com/google_containers/pause                      3.1             da86e6ba6ca1   2 years ago     742kB
[root@kubernetes-master master]# kubectl get pod -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-rvhmm                     1/1     Running   0          33m
coredns-bccdc95cf-s266g                     1/1     Running   0          33m
etcd-kubernetes-master                      1/1     Running   0          32m
kube-apiserver-kubernetes-master            1/1     Running   0          32m
kube-controller-manager-kubernetes-master   1/1     Running   0          32m
kube-flannel-ds-amd64-58dg4                 1/1     Running   0          19m
kube-flannel-ds-amd64-8msgr                 1/1     Running   0          19m
kube-flannel-ds-amd64-xxc8b                 1/1     Running   0          19m
kube-proxy-28xmf                            1/1     Running   0          33m
kube-proxy-dc5nr                            1/1     Running   0          28m
kube-proxy-qtx5j                            1/1     Running   0          28m
kube-scheduler-kubernetes-master            1/1     Running   0          32m
[root@kubernetes-master master]#
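【With every kube-system pod Running, a quick smoke test is to deploy and expose a throwaway workload; nginx is used here only as an example image:】

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide
# curl http://<any-node-IP>:<NodePort shown above> should return the nginx welcome page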

Meng Bo, 2020-04-11

Contact: WeChat 1807479153, QQ 1807479153