If the material online feels like a chaotic mess and you don't know where to start, you've stumbled onto this article, and after reading it you still don't know where to start: please leave a comment. I'll fix it!
Master server
The IP addresses and tokens have been redacted.
[root@master-node ~]#
# Before initializing, query the nodes first to see what comes back
[root@master-node ~]# kubectl get nodes
W0429 15:32:30.852899 22961 loader.go:223] Config not found: /etc/kubernetes/admin.conf
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
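This error is expected at this point: kubectl has nothing to connect to because the cluster doesn't exist yet. Once kubeadm init succeeds, you either copy admin.conf into ~/.kube/config as its output suggests, or, when working as root, just point KUBECONFIG at it:
[root@master-node ~]# export KUBECONFIG=/etc/kubernetes/admin.conf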
# Initialize the cluster with kubeadm
[root@master-node ~]# kubeadm init
W0429 15:32:43.913093 22982 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Hostname]: hostname "master-node" could not be reached
[WARNING Hostname]: hostname "master-node": lookup master-node on 100.100.2.138:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master-node kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 {ip_address}]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master-node localhost] and IPs [{ip_address} 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master-node localhost] and IPs [{ip_address} 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0429 15:32:48.621391 22982 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0429 15:32:48.622307 22982 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.502334 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-node as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master-node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ***
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join {ip_addr}:{port} --token *** \
--discovery-token-ca-cert-hash sha256:******
[root@master-node ~]#
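A side note on the IsDockerSystemdCheck warning above: it is harmless for a toy cluster, but Docker can be switched to the systemd cgroup driver as the warning recommends. A minimal sketch, assuming Docker is managed by systemd (restarting it will briefly stop running containers):
[root@master-node ~]# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master-node ~]# systemctl restart docker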
[root@master-node ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
master-node   NotReady   master   4m16s   v1.18.2
# I forget exactly what I did here; I believe the node turned Ready once the network add-on was installed
[root@master-node ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
master-node   Ready    master   4h33m   v1.17.0
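By the way, if you want to see why a node is stuck at NotReady, kubectl describe usually spells it out; before a CNI plugin is installed, the Ready condition is typically False with a message saying the network plugin is not ready:
[root@master-node ~]# kubectl describe node master-node
# check the Conditions section of the output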
Node server
The node servers need the same services as the Master... I simply ran kubeadm reset on the Master server, saved it as an image, and created the new instances from that image.
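One thing to watch for: the bootstrap token printed by kubeadm init is only valid for 24 hours by default. If it has expired by the time you join a node, generate a fresh one on the master; with --print-join-command it even prints a ready-to-use join line:
[root@master-node ~]# kubeadm token create --print-join-command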
# Write the configuration for joining the cluster
[root@node ~]# cat join-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: master_ip_address:6443
    token: ***token***
    unsafeSkipCAVerification: true
  tlsBootstrapToken: ***token***
[root@node ~]#
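Note that the kubeadm.k8s.io/v1beta1 API version used here is already deprecated (the join log below warns about it). kubeadm can rewrite the file to the newer spec for you; the file names here are just placeholders:
[root@node ~]# kubeadm config migrate --old-config join-config.yaml --new-config join-config-new.yaml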
[root@node ~]# kubeadm join --config=join-config.yaml --v=5
W0430 16:00:55.109050 9774 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I0430 16:00:55.109111 9774 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
I0430 16:00:55.109125 9774 joinconfiguration.go:75] loading configuration from "join-config.yaml"
W0430 16:00:55.109334 9774 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0430 16:00:55.110067 9774 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I0430 16:00:55.110134 9774 preflight.go:90] [preflight] Running general checks
I0430 16:00:55.110181 9774 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0430 16:00:55.110217 9774 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0430 16:00:55.110224 9774 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0430 16:00:55.110231 9774 checks.go:102] validating the container runtime
I0430 16:00:55.164514 9774 checks.go:128] validating if the service is enabled and active
I0430 16:00:55.226154 9774 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0430 16:00:55.226202 9774 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0430 16:00:55.226230 9774 checks.go:649] validating whether swap is enabled or not
I0430 16:00:55.226258 9774 checks.go:376] validating the presence of executable ip
I0430 16:00:55.226294 9774 checks.go:376] validating the presence of executable iptables
I0430 16:00:55.226310 9774 checks.go:376] validating the presence of executable mount
I0430 16:00:55.226326 9774 checks.go:376] validating the presence of executable nsenter
I0430 16:00:55.226352 9774 checks.go:376] validating the presence of executable ebtables
I0430 16:00:55.226366 9774 checks.go:376] validating the presence of executable ethtool
I0430 16:00:55.226378 9774 checks.go:376] validating the presence of executable socat
I0430 16:00:55.226394 9774 checks.go:376] validating the presence of executable tc
I0430 16:00:55.226407 9774 checks.go:376] validating the presence of executable touch
I0430 16:00:55.226430 9774 checks.go:520] running all checks
I0430 16:00:55.286769 9774 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[WARNING Hostname]: hostname "node1" could not be reached
[WARNING Hostname]: hostname "node1": lookup node1 on 100.100.2.138:53: no such host
I0430 16:00:55.287674 9774 checks.go:618] validating kubelet version
I0430 16:00:55.343775 9774 checks.go:128] validating if the service is enabled and active
I0430 16:00:55.351944 9774 checks.go:201] validating availability of port 10250
I0430 16:00:55.352098 9774 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0430 16:00:55.352113 9774 checks.go:432] validating if the connectivity type is via proxy or direct
I0430 16:00:55.352156 9774 join.go:441] [preflight] Discovering cluster-info
I0430 16:00:55.352233 9774 token.go:188] [discovery] Trying to connect to API Server "master_ip_address:6443"
I0430 16:00:55.352829 9774 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://master_ip_address:6443"
I0430 16:00:55.365934 9774 token.go:103] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "master_ip_address:6443"
I0430 16:00:55.365953 9774 token.go:194] [discovery] Successfully established connection with API Server "master_ip_address:6443"
I0430 16:00:55.365979 9774 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0430 16:00:55.365991 9774 join.go:455] [preflight] Fetching init configuration
I0430 16:00:55.365996 9774 join.go:493] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0430 16:00:55.408691 9774 interface.go:400] Looking for default routes with IPv4 addresses
I0430 16:00:55.408704 9774 interface.go:405] Default route transits interface "eth0"
I0430 16:00:55.408792 9774 interface.go:208] Interface eth0 is up
I0430 16:00:55.408833 9774 interface.go:256] Interface "eth0" has 1 addresses :[node_ip_address/20].
I0430 16:00:55.408849 9774 interface.go:223] Checking addr node_ip_address/20.
I0430 16:00:55.408876 9774 interface.go:230] IP found node_ip_address
I0430 16:00:55.408890 9774 interface.go:262] Found valid IPv4 address node_ip_address for interface "eth0".
I0430 16:00:55.408898 9774 interface.go:411] Found active IP node_ip_address
I0430 16:00:55.408932 9774 preflight.go:101] [preflight] Running configuration dependant checks
I0430 16:00:55.408943 9774 controlplaneprepare.go:211] [download-certs] Skipping certs download
I0430 16:00:55.408958 9774 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0430 16:00:55.409892 9774 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0430 16:00:55.410529 9774 kubelet.go:133] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0430 16:00:57.086851 9774 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node
I0430 16:00:57.086889 9774 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
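Both machines logged a Hostname warning because master-node and node1 don't resolve in DNS. It doesn't block anything here, but if you want to silence it, add entries to /etc/hosts on every machine (the IPs below are the same placeholders used throughout this post):
[root@master-node ~]# cat >> /etc/hosts <<EOF
master_ip_address master-node
node_ip_address   node1
EOF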
Verify that the node has joined the cluster
[root@master-node ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
node          NotReady   <none>   68m     v1.17.0
master-node   NotReady   master   4h31m   v1.17.0
Two nodes are listed, so the join succeeded. Note that both are still NotReady; it's time to install a network add-on. I went with Calico:
[root@master-node ~]# kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
Then wait a moment and run kubectl get nodes again to see whether the nodes have turned Ready. In my case I ran kubeadm reset on the node and rejoined the cluster, and everything came out fine. With that, the cluster initialization is done.
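While you wait, you can watch the Calico and CoreDNS pods come up in the kube-system namespace; the nodes flip to Ready once the CNI pods are running (pod names and counts will differ on your cluster):
[root@master-node ~]# kubectl get pods -n kube-system -w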
[root@master-node ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
node          Ready    <none>   69m     v1.17.0
master-node   Ready    master   4h32m   v1.17.0
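As a final cosmetic touch: worker nodes show <none> under ROLES because the role is just a node label, and kubeadm only sets it on the control plane. If you want the worker to display a role too, set the label yourself (the value can be empty; only the key matters for display):
[root@master-node ~]# kubectl label node node node-role.kubernetes.io/worker=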