1. Describe the relationship between Docker and Kubernetes (k8s).
Docker is an open-source application container engine: it builds images and runs individual containers.
Kubernetes (k8s) is an open-source container cluster management system that automates the deployment, scaling, and maintenance of containerized applications across a cluster; on each node it relies on a container runtime such as Docker to actually run the containers.
2. Why use Kubernetes, and what can it do?
Kubernetes provides:
Service discovery and load balancing
Storage orchestration
Automated rollouts and rollbacks
Automatic bin packing
Self-healing
Secret and configuration management
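As a minimal sketch of several of these features (the names and image below are illustrative assumptions): a Deployment gives you automated rollouts/rollbacks and self-healing for its replicas, and a Service in front of it gives you service discovery and load balancing.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # the Deployment controller keeps 3 Pods running (self-healing)
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21     # changing this image triggers a rolling update (rollout/rollback)
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                  # the Service load-balances across all Pods carrying this label
  ports:
  - port: 80
    targetPort: 80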
3. What are the Kubernetes components, and what does each one do?
Kubernetes components
Control Plane Components
The control plane components make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events (for example, starting a new Pod when a Deployment's replicas field is not satisfied).
Control plane components can run on any node in the cluster. For simplicity, however, setup scripts typically start all control plane components on the same machine and do not run user containers on that machine. See "Creating Highly Available Clusters with kubeadm" for an example of a multi-VM control plane setup.
kube-apiserver
The API server is the control plane component that exposes the Kubernetes API; it is the front end of the Kubernetes control plane.
The main implementation of the Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, by deploying more instances. You can run several instances of kube-apiserver and balance traffic between them.
etcd
etcd is a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data.
Your Kubernetes cluster's etcd database should have a backup plan.
For more in-depth information on etcd, see the etcd documentation.
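Since all cluster data lives in etcd, the backup plan usually means periodic etcd snapshots. A minimal sketch with etcdctl, assuming a kubeadm-style stacked etcd whose client certificates sit under /etc/kubernetes/pki/etcd (the paths and snapshot file name are assumptions):

ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
# restore later with: etcdctl snapshot restore /var/backups/etcd-snapshot.db --data-dir=<new data dir>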
kube-scheduler
A control plane component that watches for newly created Pods with no assigned node and selects a node for them to run on.
Factors taken into account in scheduling decisions include the resource requirements of individual Pods and Pod sets, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
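As a small illustration of the affinity constraints mentioned above, a Pod can require a node label before kube-scheduler will place it (the disktype=ssd label here is a hypothetical example):

apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard scheduling constraint
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
  containers:
  - name: app
    image: nginx:1.21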
kube-controller-manager
The control plane component that runs controller processes.
Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run as a single process.
These controllers include:
Node controller: notices and responds when nodes go down.
Job controller: watches Job objects, which represent one-off tasks, and creates Pods to run those tasks to completion (see the example Job manifest after this list).
Endpoints controller: populates Endpoints objects (that is, joins Services and Pods).
Service Account & Token controllers: create default service accounts and API access tokens for new namespaces.
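For example, the Job controller's run-to-completion behavior can be seen with a minimal Job manifest (the name, image, and command are illustrative); the controller creates a Pod for the Job and, within backoffLimit, keeps recreating it until it exits successfully:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  backoffLimit: 3            # retry failed Pods at most 3 times
  template:
    spec:
      restartPolicy: Never   # a Job Pod must use Never or OnFailure
      containers:
      - name: pi
        image: perl:5.34
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(200)"]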
cloud-controller-manager
The cloud controller manager is a control plane component that embeds cloud-specific control logic. It lets you link your cluster to your cloud provider's API and separates the components that interact with that cloud platform from the components that only interact with your cluster.
cloud-controller-manager runs only control loops that are specific to your cloud provider. If you run Kubernetes on your own premises, or in a learning environment on your own machine, the cluster does not need a cloud controller manager.
Like kube-controller-manager, cloud-controller-manager combines several logically independent control loops into a single binary that runs as a single process. You can scale it horizontally (run more than one replica) to improve performance or fault tolerance.
The following controllers have dependencies on the cloud provider:
Node controller: checks the cloud provider to determine whether a node has been deleted in the cloud after it stops responding.
Route controller: sets up routes in the underlying cloud infrastructure.
Service controller: creates, updates, and deletes cloud provider load balancers.
Node Components
Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.
kubelet
An agent that runs on every node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs, provided through various mechanisms, and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.
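One of those "various mechanisms" is static Pod manifests: besides watching the API server, the kubelet reads manifests from its staticPodPath (on kubeadm clusters this is /etc/kubernetes/manifests, the same directory the init output later in this document writes the control plane manifests to) and runs them directly. A minimal sketch, with the file name and image as assumptions:

# /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx:1.21
    ports:
    - containerPort: 80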
kube-proxy
kube-proxy is a network proxy that runs on every node in the cluster and implements part of the Kubernetes Service concept.
kube-proxy maintains network rules on the node. These rules allow network sessions inside or outside the cluster to communicate with Pods.
If the operating system provides a packet filtering layer and it is available, kube-proxy uses it to implement the rules; otherwise, kube-proxy forwards the traffic itself.
Container Runtime
The container runtime is the software responsible for running containers.
Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
Addons
Addons use Kubernetes resources (DaemonSet, Deployment, and so on) to implement cluster features. Because these provide cluster-level functionality, namespaced resources for addons belong to the kube-system namespace.
A few of the many addons are described below; for a full list of available addons, see Addons.
DNS
Although the other addons are not strictly required, almost every Kubernetes cluster should have cluster DNS, because many examples rely on it.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, that serves DNS records for Kubernetes Services.
Containers started by Kubernetes automatically include this DNS server in their DNS search list.
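For example, every Service gets a DNS name of the form <service>.<namespace>.svc.<cluster-domain>. A quick way to check this from inside the cluster is a throwaway Pod (the busybox image tag is an assumption):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
# should resolve to the ClusterIP of the kubernetes Service (10.1.0.1 in the cluster built later in this document)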
Web UI (Dashboard)
Dashboard is a general-purpose, web-based user interface for Kubernetes clusters. It lets users manage and troubleshoot both the applications running in the cluster and the cluster itself.
Container Resource Monitoring
Container resource monitoring records common time-series metrics about containers in a central database and provides an interface for browsing that data.
Cluster-level Logging
A cluster-level logging mechanism saves container logs to a central log store that offers search and browsing interfaces.
4. What is a Pod, and what is the relationship between a Pod and a container?
A Pod is the smallest unit that Kubernetes schedules. One Pod can contain one or more containers, so a Pod can be thought of as a group of containers. A Pod behaves like a logical host: each Pod has its own IP address, which you can see with kubectl describe (or kubectl get pod -o wide).
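A minimal sketch of a Pod that groups two containers (the names and images are illustrative); both containers share the Pod's network namespace, and therefore the same Pod IP, and can share volumes:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx:1.21                          # main container
  - name: log-agent                            # sidecar container in the same Pod
    image: busybox:1.28
    command: ["sh", "-c", "tail -f /dev/null"]

kubectl get pod web-with-sidecar -o wide       # one Pod IP, shared by both containers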
5. Briefly describe the Pod creation flow in Kubernetes.
The Kubernetes workflow for creating a Pod:
Step 1
kubectl sends a create-Pod request to the Kubernetes API server (that is, we run a kubectl create/run command).
Step 2
When the API server receives the request, it does not create the Pod directly; it builds the Pod object from the submitted spec (the YAML with the creation information).
Step 3
The API server writes this object to etcd. At this point only a record has been added to etcd; nothing has actually been started.
Step 4
The scheduler watches the API server (a notification-style mechanism).
It first checks: is pod.spec.nodeName empty?
If it is empty, the Pod is new and still needs to be scheduled, so the scheduler runs its scheduling algorithm and picks the most suitable ("least busy") node.
It then records the decision by updating the Pod binding: pod.spec.nodeName = nodeA (a concrete node).
Note: all of this state is, again, persisted to etcd through the API server.
Step 5
The kubelet on each node watches the API server (whose records are stored in etcd) and notices that a Pod has been bound to a node;
if the bound node matches its own node (that is, the scheduler assigned this Pod to it),
it calls the container runtime API on that node (for example, the Docker API) to create the containers.
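This flow can be observed with any ordinary Pod (the nginx image is just an example):

kubectl run nginx --image=nginx:1.21    # step 1: kubectl sends the request to the API server
kubectl get pod nginx -o wide           # the NODE column shows the scheduler's decision (steps 3-4)
kubectl describe pod nginx              # Events: Scheduled -> Pulling -> Created -> Started (step 5, kubelet)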
6. Summarize the steps for installing a Kubernetes cluster with kubeadm.
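A brief outline, following the full walkthrough under question 9 below (versions and addresses are the ones used there):
1. Prepare every node: hostnames/hosts entries, disable swap, configure kernel and firewall prerequisites, install and start Docker, install kubeadm, kubelet, and kubectl.
2. On the master: run kubeadm init with the desired options (--apiserver-advertise-address, --image-repository, --kubernetes-version, --service-cidr, --pod-network-cidr, ...), as shown in step 4 of question 9.
3. On the master: set up kubectl (mkdir -p $HOME/.kube; cp /etc/kubernetes/admin.conf $HOME/.kube/config; chown it).
4. Deploy a Pod network addon (flannel in this document).
5. On each worker: run the kubeadm join command printed by kubeadm init (or regenerate it with kubeadm token create --print-join-command).
6. Verify with kubectl get nodes and, optionally, label the workers with node-role.kubernetes.io/node= .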
7. You deployed a worker node with kubeadm but forgot the join command. What do you do?
kubeadm token create --print-join-command
8. Research on your own: how do you remove a node from a Kubernetes cluster?
kubectl delete nodes node1
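In practice, deleting the Node object on the master is only half of the job; the node machine itself still has its old kubelet config and certificates, which is exactly why the first kubeadm join in the demo below fails. A commonly used sequence (flag names may vary slightly between versions; drain is optional for an empty lab node):

kubectl drain node1 --ignore-daemonsets --delete-local-data   # on the master: evict workloads first
kubectl delete node node1                                     # on the master: remove the Node object
kubeadm reset                                                 # on node1: stop the kubelet and clean /etc/kubernetes

After kubeadm reset on the node, a fresh kubeadm join succeeds without the manual cleanup of /etc/kubernetes/kubelet.conf, ca.crt, and port 10250 shown below.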
1.[root@master ~/flannel/Documentation]# kubectl delete nodes node1
2.[root@master ~/flannel/Documentation]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 3h5m v1.19.3
node2 Ready node 3h v1.19.3
3.[root@master ~/flannel/Documentation]# kubeadm token create --print-join-command
W0302 22:56:21.121850 71697 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 10.0.0.10:6443 --token g0zbq9.stcnkylsl1k5l5m5 --discovery-token-ca-cert-hash sha256:0656e106e6997f105296102789a262f368912ea7d13662a8cbc1818d360b84d8
4.[root@node1 /etc/yum.repos.d]# kubeadm join 10.0.0.10:6443 --token g0zbq9.stcnkylsl1k5l5m5 --discovery-token-ca-cert-hash sha256:0656e106e6997f105296102789a262f368912ea7d13662a8cbc1818d360b84d8
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
5.[root@node1 /etc/yum.repos.d]# rm /etc/kubernetes/kubelet.conf
rm: remove regular file ‘/etc/kubernetes/kubelet.conf’? y
6.[root@node1 /etc/yum.repos.d]# rm /etc/kubernetes/pki/ca.crt
rm: remove regular file ‘/etc/kubernetes/pki/ca.crt’? y
7.[root@node1 /etc/yum.repos.d]# kubeadm join 10.0.0.10:6443 --token g0zbq9.stcnkylsl1k5l5m5 --discovery-token-ca-cert-hash sha256:0656e106e6997f105296102789a262f368912ea7d13662a8cbc1818d360b84d8
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
8.[root@node1 /etc/yum.repos.d]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1046/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1188/master
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 3650/kubelet
tcp 0 0 127.0.0.1:43407 0.0.0.0:* LISTEN 3650/kubelet
tcp6 0 0 :::22 :::* LISTEN 1046/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1188/master
tcp6 0 0 :::10250 :::* LISTEN 3650/kubelet
udp 0 0 0.0.0.0:123 0.0.0.0:* 71325/ntpdate
udp 0 0 0.0.0.0:8472 0.0.0.0:* -
udp 0 0 127.0.0.1:323 0.0.0.0:* 671/chronyd
udp6 0 0 :::123 :::* 71325/ntpdate
udp6 0 0 ::1:323 :::* 671/chronyd
9.[root@node1 /etc/yum.repos.d]# pkill kubelet
10.[root@node1 /etc/yum.repos.d]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1046/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1188/master
tcp6 0 0 :::22 :::* LISTEN 1046/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1188/master
udp 0 0 0.0.0.0:8472 0.0.0.0:* -
udp 0 0 127.0.0.1:323 0.0.0.0:* 671/chronyd
udp6 0 0 ::1:323 :::* 671/chronyd
11.[root@node1 /etc/yum.repos.d]# kubeadm join 10.0.0.10:6443 --token g0zbq9.stcnkylsl1k5l5m5 --discovery-token-ca-cert-hash sha256:0656e106e6997f105296102789a262f368912ea7d13662a8cbc1818d360b84d8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
12.[root@master ~/flannel/Documentation]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 3h9m v1.19.3
node1 NotReady 6s v1.19.3
node2 Ready node 3h4m v1.19.3
13.[root@master ~/flannel/Documentation]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 3h9m v1.19.3
node1 Ready 11s v1.19.3
node2 Ready node 3h4m v1.19.3
14.[root@master ~/flannel/Documentation]# kubectl label nodes node1 node-role.kubernetes.io/node=
node/node1 labeled
15.[root@master ~/flannel/Documentation]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 3h10m v1.19.3
node1 Ready node 65s v1.19.3
node2 Ready node 3h5m v1.19.3
9. Research on your own: how do you wipe the configuration of an already-initialized master node, restore the machine to its pre-Kubernetes state, and re-initialize it?
Step 1: first, on the master host
1.[root@master ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "master" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
2.[root@master ~]# kubectl get nodes
The connection to the server 10.0.0.10:6443 was refused - did you specify the right host or port?
3.[root@master ~]# ll -a
total 44
dr-xr-x---. 5 root root 210 Mar 2 20:12 .
dr-xr-xr-x. 17 root root 244 Oct 30 23:14 ..
-rw-------. 1 root root 1276 Oct 30 22:25 anaconda-ks.cfg
-rw-------. 1 root root 6594 Mar 2 21:11 .bash_history
-rw-r--r--. 1 root root 18 Dec 29 2013 .bash_logout
-rw-r--r--. 1 root root 176 Dec 29 2013 .bash_profile
-rw-r--r--. 1 root root 176 Dec 29 2013 .bashrc
-rw-r--r--. 1 root root 100 Dec 29 2013 .cshrc
drwxr-xr-x 14 root root 4096 Mar 2 20:05 flannel
drwxr-xr-x 3 root root 33 Mar 2 19:54 .kube
drwxr----- 3 root root 19 Mar 2 19:21 .pki
-rw-r--r-- 1 root root 300 Oct 30 23:33 set_init.sh
-rw-r--r--. 1 root root 129 Dec 29 2013 .tcshrc
-rw------- 1 root root 3543 Mar 2 20:12 .viminfo
4.[root@master ~]# rm -rf .kube
5.[root@master ~]# ls
anaconda-ks.cfg flannel set_init.sh
[root@master ~]# rm -rf /etc/kubernetes/*
[root@master ~]# mkdir .kube
[root@master ~]# rm -rf /root/.kube/
[root@master ~]# ll -a
total 44
dr-xr-x---. 4 root root 197 Mar 2 23:27 .
dr-xr-xr-x. 17 root root 244 Oct 30 23:14 ..
-rw-------. 1 root root 1276 Oct 30 22:25 anaconda-ks.cfg
-rw-------. 1 root root 6594 Mar 2 21:11 .bash_history
-rw-r--r--. 1 root root 18 Dec 29 2013 .bash_logout
-rw-r--r--. 1 root root 176 Dec 29 2013 .bash_profile
-rw-r--r--. 1 root root 176 Dec 29 2013 .bashrc
-rw-r--r--. 1 root root 100 Dec 29 2013 .cshrc
drwxr-xr-x 14 root root 4096 Mar 2 20:05 flannel
drwxr----- 3 root root 19 Mar 2 19:21 .pki
-rw-r--r-- 1 root root 300 Oct 30 23:33 set_init.sh
-rw-r--r--. 1 root root 129 Dec 29 2013 .tcshrc
-rw------- 1 root root 3543 Mar 2 20:12 .viminfo
5.[root@master ~]# mkdir .kube
[root@master ~]# ls -la
total 44
dr-xr-x---. 5 root root 210 Mar 2 23:28 .
dr-xr-xr-x. 17 root root 244 Oct 30 23:14 ..
-rw-------. 1 root root 1276 Oct 30 22:25 anaconda-ks.cfg
-rw-------. 1 root root 6594 Mar 2 21:11 .bash_history
-rw-r--r--. 1 root root 18 Dec 29 2013 .bash_logout
-rw-r--r--. 1 root root 176 Dec 29 2013 .bash_profile
-rw-r--r--. 1 root root 176 Dec 29 2013 .bashrc
-rw-r--r--. 1 root root 100 Dec 29 2013 .cshrc
drwxr-xr-x 14 root root 4096 Mar 2 20:05 flannel
drwxr-xr-x 2 root root 6 Mar 2 23:28 .kube
drwxr----- 3 root root 19 Mar 2 19:21 .pki
-rw-r--r-- 1 root root 300 Oct 30 23:33 set_init.sh
-rw-r--r--. 1 root root 129 Dec 29 2013 .tcshrc
-rw------- 1 root root 3543 Mar 2 20:12 .viminfo
[root@master ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: inactive (dead) since Wed 2022-03-02 23:23:41 CST; 6min ago
Docs: https://kubernetes.io/docs/
Process: 4386 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=0/SUCCESS)
Main PID: 4386 (code=exited, status=0/SUCCESS)
Mar 02 20:20:53 master kubelet[4386]: I0302 20:20:53.541341 4386 reconciler.go:2...")
Mar 02 20:20:53 master kubelet[4386]: I0302 20:20:53.541356 4386 reconciler.go:2...")
Mar 02 22:56:44 master kubelet[4386]: I0302 22:56:44.294888 4386 topology_manage...er
Mar 02 22:56:44 master kubelet[4386]: I0302 22:56:44.398165 4386 reconciler.go:224...
Mar 02 22:56:44 master kubelet[4386]: I0302 22:56:44.398225 4386 reconciler.go:224...
Mar 02 22:56:45 master kubelet[4386]: W0302 22:56:45.693253 4386 pod_container_d...rs
Mar 02 22:56:45 master kubelet[4386]: map[string]interface {}{"cniVersion":"0.3.1", "...
Mar 02 23:23:41 master systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Mar 02 23:23:41 master kubelet[4386]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMas...
Mar 02 23:23:41 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Hint: Some lines were ellipsized, use -l to show in full.
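As the kubeadm reset output above says, reset does not clean up the CNI configuration, iptables/IPVS rules, or kubeconfig files. To bring the machine fully back to its pre-Kubernetes state you would additionally run something like the following (the kubeconfig part was already done manually above with rm -rf .kube; adapt to your environment):

rm -rf /etc/cni/net.d            # CNI configuration
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X   # flush iptables rules
ipvsadm --clear                  # clear IPVS tables, if IPVS was in use
rm -rf $HOME/.kube               # kubeconfig files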
Step 2: on the node1 host
1.[root@node1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0302 23:25:46.880279 81781 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
2.[root@node1 ~]# ls -la
total 40
dr-xr-x---. 3 root root 182 Mar 2 19:22 .
dr-xr-xr-x. 17 root root 244 Oct 30 23:14 ..
-rw-------. 1 root root 1276 Oct 30 22:25 anaconda-ks.cfg
-rw-------. 1 root root 6280 Mar 2 21:11 .bash_history
-rw-r--r--. 1 root root 18 Dec 29 2013 .bash_logout
-rw-r--r--. 1 root root 176 Dec 29 2013 .bash_profile
-rw-r--r--. 1 root root 176 Dec 29 2013 .bashrc
-rw-r--r--. 1 root root 100 Dec 29 2013 .cshrc
drwxr----- 3 root root 19 Mar 2 19:22 .pki
-rw-r--r-- 1 root root 300 Oct 30 23:33 set_init.sh
-rw-r--r--. 1 root root 129 Dec 29 2013 .tcshrc
-rw------- 1 root root 3278 Feb 23 11:03 .viminfo
Step 3: on the node2 host
[root@node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0302 23:19:24.363130 79419 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@node2 ~]# ls -a
. .. anaconda-ks.cfg .bash_history .bash_logout .bash_profile .bashrc .cshrc .pki set_init.sh .tcshrc .viminfo
[root@node2 ~]# ls -la
total 40
dr-xr-x---. 3 root root 182 Mar 2 19:22 .
dr-xr-xr-x. 17 root root 244 Oct 30 23:14 ..
-rw-------. 1 root root 1276 Oct 30 22:25 anaconda-ks.cfg
-rw-------. 1 root root 6295 Mar 2 21:12 .bash_history
-rw-r--r--. 1 root root 18 Dec 29 2013 .bash_logout
-rw-r--r--. 1 root root 176 Dec 29 2013 .bash_profile
-rw-r--r--. 1 root root 176 Dec 29 2013 .bashrc
-rw-r--r--. 1 root root 100 Dec 29 2013 .cshrc
drwxr----- 3 root root 19 Mar 2 19:22 .pki
-rw-r--r-- 1 root root 300 Oct 30 23:33 set_init.sh
-rw-r--r--. 1 root root 129 Dec 29 2013 .tcshrc
-rw------- 1 root root 3764 Feb 27 23:42 .viminfo
[root@node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0302 23:28:47.144749 79658 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
Step 4: initialize the master node
1.[root@master ~]# kubeadm init \
> --apiserver-advertise-address=10.0.0.10 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.19.3 \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.2.0.0/16 \
> --service-dns-domain=cluster.local \
> --ignore-preflight-errors=Swap \
> --ignore-preflight-errors=NumCPU
W0302 23:30:48.373234 83158 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.1.0.1 10.0.0.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.0.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.0.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 25.562715 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qcjxbh.u7tfagp5ad3hsqr2
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.10:6443 --token qcjxbh.u7tfagp5ad3hsqr2 \
--discovery-token-ca-cert-hash sha256:13e81b1238e84a8a976f4823efcdd01f08bc759cf0d6041f2ba695046f1f3613
2. Create the kubectl config directory and copy the admin kubeconfig
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
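The init output above also says to deploy a Pod network before the workers join. This cluster uses flannel (note the ~/flannel/Documentation directory in the shell prompts); a sketch of that step, assuming the standard manifest name in that directory and that its Network setting matches the --pod-network-cidr used above (10.2.0.0/16):

kubectl apply -f ~/flannel/Documentation/kube-flannel.yml
kubectl get pods -A | grep flannel    # wait until the flannel Pods are Running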
Step 5: node1 joins the cluster
[root@node1 ~]# kubeadm join 10.0.0.10:6443 --token qcjxbh.u7tfagp5ad3hsqr2 \
> --discovery-token-ca-cert-hash sha256:13e81b1238e84a8a976f4823efcdd01f08bc759cf0d6041f2ba695046f1f3613
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Step 6: node2 joins the cluster
[root@node2 ~]# kubeadm join 10.0.0.10:6443 --token qcjxbh.u7tfagp5ad3hsqr2 \
> --discovery-token-ca-cert-hash sha256:13e81b1238e84a8a976f4823efcdd01f08bc759cf0d6041f2ba695046f1f3613
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Step 7: check the nodes from the master host
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 76s v1.19.3
node1 Ready 19s v1.19.3
node2 NotReady 3s v1.19.3
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 79s v1.19.3
node1 Ready 22s v1.19.3
node2 NotReady 6s v1.19.3
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 81s v1.19.3
node1 Ready 24s v1.19.3
node2 NotReady 8s v1.19.3
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 87s v1.19.3
node1 Ready 30s v1.19.3
node2 Ready 14s v1.19.3
Step 8: on the master, set the ROLES label for the worker nodes
[root@master ~]# kubectl label nodes node1 node-role.kubernetes.io/node=
node/node1 labeled
[root@master ~]# kubectl label nodes node2 node-role.kubernetes.io/node=
node/node2 labeled
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 117s v1.19.3
node1 Ready node 60s v1.19.3
node2 Ready node 44s v1.19.3
Step 9: on the master, configure kube-proxy to use IPVS mode
Edit the kube-proxy ConfigMap and set the mode field (line 44 of the ConfigMap in this cluster) to "ipvs":
kubectl edit cm kube-proxy -n kube-system
44     mode: "ipvs"
[root@master ~]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
Step 10: on the master, restart the kube-proxy Pods
kubectl -n kube-system get pod|grep kube-proxy|awk '{print "kubectl -n kube-system delete pod "$1}'|bash
[root@master ~]# kubectl -n kube-system get pod|grep kube-proxy|awk '{print "kubectl -n kube-system delete pod "$1}'|bash
pod "kube-proxy-7hsld" deleted
pod "kube-proxy-8pdb4" deleted
pod "kube-proxy-9t8r8" deleted
Step 11: on the master, check the kube-proxy Pods again
[root@master ~]# kubectl -n kube-system get pod|grep kube-proxy
kube-proxy-8szql 1/1 Running 0 19s
kube-proxy-9czs6 1/1 Running 0 16s
kube-proxy-njgp4 1/1 Running 0 21s
Step 12: check the IPVS rules
[root@master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.0.1:443 rr
-> 10.0.0.10:6443 Masq 1 0 0
TCP 10.1.0.10:53 rr
-> 10.2.0.2:53 Masq 1 0 0
-> 10.2.0.3:53 Masq 1 0 0
TCP 10.1.0.10:9153 rr
-> 10.2.0.2:9153 Masq 1 0 0
-> 10.2.0.3:9153 Masq 1 0 0
UDP 10.1.0.10:53 rr
-> 10.2.0.2:53 Masq 1 0 0
-> 10.2.0.3:53 Masq 1 0 0