Installing a Multi-Node Kubernetes Platform on a Mac with VMs (CentOS 7), v1.18.1

It took a whole day, but I finally found time for this exercise. I had been using the single-node Kubernetes bundled with Docker Desktop on my Mac, and a lot of it stayed fuzzy; I also no longer wanted Docker installed on my laptop. Kubernetes is now up to 1.18.1, and since I have been playing with our internal platform at work lately, I felt there were still things I did not really understand, especially at the platform level, so I took this chance to practice and write it up. The kubeadm tool is very convenient these days; although I ran into plenty of pitfalls (see the pitfalls section at the end), I came away understanding some things much better, and my Mac never needs Docker installed again.

References:
https://www.kubernetes.org.cn/7189.html
https://blog.csdn.net/qq_38900565/article/details/102585741
https://kubernetes.io/zh/docs/tasks/tools/install-kubectl/
https://blog.csdn.net/twingao/article/details/105382305

Environment and Deployment Plan

Software:

  1. VMware Fusion Pro 11.5 (the Windows version is even easier to find)
  2. CentOS Linux release 7.7.1908 (Core)
  3. docker-ce-cli.x86_64 1:19.03.8-3.el7
  4. Kubernetes 1.18.1

host name    ip             description
k8s-master   192.168.1.15   master
k8s-node1    192.168.1.16   node1
k8s-node2    192.168.1.17   node2
k8s-node3    192.168.1.19   node3
k8s-nfs      192.168.1.18   NFS server

Preparing a Base Image

It must be said that having a base image makes the work much easier. This base image will seed every node.

  1. System settings

  • Disable swap; kubeadm will not install with swap enabled
sed -ri 's/.*swap.*/#&/' /etc/fstab
Check:
[root@k8s-master /]# free
              total        used        free      shared  buff/cache   available
Mem:        1863088      880148      135824       10288      847116      820336
Swap:             0           0           0
[root@k8s-master /]# 
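
The sed command above only comments out the swap line in /etc/fstab, which keeps swap off across reboots; to turn swap off in the running system immediately, you can also run:

swapoff -a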

  • Set the iptables bridge parameters

cat <<EOF> /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

modprobe br_netfilter
sysctl --system
-----output----
[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1    # <-- the new values
net.bridge.bridge-nf-call-iptables = 1     # <-- take effect here
* Applying /etc/sysctl.conf ...

  • Disable SELinux

vi /etc/selinux/config
Change:
SELINUX=disabled

---Careful: do not mistakenly edit SELINUXTYPE instead; that will leave the system unable to boot---
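
The config file change only takes effect after a reboot; to drop SELinux to permissive mode immediately, you can also run:

setenforce 0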

  • Update the hosts and IPs

vim /etc/hosts

#
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1       localhost
192.168.1.15  k8s-master
192.168.1.16  k8s-node1
192.168.1.17  k8s-node2
192.168.1.18  k8s-nfs
192.168.1.19  k8s-node3
255.255.255.255 broadcasthost
::1             localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section

vi /etc/hostname
k8s-master
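
Alternatively, hostnamectl sets the hostname in one step:

hostnamectl set-hostname k8s-master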

Set the static IP address (my interface is ens33; check what yours is called on your system):

vi /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.1.15
GATEWAY=192.168.1.1
NETMASK=255.255.255.0
DNS1=8.8.8.8
NAME=ens33
UUID=d2faf7a5-af21-4815-b630-7d27f718fac3
DEVICE=ens33
ONBOOT=yes
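
Restart the network service so the static address takes effect:

systemctl restart network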

PS: I did not set up passwordless SSH here.

  • Disable the firewall
This whole environment lives on a LAN and is only for learning and experimentation, so turning the firewall off is the convenient choice; otherwise you would have to write rules for each service.

systemctl stop firewalld.service
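
To keep it off across reboots, disable the service as well:

systemctl disable firewalld.service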
  2. Set up domestic (China) yum and Docker mirrors

  • Switch to a domestic yum mirror (Aliyun) and add the Kubernetes mirror repo.

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
vim /etc/yum.repos.d/kubernetes.repo
## add the following
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

PS: you can also append the repo to an existing file (I added mine to CentOS-Base.repo).

/etc/yum.repos.d is yum's repository directory.

  • Install Docker CE
Check the latest docker-ce version:

yum list docker-ce
docker-ce.x86_64                                                                                3:19.03.8-3.el7   

Then just install it:

yum install docker-ce
  • Configure a domestic Docker registry mirror, or Aliyun's image accelerator (requires an Aliyun account)
vi /etc/docker/daemon.json

{
    "registry-mirrors": ["http://hub-mirror.c.163.com"]
}

An alternative mirror is "registry.docker-cn.com".
Aliyun issues each account its own accelerator URL; look yours up in the Aliyun console.

Enable the service at boot, and start it:

systemctl enable docker

systemctl start docker
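
Optional: kubeadm init will later warn that Docker is using the "cgroupfs" cgroup driver while "systemd" is the recommended one. If you want to get ahead of that warning, here is a sketch of the daemon.json with the extra key (restart Docker afterwards):

{
    "registry-mirrors": ["http://hub-mirror.c.163.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}

systemctl restart docker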

  3. Download the Kubernetes packages
With kubeadm, installation is far simpler than it used to be, especially now that the domestic mirrors update quickly instead of the old misery.
These should list the latest version by default:

yum list -y kubeadm
---
Available Packages
kubeadm.x86_64                                                               1.18.1-0
---
yum list -y kubelet
---
Available Packages
kubelet.x86_64                                                               1.18.1-0                   
---
yum list -y kubectl
---
Available Packages
kubectl.x86_64                                                               1.18.1-0

Install:

yum install -y kubeadm-1.18.1-0 kubectl-1.18.1-0 kubelet-1.18.1-0

---
## enable kubelet as a boot service
systemctl enable kubelet

Pull the seven required images (before the domestic mirrors existed, this was where everyone got stuck):

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers
---
W0411 06:23:31.541597    1980 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.18.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.7

-------
## check
docker images
----
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.1             4e68534e24f6        2 days ago          117MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.1             a595af0107f9        2 days ago          173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.1             d1ccdd18e6ed        2 days ago          162MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.1             6c9320041a7b        2 days ago          95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        8 weeks ago         683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        2 months ago        43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        5 months ago        288MB
  4. Prepare for the NFS server
    To make mounting convenient later, install the NFS utilities now; the server-side configuration comes in a later section.
yum install nfs-utils

At this point, back up the VM image; it will seed the other nodes.

Installing the Master Node

After backing up, reboot the VM and initialize the master node with kubeadm:

# --pod-network-cidr: the network range for Pods
# --service-cidr: the network range for Services
# --image-repository: the image registry; it must match the images pulled earlier

kubeadm init --kubernetes-version=v1.18.1 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.1.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers

----

[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.1.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0411 07:10:37.735513    2243 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0411 07:10:37.737467    2243 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.002204 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ok3b7m.0krpvim883v5cnph
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.15:6443 --token ok3b7m.0krpvim883v5cnph \
    --discovery-token-ca-cert-hash sha256:9421cce0f840391776daeae90f348d989ebc0f60070a46f0480366e450947603 

Follow the printed instructions to create the .kube directory, and save the kubeadm join command at the end; the nodes will need it to join the cluster. With that, the master is initialized.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

This config file will later be scp'd to the other nodes to serve as the kubeconfig for remote kubectl. Because I had previously installed an older version of Kubernetes on these VMs, there were some traps here; see the pitfalls section if you're interested.
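
Note that the bootstrap token printed by kubeadm init expires after 24 hours by default; if it has expired by the time you add a node, generate a fresh join command on the master:

kubeadm token create --print-join-command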

Adding Nodes

Clone the image backed up before the init, change the host's IP and hostname, and then join the cluster with the following command:

kubeadm join 192.168.1.15:6443 --token ok3b7m.0krpvim883v5cnph \
    --discovery-token-ca-cert-hash sha256:9421cce0f840391776daeae90f348d989ebc0f60070a46f0480366e450947603
    
--------
W0411 08:33:41.185208    1759 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

After the join finishes, running kubectl get nodes on the new node fails to reach the apiserver; scp the config file from k8s-master, and then you can see the cluster's nodes:

mkdir -p $HOME/.kube
scp k8s-master:/root/.kube/config $HOME/.kube/config
kubectl get nodes
--------

[root@k8s-master .kube]# kubectl get nodes
NAME         AGE
k8s-master   1h
k8s-node1    2m
[root@k8s-master .kube]# 

Only the name and age columns show here, presumably a kubectl or default-configuration quirk; viewing with the client on the Mac shows the full details.

Join the remaining nodes, k8s-node2 and k8s-node3, the same way.

Installing the Cluster Network Add-on

After the nodes join, checking them with kubectl on the Mac shows they are still not Ready, because the network add-on is not yet configured. The main choices are Flannel, Calico, and the like; I picked Flannel here. The differences are worth digging into if you're interested.

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f kube-flannel.yml

This step is painless. Flannel runs as a DaemonSet, so applying the manifest once installs it on all the nodes, master included.

Reboot and check the nodes again: everything is normal, and the internal/external IPs are now displayed.
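
A quick check, from the master or from the Mac:

kubectl get nodes -o wide

Every node should now report STATUS Ready with its INTERNAL-IP column filled in.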

Installing kubectl on the Mac

See the official docs: https://kubernetes.io/docs/tasks/tools/install-kubectl/

I did not install it with brew; brew is painfully slow and seems broken since my Catalina upgrade, and I had no time to fix it (the download is the slow part). After installing the binary, just copy the kubeconfig over from the k8s-master node.
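
For reference, the curl-based install from the official docs looks like this (a sketch; the URL pins the version used in this article):

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
mkdir -p $HOME/.kube
scp k8s-master:/root/.kube/config $HOME/.kube/config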

Enabling the Kubernetes Dashboard 2.0

This part was the most torturous. Since Dashboard 2.0, the default YAML no longer sets up any privileged account.

https://kubernetes.io/zh/docs/tasks/access-application-cluster/web-ui-dashboard/

Download the YAML file:

https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

Change the Service to type NodePort and pin the node port to 30001:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

Save it and deploy to the cluster:

kubectl apply -f recommended.yaml

Add a ServiceAccount with cluster-admin. Under the new permission model, an account only gets superuser rights by being bound to the cluster-admin role, so we create one here:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply the config:

kubectl apply -f admin-user.yaml
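
A one-liner to print the login token for the admin-user account just created (plain kubectl plus grep/awk):

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')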

With that, the Kubernetes UI is basically configured. Check which node the scheduler placed it on:

NAME                                            READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
pod/dashboard-metrics-scraper-dc6947fbf-mlx4n   1/1     Running   0          4h49m   10.244.3.10   k8s-node3   <none>           <none>
pod/kubernetes-dashboard-5d4dc8b976-w5brv       1/1     Running   0          4h49m   10.244.2.16   k8s-node2   <none>           <none>

NAME                                      TYPE                                  DATA   AGE
secret/admin-user-token-nt7gw             kubernetes.io/service-account-token   3      3h39m
secret/cluster-token-9k7x9                kubernetes.io/service-account-token   3      4h18m
secret/dashboard-admin-token-6qhjh        kubernetes.io/service-account-token   3      4h18m
secret/default-token-t9dc2                kubernetes.io/service-account-token   3      4h49m
secret/kubernetes-dashboard-certs         Opaque                                0      4h49m
secret/kubernetes-dashboard-csrf          Opaque                                1      4h49m
secret/kubernetes-dashboard-key-holder    Opaque                                2      4h49m
secret/kubernetes-dashboard-token-xx4cx   kubernetes.io/service-account-token   3      4h49m

It is running on node2, and since we configured a NodePort, we can browse straight to https://k8s-node2:30001.

There is a catch here: the certificate is auto-generated and self-signed, so it fails the security checks in browsers like Chrome and IE, and the security warning can block access entirely. A few workarounds:

  1. skip the certificate check in Chrome's options
  2. generate a proper TLS certificate with the tooling the official docs recommend
  3. just use Firefox, which lets you click through the warning and proceed

On success you are prompted for a token. There are actually two users here, as you will find if you study the dashboard's YAML: the dashboard's own default user has no permissions at all after logging in, so log in with the cluster-admin-bound account instead, and everything runs normally.

I spent a long time studying the dashboard's users; this article is strongly recommended if you're interested:
https://blog.csdn.net/qq_38900565/article/details/102585741

The gist is inspecting the users, then using RBAC roles and bindings to limit what each user can do, such as per-namespace administration, etc.

Two commands worth recommending. List all the cluster roles:

kubectl get clusterrole
------
NAME                                                                   CREATED AT
admin                                                                  2020-04-11T11:10:58Z
cluster-admin                                                          2020-04-11T11:10:58Z
edit                                                                   2020-04-11T11:10:58Z
flannel                                                                2020-04-11T16:47:09Z
kubeadm:get-nodes                                                      2020-04-11T11:10:59Z
kubernetes-dashboard                                                   2020-04-12T08:40:19Z
system:aggregate-to-admin                                              2020-04-11T11:10:58Z
system:aggregate-to-edit                                               2020-04-11T11:10:58Z
system:aggregate-to-view                                               2020-04-11T11:10:58Z
system:auth-delegator                                                  2020-04-11T11:10:58Z
system:basic-user                                                      2020-04-11T11:10:58Z
system:certificates.k8s.io:certificatesigningrequests:nodeclient       2020-04-11T11:10:58Z
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient   2020-04-11T11:10:58Z
system:certificates.k8s.io:kube-apiserver-client-approver              2020-04-11T11:10:58Z

Inspect one specific role and its powers:

➜  ~ kubectl get clusterrole  system:controller:namespace-controller
NAME                                     CREATED AT
system:controller:namespace-controller   2020-04-11T11:10:58Z
➜  ~ kubectl describe clusterrole system:controller:namespace-controller
Name:         system:controller:namespace-controller
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources            Non-Resource URLs  Resource Names  Verbs
  ---------            -----------------  --------------  -----
  *.*                  []                 []              [delete deletecollection get list]
  namespaces           []                 []              [delete get list watch]
  namespaces/finalize  []                 []              [update]
  namespaces/status    []                 []              [update]

Or in full detail:

kubectl get clusterrole system:controller:namespace-controller -o yaml

---------

With these, you can configure namespace administration and isolation to match your needs; a sketch follows.
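
For example, a minimal sketch of namespace isolation: it binds a hypothetical ServiceAccount dev-user to the built-in edit ClusterRole, scoped to a dev namespace only (both names are made up for illustration):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-user-edit
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: dev-user
  namespace: dev

Because this is a RoleBinding rather than a ClusterRoleBinding, the edit permissions stop at the namespace boundary.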

How to look up a ServiceAccount's token

 ~ kubectl describe sa kubernetes-dashboard -n kubernetes-dashboard | grep Token
Tokens:              kubernetes-dashboard-token-xx4cx
~ kubectl describe secret kubernetes-dashboard-token-xx4cx -n kubernetes-dashboard

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Im9pSmlrUGpnZW9oVks4MXl2UUVYVzFXUjFOZ09vS2FkaXNmS1pNNFU4MncifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi14eDRjeCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjM3ODFiNmZmLTFjMTMtNGY5Zi1hZDA0LTI1NjNhNzU5ZmU4MSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.aA3AYEcAahMya29JwGnOHA3PGjPYmd0BiGhoyWk_LYOXNZKWCul-jUbTa9KtYZFLsZ0hGGr3BgSXjAZRQXIMirONzwT5CjHn2nnUOTq1maGTz_Fpgox8DItn6jqcgqbYO_H2OMJp-7aRPOSwcQXOqvH-vrz1MSeD2jfXUCYCs1OtC2zqkPLnN0D1rPkD-CYqfTd9Mj8gWaIGFdw6e2FF05xyylGGYiqUE_ecj5RzVM6rzGsxuB3SuoqoL7iF30BMp4aY0DwoMqDWmFxVOSCnmGI-GqvRURHJ-dERbCXMwNvvnymde3KAzZmlpvg_OO8oZCTeeqwKyY_L8QeMKILrEQ
ca.crt:     1025 bytes

Copy this token and paste it at the login prompt.

Setting Up the NFS Server

Install the NFS service (the base image already has it, so you can skip this):

yum install nfs-utils

Server-side configuration.
Create the exports file:

vim /etc/exports

# Field 1: /home/nfs, the local directory to share.
# Field 2: 192.168.1.0/24, the hosts allowed to mount. This can be a single IP
#   (e.g. 192.168.1.134) or an IP range (192.168.1.0/24); "*" means everyone.
# Field 3, the options:
#     rw = read-write; ro = read-only;
#     sync = write data through to disk as it arrives; async = buffer in memory and flush periodically;
#     no_root_squash = root on the client keeps full root control over the share, as if it were a
#       local directory (insecure, not recommended); root_squash = the opposite, client root is
#       reduced to an ordinary user; all_squash = every NFS user, whoever they are, is mapped to one
#       designated unprivileged identity;
#     anonuid/anongid = used with root_squash or all_squash to choose the uid/gid the squashed users
#       map to; that uid/gid must exist in the server's /etc/passwd;
#     fsid=0 would wrap /home/nfs as the export root. Here I let the whole LAN mount it.
/home/nfs 192.168.1.0/24(rw,sync,no_root_squash)

Enable and start the services:

systemctl enable rpcbind.service
---
systemctl enable nfs-server.service
---
systemctl start rpcbind.service
---
systemctl start nfs-server.service
---
Check the exports:
showmount -e k8s-nfs

Export list for k8s-nfs:
/home/nfs 192.168.1.0/24
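
If you edit /etc/exports later while the server is already running, reload the export table without a restart:

exportfs -r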

On any other machine, the share can now be mounted with a single command:

[root@k8s-node1 /]# mkdir test
[root@k8s-node1 /]# showmount -e k8s-nfs
Export list for k8s-nfs:
/home/nfs 192.168.1.0/24
[root@k8s-node1 /]# mount k8s-nfs:/home/nfs /test

[root@k8s-node1 test]# ls
ssl
The ssl directory on the server is now visible.
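
To make the mount survive reboots, you can also add an entry to the client's /etc/fstab (a sketch, using the /test mount point from above):

k8s-nfs:/home/nfs  /test  nfs  defaults  0 0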

Pitfalls and Fixes

Common problems I ran into:

  • kubelet complains the version is too old or wrong
  • etcd port already occupied; /var/lib/etcd is not empty
  • nodes cannot ping the Cluster IP

These are caused by leftovers from a previous version that was not cleanly removed.
Fix:

Use whereis kubelet to locate the remnants,
then delete them:

[root@k8s-nfs ~]# whereis kubelet
kubelet: /usr/bin/kubelet /usr/local/bin/kubelet

[root@k8s-nfs ~]# rm /usr/bin/kubelet
rm: remove regular file ‘/usr/bin/kubelet’? yes
[root@k8s-nfs ~]# rm /usr/local/bin/kubelet
rm: remove regular file ‘/usr/local/bin/kubelet’? yes

yum remove kubelet

Reinstall kubelet and kubeadm:

yum install -y kubeadm-1.18.1-0 kubelet-1.18.1-0

Remove etcd, and delete the files under /var/lib/etcd:

yum remove etcd

rm -rf /var/lib/etcd

3. Nodes cannot ping the cluster IP
The cause: since Docker 1.13, Docker sets the iptables FORWARD chain policy to DROP.
Adjust the Docker startup service:

vim /usr/lib/systemd/system/docker.service

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
iptables -L -n

Then reboot the machine.
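
To make the FORWARD policy stick across Docker restarts without hand-editing the unit file, one option is a systemd drop-in (a sketch; assumes iptables lives at /usr/sbin/iptables, as on stock CentOS 7):

mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/iptables-forward.conf
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload
systemctl restart docker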

