kubeadm offline setup


Download the deployment package

Download the kubeadm-1.8.6 directory under cabin from OSS to the local machine. The directory structure is as follows:

kubeadm-1.8.6/
├── files
│   ├── addons.tar.gz
│   ├── all-in-one.tar.gz
│   ├── CentOS-All-In-One-local.repo
│   └── rpms.tar.gz
├── images
│   └── kubeadm-images.tar.gz
├── setup.sh
└── SHA1SUM
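Before running anything, it is worth verifying the download against the bundled SHA1SUM file (a minimal sketch; it assumes SHA1SUM lists checksums for paths relative to the kubeadm-1.8.6 directory):

cd kubeadm-1.8.6
sha1sum -c SHA1SUM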

Initialization

Run the setup.sh initialization script on every node:

./setup.sh
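If there are many nodes, the package can be copied out and the script run over SSH in a loop (a sketch; the IP list is a placeholder for your own nodes):

for node in 192.168.16.191 192.168.16.196; do
  # copy the deployment package to the node, then run the init script there
  scp -r kubeadm-1.8.6 root@$node:/root/
  ssh root@$node "cd /root/kubeadm-1.8.6 && ./setup.sh"
done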

Install the master node

kubeadm init --kubernetes-version=1.8.6 --pod-network-cidr=10.244.0.0/16

After initialization completes, output like the following is printed:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 025c8e.c8245df8da4a86a1 192.168.16.191:6443 --discovery-token-ca-cert-hash sha256:2c6bbd2ba565be12e43b034ce6cf99e14912285a0c8b86e1b0310060b9f2b406

Then run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl taint nodes --all node-role.kubernetes.io/master-
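The taint command removes the NoSchedule taint from the master so that ordinary pods can be scheduled on it (useful for single-node or all-in-one clusters). To confirm the master is registered and the taint is gone (the node name c6v196 is just an example, substitute your own):

kubectl get nodes
kubectl describe node c6v196 | grep Taints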

Install addons

kubectl apply -f caicloud/addons/canal/
kubectl apply -f caicloud/addons/dashboard/
kubectl apply -f caicloud/addons/heapster/
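The addons are typically deployed into the kube-system namespace; check that their pods reach the Running state before going on:

kubectl get pods -n kube-system -o wide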

Add nodes

On each node to be added, first run setup.sh to completion, then run the join command printed by kubeadm init, for example:

kubeadm join --token 025c8e.c8245df8da4a86a1 192.168.16.191:6443 --discovery-token-ca-cert-hash sha256:2c6bbd2ba565be12e43b034ce6cf99e14912285a0c8b86e1b0310060b9f2b406
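Back on the master, the new node should appear in the node list and turn Ready once the network addon pods start on it:

kubectl get nodes -o wide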

Access the dashboard

Open https://master_ip:30000 in a browser.
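If the page does not load, check that the dashboard service is exposed as a NodePort on 30000 (assuming the addon manifests name the service kubernetes-dashboard in kube-system):

kubectl -n kube-system get svc kubernetes-dashboard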


Viewing logs: check the system log at /var/log/messages.
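On systemd-based hosts, the kubelet's own log can also be followed directly from the journal:

journalctl -u kubelet -f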


Adding nodes after the kubeadm-generated token has expired

By default a token is valid for 24 hours; once it expires it can no longer be used. The fix is as follows.


  1. Generate a new token:
[root@c6v196 ~]# kubeadm token create
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --ttl 0)
44afbf.d86b6917cae63a27
[root@c6v196 ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
44afbf.d86b6917cae63a27   23h       2018-05-15T11:49:54+08:00   authentication,signing           system:bootstrappers:kubeadm:default-node-token
  2. Get the SHA-256 hash of the CA certificate:
[root@c6v196 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
7e628caad0d0d710ca6d95052263a1f54783f4bd053b68ef27298ecdeeac6243
  3. Join the node to the cluster:
[root@c821v230 kubeadm-1.8.6]# kubeadm join --token 44afbf.d86b6917cae63a27  192.168.16.196:6443  --discovery-token-ca-cert-hash sha256:7e628caad0d0d710ca6d95052263a1f54783f4bd053b68ef27298ecdeeac6243
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.05.0-ce. Max validated version: 17.03
[preflight] WARNING: hostname "c821v230" could not be reached
[preflight] WARNING: hostname "c821v230" lookup c821v230 on 192.168.1.11:53: no such host
[discovery] Trying to connect to API Server "192.168.16.196:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.16.196:6443"
[discovery] Requesting info from "https://192.168.16.196:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.16.196:6443"
[discovery] Successfully established connection with API Server "192.168.16.196:6443"
[bootstrap] Detected server version: v1.8.6
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)


Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.


Run 'kubectl get nodes' on the master to see this machine join.
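As the warning in step 1 notes, a non-expiring token can be created with --ttl 0; this avoids the 24-hour expiry, at the cost of leaving a permanent credential in the cluster:

kubeadm token create --ttl 0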

Errors

failed to check server version: Get https://192.168.21.194:6443/version: x509: certificate has expired or is not yet valid

The cause is that the clocks on the master and the node are out of sync; synchronize them with NTP.
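A minimal one-shot sync (assuming the ntpdate utility is installed; pool.ntp.org is a placeholder for your NTP server):

ntpdate -u pool.ntp.org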

How to remove a node from the cluster

To remove the node node2 from the cluster, run the following commands.

On the master node:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

On node2:

# wipe kubeadm state from the node
kubeadm reset
# remove the CNI network interfaces left behind by canal/flannel
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
# clear leftover CNI state on disk
rm -rf /var/lib/cni/
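Afterwards node2 should no longer appear in the node list on the master:

kubectl get nodes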

kubeadm fails when trying to set up a dev master; systemctl status kubelet shows:

root@Kylin:/home/kylin/cetc28/cetc28-k8s-arm64/ansible-kubeadm# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since 四 2018-10-25 19:03:16 CST; 5s ago
     Docs: http://kubernetes.io/docs/
  Process: 3959 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, s
 Main PID: 3959 (code=exited, status=1/FAILURE)
    Tasks: 0 (limit: 512)
   Memory: 0B
      CPU: 0
   CGroup: /system.slice/kubelet.service

Since Kubernetes 1.8 the kubelet refuses to start while swap is enabled. The problem is solved by setting --fail-swap-on=false in the kubelet's systemd drop-in: edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf so that KUBELET_SYSTEM_PODS_ARGS includes the flag:

Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"

Then reload systemd and restart the kubelet:
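systemctl daemon-reload
systemctl restart kubelet

A common alternative is to disable swap outright, which lets the default --fail-swap-on=true check pass (swapoff only lasts until reboot; also comment out the swap entry in /etc/fstab to make it permanent):

swapoff -a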

Then run kubeadm init again:

root@Kylin:/home/kylin/cetc28/cetc28-k8s-arm64/ansible-kubeadm# kubeadm init --skip-preflight-checks --token 397304.bf9e74fc6d472c39 --token-ttl 0 --kubernetes-version v1.8.6 --pod-network-cidr 10.244.0.0/16 --service-cidr 10.10.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.6
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 29.066400 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kylin as master by adding a label and a taint
[markmaster] Master kylin tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 397304.bf9e74fc6d472c39
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 397304.bf9e74fc6d472c39 172.17.65.118:6443 --discovery-token-ca-cert-hash sha256:bdd91a2a8779d0f3086c98bed3c526c7e14933c96f1da8217adbc9d179a977cb
