The previous article covered setting up a Kubernetes 1.5.5 environment on Ubuntu 16.04 with kubeadm (http://blog.csdn.net/ximenghappy/article/details/68944706).
This article describes setting up Kubernetes 1.6.1 on Ubuntu 16.04 with kubeadm. Installing 1.5.5 versus 1.6.1 with kubeadm differs in a few ways, mainly in how the environment variables are configured after a successful init and in how the flannel network add-on is installed.
The previous article gave a brief introduction to kubeadm, so it is not repeated here; let's get straight to the installation.
I prepared two Ubuntu 16.04 virtual machines, one as the master and one as a node.
With a default kubeadm install, the master node does not take part in Pod scheduling and carries no workload, i.e., no non-core-component Pods are created on it. This restriction can be lifted with the kubectl taint command, but more on that later.
The cluster topology of the two VMs is as follows:
Node name    IP address       CPU       Memory
ubuntu-01    192.168.11.74    2 cores   4GB
ubuntu-02    192.168.11.75    2 cores   4GB
The installation steps are as follows.
The commands below must be run on both nodes.
Add the apt key:
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Add the Kubernetes apt source:
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
Update the package index:
apt-get update
Install Docker
Note: this guide installs Docker from the distribution's docker.io package. If you need to install Docker beforehand or as a separate step, see:
https://docs.docker.com/engine/installation/linux/ubuntu/#install-docker
apt-get install -y docker.io
# Skip this step if Docker is already installed.
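To confirm that Docker is working before continuing (a quick sanity check of my own, not part of the original steps), you can run:
# docker version
# systemctl status docker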
Install the Kubernetes core components
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
# The Kubernetes core components installed here include kubelet, kubeadm, kubectl and kubernetes-cni
Once downloaded, the kube components start automatically. Under /lib/systemd/system we can see kubelet.service:
# cat /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
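To double-check that systemd has picked up and started the kubelet (assuming a standard systemd setup), you can run:
# systemctl status kubelet
Until kubeadm init writes /etc/kubernetes/kubelet.conf, the kubelet may keep restarting; at this stage that is expected.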
Note: all of the commands above must be executed on every node.
In theory, kubeadm's init and join commands are all it takes to build a cluster: init initializes the cluster on the master node. Unlike earlier deployment methods, the Kubernetes core components installed by kubeadm run as containers on the master node, so a set of images is needed.
Because of the GFW, Google's package repository (packages.cloud.google.com) and container registry (gcr.io) cannot be reached directly. There are two workarounds: edit the hosts file directly, or use a third-party source or a re-hosted container registry. Here we use the hosts-file approach.
Add the following lines to the hosts file:
# vim /etc/hosts
61.91.161.217 gcr.io
61.91.161.217 www.gcr.io
61.91.161.217 packages.cloud.google.com
The latest working Google hosts file can be obtained at: https://github.com/racaljk/hosts
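To verify that the entries take effect (a quick check of my own, not part of the original steps):
# getent hosts gcr.io
# getent hosts packages.cloud.google.com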
kubeadm pulls a number of core-component images from the gcr.io/google_containers repository, roughly the following:
gcr.io/google_containers/kube-proxy-amd64 v1.6.1 b56ed0c89180 9 days ago 109.2 MB
gcr.io/google_containers/kube-apiserver-amd64 v1.6.1 1f685ed29076 9 days ago 150.5 MB
gcr.io/google_containers/kube-scheduler-amd64 v1.6.1 acfe393e96ba 9 days ago 76.75 MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.6.1 591d6604f79b 9 days ago 132.7 MB
gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.1 fc5e302d8309 6 weeks ago 44.52 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.1 f8363dbf447b 6 weeks ago 52.36 MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.1 1091847716ec 6 weeks ago 44.85 MB
gcr.io/google_containers/etcd-amd64 3.0.17 243830dae7dd 6 weeks ago 168.9 MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 11 months ago 746.9 kB
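If kubeadm init stalls on image pulls, you can pre-pull the images by hand; a rough sketch, with the image names and tags taken from the list above:
for img in \
  kube-proxy-amd64:v1.6.1 \
  kube-apiserver-amd64:v1.6.1 \
  kube-scheduler-amd64:v1.6.1 \
  kube-controller-manager-amd64:v1.6.1 \
  k8s-dns-sidecar-amd64:1.14.1 \
  k8s-dns-kube-dns-amd64:1.14.1 \
  k8s-dns-dnsmasq-nanny-amd64:1.14.1 \
  etcd-amd64:3.0.17 \
  pause-amd64:3.0; do
  docker pull gcr.io/google_containers/$img   # pull each image listed above
done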
In the kubeadm documentation, installing a Pod network is a separate step; kubeadm init does not install a default Pod network. Here Flannel is used as the Pod network, and to use Flannel the kubeadm documentation requires giving init the option --pod-network-cidr=10.244.0.0/16. On a machine with multiple NICs you can also set --api-advertise-addresses= according to your situation; with a single NIC it can be omitted. I have not verified the multi-NIC case.
Initialize the Kubernetes master with kubeadm init.
Note that the way kubeadm init specifies the Kubernetes version differs between releases.
With kubeadm 1.5, run:
kubeadm init --use-kubernetes-version=v1.5.5 --pod-network-cidr=10.244.0.0/16
With kubeadm 1.6.1, run:
kubeadm init --kubernetes-version=v1.6.1 --pod-network-cidr=10.244.0.0/16
Run the kubeadm init command:
# kubeadm init --kubernetes-version=v1.6.1 --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.6.1
[tokens] Generated token: "3c43f1.9a42d02deda012ef"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 814.549686 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 4.502786 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 220.503078 seconds
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 724f8f.8185e58d36209e97 192.168.11.74:6443
Note: as the kubeadm init output above indicates, the following commands must be executed:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
If these commands are skipped, kubectl fails with the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The reason is that 1.6.1 starts in secure mode by default, serving on port 6443. You can see this in the join hint printed by kubeadm init (kubeadm join --token 724f8f.8185e58d36209e97 192.168.11.74:6443); this is another difference between 1.5.5 and 1.6.1.
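The export only lasts for the current shell session; to make it permanent, one option (my suggestion, not from the init output) is to append it to your shell profile:
# echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
# source $HOME/.bashrc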
After a successful init, and with the commands above executed, verify that the Kubernetes core components have all started properly.
Some of them run as host processes and some run as containers.
For the process-based ones, check with ps -ef:
#ps -ef|grep kube
root 1979 1962 1 Apr12 ? 00:16:08 kube-apiserver --service-account-key-file=/etc/kubernetes/pki/sa.pub --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --secure-port=6443 --insecure-port=0 --allow-privileged=true --requestheader-username-headers=X-Remote-User --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-group-headers=X-Remote-Group --requestheader-extra-headers-prefix=X-Remote-Extra- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --experimental-bootstrap-token-auth=true --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --service-cluster-ip-range=10.96.0.0/12 --client-ca-file=/etc/kubernetes/pki/ca.crt --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds --storage-backend=etcd3 --requestheader-allowed-names=front-proxy-client --authorization-mode=RBAC --advertise-address=192.168.111.107 --etcd-servers=http://127.0.0.1:2379
root 2165 2150 0 Apr12 ? 00:03:08 /usr/local/bin/kube-proxy --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf --cluster-cidr=10.244.0.0/16
root 3547 1 3 Apr12 ? 00:37:55 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt
root 4178 4162 0 Apr12 ? 00:00:49 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 4216 4200 0 Apr12 ? 00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done
root 4446 4430 0 Apr12 ? 00:01:01 /kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2
nobody 4557 4539 0 Apr12 ? 00:01:54 /sidecar --v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
root 9123 9108 0 Apr12 ? 00:02:38 kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.conf --address=127.0.0.1 --leader-elect=true
root 9162 9146 0 Apr12 ? 00:04:03 kube-controller-manager --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --leader-elect=true --insecure-experimental-approve-all-kubelet-csrs-for-group=system:bootstrappers --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --address=127.0.0.1 --use-service-account-credentials=true --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16 --horizontal-pod-autoscaler-use-rest-clients=true
The components running as containers can be viewed with docker ps:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
40c4a99a3d44 591d6604f79b "kube-controller-mana" 11 hours ago Up 11 hours k8s_kube-controller-manager_kube-controller-manager-liyun-3_kube-system_c3d48064499480c54e1c07442fca300e_1
b640859fad98 acfe393e96ba "kube-scheduler --kub" 11 hours ago Up 11 hours k8s_kube-scheduler_kube-scheduler-liyun-3_kube-system_af6197bc2561f80493b05379e887d202_1
de83819235ea gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:d33a91a5d65c223f410891001cd379ac734d036429e033865d700a4176e944b0 "/sidecar --v=2 --log" 18 hours ago Up 18 hours k8s_sidecar_kube-dns-3913472980-7gp5g_kube-system_8d28e46d-1f6d-11e7-9000-fa163eb0d754_0
e0fa5a6a2845 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:89c9a1d3cfbf370a9c1a949f39f92c1dc2dbe8c3e6cc1802b7f2b48e4dfe9a9e "/dnsmasq-nanny -v=2 " 18 hours ago Up 18 hours k8s_dnsmasq_kube-dns-3913472980-7gp5g_kube-system_8d28e46d-1f6d-11e7-9000-fa163eb0d754_0
7038db7ca372 gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:33914315e600dfb756e550828307dfa2b21fb6db24fe3fe495e33d1022f9245d "/kube-dns --domain=c" 18 hours ago Up 18 hours k8s_kubedns_kube-dns-3913472980-7gp5g_kube-system_8d28e46d-1f6d-11e7-9000-fa163eb0d754_0
0d3ef847fe3f gcr.io/google_containers/pause-amd64:3.0 "/pause" 18 hours ago Up 18 hours k8s_POD_kube-dns-3913472980-7gp5g_kube-system_8d28e46d-1f6d-11e7-9000-fa163eb0d754_0
3fa24c4a1a23 63cee19df39c "/bin/sh -c 'set -e -" 18 hours ago Up
...
...
Note that these core components are not running on the pod network (indeed, the pod network has not been created yet); they use the host network.
Check with the command: kubectl get pod --all-namespaces -o wide
# kubectl get pod --all-namespaces -o wide
kube-system etcd-ubuntu-01 1/1 Running 0 18h 192.168.111.107 ubuntu-01
kube-system kube-apiserver-ubuntu-01 1/1 Running 0 18h 192.168.11.74 ubuntu-01
kube-system kube-controller-manager-ubuntu-01 1/1 Running 1 18h 192.168.11.74 ubuntu-01
kube-system kube-dns-3913472980-7gp5g 0/3 ContainerCreating 0 18h 10.244.0.11 liyun-3
kube-system kube-proxy-g37kc 1/1 Running 0 18h 192.168.11.74 ubuntu-01
kube-system kube-proxy-jq099 1/1 Running 0 18h 192.168.11.74 ubuntu-01
kube-system kube-scheduler-ubuntu-01 1/1 Running 1 18h 192.168.11.74 ubuntu-01
...
...
After initializing the cluster, checking the component status reveals a problem.
Running kubectl get pod --all-namespaces -o wide, you will also notice that the kube-dns pod is stuck in the ContainerCreating state. Looking at the DNS pod's events shows errors like:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
22s 22s 1 {default-scheduler } Normal Scheduled Successfully assigned kube-dns-2924299975-0h6kt to ubuntu-01
16s 9s 2 {kubelet ubuntu-01} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-2924299975-0h6kt_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-2924299975-0h6kt_kube-system(725cc36f-14ec-11e7-bf45-fa163eeee269)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"
This error is caused by the missing network add-on, so we have to install one.
Kubernetes offers five network add-ons to choose from according to your needs. I use the Flannel network. Here, too, 1.5.5 and 1.6.1 differ: 1.6.1 adds RBAC, so the following two commands are required:
kubectl create -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel-rbac.yml
clusterrole "flannel" configured
clusterrolebinding "flannel" configured
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
Alternatively, download flannel from GitHub to the local machine and run the install commands from there:
kubectl create -f /root/flannel/Documentation/kube-flannel-rbac.yml
kubectl create -f /root/flannel/Documentation/kube-flannel.yml
If the RBAC manifest is not applied, the flannel installation fails with an error like:
E0404 08:42:23.017527 1 main.go:127] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-21rx2': the server does not allow access to the requested resource (get pods kube-flannel-ds-wlr92)
For more on this issue, see:
https://github.com/kubernetes/kubernetes/issues/44029
https://github.com/kubernetes/kubeadm/issues/212#issuecomment-290908868
Installing flannel requires pulling the flannel image, so the installation takes a while.
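You can watch the rollout until the flannel and kube-dns pods settle; the DaemonSet name below comes from the create output above, and -w streams updates:
# kubectl get daemonset kube-flannel-ds -n kube-system
# kubectl get pods --all-namespaces -w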
After a short wait, look at the cluster information on the master node again:
# ps aux|grep kube|grep flannel
root 4178 0.0 0.8 408932 32548 ? Ssl Apr12 0:49 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 4216 0.0 0.0 8096 1792 ? Ss Apr12 0:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done
Check the component status on each node again with kubectl get pod --all-namespaces -o wide; once the flannel image has been pulled, the kube-dns pod leaves ContainerCreating and all of the cluster's core components are running.
Test DNS:
# kubectl run curl --image=radial/busyboxplus:curl -i --tty
Waiting for pod default/curl-2421989462-3xx4j to be running, status is Pending, pod ready: false
Waiting for pod default/curl-2421989462-3xx4j to be running, status is Pending, pod ready: false
Waiting for pod default/curl-2421989462-3xx4j to be running, status is Pending, pod ready: false
Waiting for pod default/curl-2421989462-3xx4j to be running, status is Pending, pod ready: false
[ root@curl-2421989462-3xx4j:/ ]$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
# kubectl delete deploy curl
At this point you can publish microservices to the cluster, and you can also add new nodes with the kubeadm join command.
Join a node to the cluster with kubeadm join. Before running it, the common setup commands from step 3.1.1 must already have been executed on the node.
On the node, run the join with the token printed after a successful init on the master (note: make sure the token is correct; it can be retrieved from the init log):
# kubeadm join --token=724f8f.8185e58d36209e97 192.168.11.74:6443
Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.
The Kubernetes components visible on the node are as follows:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4e2039405fb8 63cee19df39c "/bin/sh -c 'set -e -" 18 hours ago Up 18 hours k8s_install-cni_kube-flannel-ds-cglqb_kube-system_65acef63-1f70-11e7-9000-fa163eb0d754_0
22d019688880 gcr.io/google_containers/kube-proxy-amd64@sha256:243f2120171330a26c2418a4367fb0f3cc3e92683b00d16e3cf8c7f92e25bf14 "/usr/local/bin/kube-" 18 hours ago Up 18 hours k8s_kube-proxy_kube-proxy-jq099_kube-system_65ad1853-1f70-11e7-9000-fa163eb0d754_0
a2d290a9d113 63cee19df39c "/opt/bin/flanneld --" 18 hours ago Up 18 hours k8s_kube-flannel_kube-flannel-ds-cglqb_kube-system_65acef63-1f70-11e7-9000-fa163eb0d754_0
a02ab7b54622 gcr.io/google_containers/pause-amd64:3.0 "/pause" 18 hours ago Up 18 hours k8s_POD_kube-proxy-jq099_kube-system_65ad1853-1f70-11e7-9000-fa163eb0d754_0
e2faebbaa4ec gcr.io/google_containers/pause-amd64:3.0 "/pause" 18 hours ago Up 18 hours k8s_POD_kube-flannel-ds-cglqb_kube-system_65acef63-1f70-11e7-9000-fa163eb0d754_0
The cluster is now built. To verify it, check the current cluster state on the master node:
# kubectl get nodes
NAME STATUS AGE
ubuntu01 Ready,master 22h
ubuntu02 Ready 22h
Seeing this status means the base environment is up. You can go on to verify that everything really works, for example by deploying services.
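A couple of extra sanity checks that can be run at this point (standard kubectl subcommands, not from the original write-up):
# kubectl cluster-info
# kubectl get componentstatuses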
The steps above yield a complete Kubernetes environment. Next, deploy the official e-commerce microservices sample application.
# kubectl create namespace sock-shop
# kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"
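The apply creates a dozen or so deployments; if you want to follow their progress before moving on, a watch such as this works (my addition):
# kubectl get pods -n sock-shop -w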
Then look at the service information assigned to the sample application:
# kubectl describe svc front-end -n sock-shop
Name: front-end
Namespace: sock-shop
Labels: name=front-end
Selector: name=front-end
Type: NodePort
IP: 10.97.162.95
Port: 80/TCP
NodePort: 30001/TCP
Endpoints: 10.244.1.7:8079
Session Affinity: None
No events.
After a few minutes, the required images are pulled and the containers the sample application needs are started. You can then view the application's pod details in the output of kubectl get pods -n sock-shop:
# kubectl get pods -n sock-shop
NAME READY STATUS RESTARTS AGE
carts-2925342755-hvwcc 1/1 Running 0 22h
carts-db-2797325130-vst02 1/1 Running 0 22h
catalogue-1279937814-tplnb 1/1 Running 0 22h
catalogue-db-2290683463-6bf7p 1/1 Running 0 22h
front-end-48666118-gk1qc 1/1 Running 0 22h
orders-2584752504-9qkm9 1/1 Running 0 22h
orders-db-3277638702-stcmc 1/1 Running 0 22h
payment-2411285232-zhq9v 1/1 Running 0 22h
queue-master-1271843919-wn0d5 1/1 Running 0 22h
rabbitmq-3472039365-5gftt 1/1 Running 0 22h
shipping-3204723698-jznnj 1/1 Running 0 22h
user-619013510-jgt9h 1/1 Running 0 22h
user-db-431019311-30nl4 1/1 Running 0 22h
From a machine that can reach the Kubernetes cluster, visit http://<node-ip>:<NodePort> in a browser. In the example above the port is 30001; it can be looked up with the kubectl describe command.
Visiting http://<node-ip>:30001 brings up the sock shop front page.
If a firewall is in place, make sure this port can be reached.
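Reachability of the NodePort can also be checked from the command line; using the master IP and port from this walkthrough:
# curl -I http://192.168.11.74:30001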
1. With a default kubeadm install, the master node does not take part in Pod scheduling and carries no workload, i.e., no non-core-component Pods are created on it. The kubectl taint command lifts this restriction:
#kubectl taint nodes ubuntu-01 dedicated-
node "ubuntu-01" tainted
2. To run kubectl on a node, copy the Kubernetes config file from the master node to that node first, then invoke kubectl with it:
# scp root@<master-ip>:/etc/kubernetes/admin.conf .
# kubectl --kubeconfig ./admin.conf get nodes
To remove a node from the cluster, refer to items 2 and 3 of this summary, then run:
# kubectl drain <node-name> --delete-local-data --force --ignore-daemonsets
# kubectl delete node <node-name>
3. kubeadm automatically checks whether the current environment contains leftovers from a previous run. If it does, they must be cleaned up before init can be executed again; kubeadm reset cleans the environment so you can start over.
# kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf]
4. To pin a specific Kubernetes version when initializing the cluster, use the following command (if no version is specified, the latest is used):
kubeadm init --kubernetes-version=v1.6.1 --pod-network-cidr=10.244.0.0/16
5. To inspect container details, the following commands can be used:
$ kubectl describe pod <pod-name>
$ kubectl logs <pod-name>
6. To get the token for joining a node, use kubeadm token list:
# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION
be7644.3ba2439ae9ba6529 <forever> authentication,signing The default bootstrap token generated by 'kubeadm init'.
References:
Installing Kubernetes on Linux with kubeadm — the official document that served as the main reference for this article.