Installing Kubernetes on the master with kubeadm
Add the Aliyun apt source (a mirror reachable from inside China), then install kubeadm:
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
apt-get update && apt-get install kubeadm
Create a kubeadm.yaml file, then run the installation:
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "stable-1.12.2"
kubeadm init --config kubeadm.yaml
Problems that came up during installation:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[ERROR SystemVerification]: missing cgroups: memory
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.12.2]
Solutions:
1. The error message is self-explanatory: disable the swap partition.
However, running only swapoff -a is not enough. From the session record, after swapoff -a the kubeadm init command would start but still kept failing with:

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

The kubelet logs showed that swap was in fact still the problem:

➜ kubernetes journalctl -xefu kubelet
11月 05 22:56:28 debian kubelet[7241]: F1105 22:56:28.609272 7241 server.go:262] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority /dev/sda9 partition 3905532 0 -1]
➜ kubernetes cat /proc/swaps
Filename    Type        Size     Used  Priority
/dev/sda9   partition   3905532  0     -1
➜ kubernetes

Commenting out the swap mount in /etc/fstab and rebooting made the installation succeed.
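The two steps above (turn swap off now, keep it off across reboots) can be combined; a minimal sketch, where the disable_swap function name is my own and the fstab path is a parameter so the edit can be tried on a copy first:

```shell
# Sketch: disable swap for the current boot and comment out swap entries
# in an fstab file so the change survives reboots.
disable_swap() {
    fstab="${1:-/etc/fstab}"
    swapoff -a 2>/dev/null || true   # immediate effect; needs root on a real system
    # comment out any uncommented line whose filesystem type column is "swap"
    sed -i '/[[:space:]]swap[[:space:]]/s/^[^#]/#&/' "$fstab"
}
```

Run disable_swap with no argument to edit /etc/fstab, or pass a copy of the file to preview what it would change.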
2. Enable the memory cgroup via a kernel boot parameter, then reboot:

echo GRUB_CMDLINE_LINUX=\"cgroup_enable=memory\" >> /etc/default/grub && update-grub && reboot
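After the reboot you can confirm the kernel picked the parameter up by reading /proc/cgroups; a small sketch (the "memory cgroup:" wording is mine):

```shell
# Report whether the running kernel has the memory cgroup enabled.
# /proc/cgroups columns are: subsys_name, hierarchy, num_cgroups, enabled.
state=$(awk '$1 == "memory" { print ($4 == 1) ? "enabled" : "disabled" }' /proc/cgroups)
echo "memory cgroup: ${state:-not present}"
```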
3. From a normal network inside China, images cannot be pulled from k8s.gcr.io, so pull them from docker.io instead and re-tag them with the names kubeadm expects:

docker pull mirrorgooglecontainers/kube-apiserver:v1.12.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.2
docker pull mirrorgooglecontainers/kube-proxy:v1.12.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.2
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag docker.io/coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.2
docker rmi mirrorgooglecontainers/kube-proxy:v1.12.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.2

Alternatively, configure a registry mirror (in my test the 163 mirror was actually slower than direct access), then restart the docker service:

➜ kubernetes cat /etc/docker/daemon.json
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
➜ kubernetes
A successful installation run:
➜ kubernetes kubeadm init --config kubeadm.yaml
I1205 23:08:15.852917 5188 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.12.2.txt": Get https://dl.k8s.io/release/stable-1.12.2.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 23:08:15.853144 5188 version.go:94] falling back to the local client version: v1.12.2
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [debian localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [debian localhost] and IPs [192.168.2.118 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [debian kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.118]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 48.078220 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node debian as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node debian as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "debian" as an annotation
[bootstraptoken] using token: x4p0vz.tdp1xxxx7uyerrrs
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.2.118:6443 --token x4p0vz.tdp1xxxx7uyerrrs --discovery-token-ca-cert-hash sha256:64cb13f7f004fe8dd3e6d0e246950f4cbdfa65e2a84f8988c3070abf8183b3e9

➜ kubernetes
Deploying a network plugin
After the installation succeeds, inspect the nodes with kubectl get nodes. Note that kubectl has to run as kubernetes-admin, so copy the admin config and set the KUBECONFIG environment variable first, otherwise kubectl get nodes fails:
➜ kubernetes kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
➜ kubernetes echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bashrc
➜ kubernetes source ~/.bashrc
➜ kubernetes kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
debian   NotReady   master   21m   v1.12.2
➜ kubernetes
The node shows NotReady because no network plugin has been deployed yet:
➜ kubernetes kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-4vjhf         0/1     Pending   0          24m
coredns-576cbf47c7-xzjk7         0/1     Pending   0          24m
etcd-debian                      1/1     Running   0          23m
kube-apiserver-debian            1/1     Running   0          23m
kube-controller-manager-debian   1/1     Running   0          23m
kube-proxy-5wb6k                 1/1     Running   0          24m
kube-scheduler-debian            1/1     Running   0          23m
➜ kubernetes
➜ kubernetes kubectl describe node debian
Name:               debian
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=debian
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 05 Dec 2018 23:09:19 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                     Message
  ----             ------  -----------------                 ------------------                ------                     -------
  OutOfDisk        False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasSufficientDisk   kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasSufficientMemory kubelet has sufficient memory available
  DiskPressure     False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasNoDiskPressure   kubelet has no disk pressure
  PIDPressure      False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasSufficientPID    kubelet has sufficient PID available
  Ready            False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletNotReady            runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized.
WARNING: CPU hardcapping unsupported
Addresses:
  InternalIP:  192.168.2.118
  Hostname:    debian
Capacity:
 cpu:                2
 ephemeral-storage:  4673664Ki
 hugepages-2Mi:      0
 memory:             5716924Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  4307248736
 hugepages-2Mi:      0
 memory:             5614524Ki
 pods:               110
System Info:
 Machine ID:                 4341bb45c5c84ad2827c173480039b5c
 System UUID:                05F887C4-A455-122E-8B14-8C736EA3DBDB
 Boot ID:                    ff68f27b-fba0-4048-a1cf-796dd013e025
 Kernel Version:             3.16.0-4-amd64
 OS Image:                   Debian GNU/Linux 8 (jessie)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.1
 Kubelet Version:            v1.12.2
 Kube-Proxy Version:         v1.12.2
Non-terminated Pods:         (5 in total)
  Namespace    Name                            CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                            ------------  ----------  ---------------  -------------
  kube-system  etcd-debian                     0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-apiserver-debian           250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-controller-manager-debian  200m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-proxy-5wb6k                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-scheduler-debian           100m (5%)     0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       550m (27%)  0 (0%)
  memory    0 (0%)      0 (0%)
Events:
  Type    Reason    Age   From             Message
  ----    ------    ----  ----             -------
  Normal  Starting  22m   kubelet, debian  Starting kubelet.
  Normal  NodeAllocatableEnforced  22m                kubelet, debian     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientPID
  Normal  Starting                 21m                kube-proxy, debian  Starting kube-proxy.
➜ kubernetes
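The weave-net pods in the listings that follow show that Weave Net was the plugin used here. My notes do not record the exact apply command; the one published in the Weave Net documentation of that era was:

```shell
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```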
After deploying the plugin, all pods eventually reach Running (the plugin takes a few minutes to start; in between you will see states such as ContainerCreating and CrashLoopBackOff):
➜ kubernetes kubectl get pods -n kube-system
NAME                             READY   STATUS              RESTARTS   AGE
coredns-576cbf47c7-4vjhf         0/1     Pending             0          25m
coredns-576cbf47c7-xzjk7         0/1     Pending             0          25m
etcd-debian                      1/1     Running             0          25m
kube-apiserver-debian            1/1     Running             0          25m
kube-controller-manager-debian   1/1     Running             0          25m
kube-proxy-5wb6k                 1/1     Running             0          25m
kube-scheduler-debian            1/1     Running             0          25m
weave-net-nj7bk                  0/2     ContainerCreating   0          21s
➜ kubernetes kubectl get pods -n kube-system
NAME                             READY   STATUS             RESTARTS   AGE
coredns-576cbf47c7-4vjhf         0/1     CrashLoopBackOff   2          27m
coredns-576cbf47c7-xzjk7         0/1     CrashLoopBackOff   2          27m
etcd-debian                      1/1     Running            0          27m
kube-apiserver-debian            1/1     Running            0          27m
kube-controller-manager-debian   1/1     Running            0          27m
kube-proxy-5wb6k                 1/1     Running            0          27m
kube-scheduler-debian            1/1     Running            0          27m
weave-net-nj7bk                  2/2     Running            0          2m32s
➜ kubernetes kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-4vjhf         1/1     Running   3          27m
coredns-576cbf47c7-xzjk7         1/1     Running   3          27m
etcd-debian                      1/1     Running   0          27m
kube-apiserver-debian            1/1     Running   0          27m
kube-controller-manager-debian   1/1     Running   0          27m
kube-proxy-5wb6k                 1/1     Running   0          27m
kube-scheduler-debian            1/1     Running   0          27m
weave-net-nj7bk                  2/2     Running   0          2m42s
➜ kubernetes
➜ kubernetes kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
debian   Ready    master   38m   v1.12.2
➜ kubernetes
Allowing the master to run Pods
By default, Kubernetes uses the Taint/Toleration mechanism to place a "taint" on certain nodes:
➜ kubernetes kubectl describe node debian | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
➜ kubernetes
Pods will then, by default, no longer run on the tainted node, unless:
1. The Pod explicitly declares that it tolerates the taint, by adding a tolerations field to the spec section of the Pod's YAML.
2. For a test cluster of only a few machines, the simplest option is to remove the taint:

➜ kubernetes kubectl taint nodes --all node-role.kubernetes.io/master-
node/debian untainted
➜ kubernetes kubectl describe node debian | grep Taints
Taints:             <none>
➜ kubernetes
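A minimal sketch of option 1, assuming a Pod that tolerates the master taint (the pod and container names here are hypothetical, not from the cluster above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo                  # hypothetical name
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: demo
    image: k8s.gcr.io/pause:3.1
```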
Adding a worker node
The master runs kubeadm/kubelet v1.12.2, but a plain apt-get install on the worker node pulled in v1.13 by default, which made joining the cluster fail. The mismatched packages had to be removed and the matching versions installed:
root@debian-vm:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
root@debian-vm:~# kubelet --version
Kubernetes v1.13.0
root@debian-vm:~# apt-get --purge remove kubeadm kubelet
root@debian-vm:~# apt-cache policy kubeadm
kubeadm:
  Installed: (none)
  Candidate: 1.13.0-00
  Version table:
     1.13.0-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
     1.12.3-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
     1.12.2-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
root@debian-vm:~# apt-get install kubeadm=1.12.2-00 kubelet=1.12.2-00
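To keep a later apt-get upgrade from silently pulling the packages back up to 1.13, the installed versions can be held (apt-mark is part of standard apt):

```shell
apt-mark hold kubeadm kubelet
```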
root@debian-vm:~# kubeadm join 192.168.2.118:6443 --token x4p0vz.tdp1xxxx7uyerrrs --discovery-token-ca-cert-hash sha256:64cb13f7f004fe8dd3e6d0e246950f4cbdfa65e2a84f8988c3070abf8183b3e9
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "192.168.2.118:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.118:6443"
[discovery] Requesting info from "https://192.168.2.118:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.118:6443"
[discovery] Successfully established connection with API Server "192.168.2.118:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "debian-vm" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

root@debian-vm:~#
https://github.com/kubernetes/kubernetes/issues/54914
https://github.com/kubernetes/kubeadm/issues/610
https://blog.csdn.net/acxlm/article/details/79069468