Environment
Master node: 192.168.72.3
Node: 192.168.72.4
Versions
Kubernetes v1.10.0 (including kubectl and kubelet)
kubeadm v1.10.0
Enable IPv4 forwarding:
vim /etc/sysctl.d/k8s.conf: add the line net.ipv4.ip_forward = 1 (create the file if it does not exist yet)
sysctl -p /etc/sysctl.d/k8s.conf
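If you prefer a non-interactive version of this step, a minimal sketch (assuming /etc/sysctl.d/k8s.conf does not exist yet and can simply be created):
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf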
Bridge netfilter settings on CentOS 7:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
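The two echo commands take effect immediately but do not survive a reboot. One way to persist them (not part of the original steps, so treat it as an optional sketch) is to append them to the same k8s.conf; the br_netfilter module must be loaded for these keys to exist:
modprobe br_netfilter
cat <<EOF >> /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf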
Master node
************************ Master node ***************************
------------------------ Pull the images ---------------------------
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-apiserver-amd64:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-controller-manager-amd64:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-scheduler-amd64:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-proxy-amd64:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-kube-dns-amd64:1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-sidecar-amd64:1.14.8
docker pull registry.cn-beijing.aliyuncs.com/zhoujun/etcd-amd64:3.1.12
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/flannel:v0.10.0-amd64
docker pull registry.cn-beijing.aliyuncs.com/zhoujun/pause-amd64:3.1
------------------------ Retag to the expected names ---------------------------
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag registry.cn-beijing.aliyuncs.com/zhoujun/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-beijing.aliyuncs.com/zhoujun/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
------------------------ Remove the redundant mirror images ---------------------------
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-apiserver-amd64:v1.10.0
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-controller-manager-amd64:v1.10.0
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-scheduler-amd64:v1.10.0
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-proxy-amd64:v1.10.0
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-kube-dns-amd64:1.14.8
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-sidecar-amd64:1.14.8
docker rmi registry.cn-beijing.aliyuncs.com/zhoujun/etcd-amd64:3.1.12
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/flannel:v0.10.0-amd64
docker rmi registry.cn-beijing.aliyuncs.com/zhoujun/pause-amd64:3.1
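Typing thirty pull/tag/rmi commands by hand is error-prone. The same work can be expressed as a small loop over source/target pairs; this is only a convenience sketch using the image names already listed above, and the same pattern works for the node-side list in the next section:
# read "source target" pairs, one per line: pull, retag, drop the mirror name
while read src dst; do
  docker pull $src
  docker tag  $src $dst
  docker rmi  $src
done <<'EOF'
registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
registry.cn-beijing.aliyuncs.com/zhoujun/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
registry.cn-hangzhou.aliyuncs.com/zjymxc/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
registry.cn-beijing.aliyuncs.com/zhoujun/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
EOF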
Node
********************* Node ***************************************
------------------------ Pull the images ---------------------------
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-proxy-amd64:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/flannel:v0.10.0-amd64
docker pull registry.cn-beijing.aliyuncs.com/zhoujun/pause-amd64:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/kubernetes-dashboard-amd64:v1.8.3
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/heapster-influxdb-amd64:v1.3.3
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/heapster-grafana-amd64:v4.4.3
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/heapster-amd64:v1.4.2
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-kube-dns-amd64:1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-sidecar-amd64:1.14.8
------------------------ Retag to the expected names ---------------------------
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-beijing.aliyuncs.com/zhoujun/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
------------------------ Remove the redundant mirror images ---------------------------
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/flannel:v0.10.0-amd64
docker rmi registry.cn-beijing.aliyuncs.com/zhoujun/pause-amd64:3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/kube-proxy-amd64:v1.10.0
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/kubernetes-dashboard-amd64:v1.8.3
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/heapster-influxdb-amd64:v1.3.3
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/heapster-grafana-amd64:v4.4.3
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/heapster-amd64:v1.4.2
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-kube-dns-amd64:1.14.8
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker rmi registry.cn-hangzhou.aliyuncs.com/zjymxc/k8s-dns-sidecar-amd64:1.14.8
Install Docker
yum install docker-ce; to install a specific version, use yum install docker-ce-17.06.0.ce. If the host has no internet access, download the RPM packages instead, or fetch the binaries from GitHub and install them manually.
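yum install docker-ce only works if a Docker CE yum repository is configured. A hedged sketch of one way to set it up (the Aliyun mirror URL is an assumption; any reachable docker-ce repo will do):
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-17.06.0.ce
systemctl enable docker && systemctl start docker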
Install kubeadm, kubectl and kubelet
Mind the installation order here. If you later uninstall kubelet, kubeadm is removed along with it by default.
yum install -y kubelet-1.10.0
yum install -y kubeadm-1.10.0
yum install -y kubectl-1.10.0
A pitfall: some articles and books say you also have to install the kubernetes-cni tool separately. That only applied to older k8s releases; with the current packages there is no need to install it yourself.
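The three yum install commands above likewise assume a Kubernetes yum repository is already configured. A minimal sketch using the Aliyun mirror (the baseurl is an assumption; substitute any repository you can reach), after which the yum install commands can be run in the order shown:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF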
Configure kubelet
Run docker info | grep Cgroup to see which cgroup driver Docker uses.
Edit the kubelet config: vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
This file is created in that directory when kubelet is installed.
Change the cgroup driver to cgroupfs so that it matches Docker:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Allow kubelet to run with swap enabled (this flag skips the swap check rather than turning swap off):
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
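A sketch of the whole check-and-edit in one pass (assuming the drop-in still carries the default --cgroup-driver=systemd argument and that its ExecStart line references KUBELET_EXTRA_ARGS, as it does in this package version):
docker info 2>/dev/null | grep -i cgroup        # e.g. "Cgroup Driver: cgroupfs"
# align kubelet's cgroup driver with Docker's
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# tolerate enabled swap
echo 'Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"' >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload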
Another pitfall:
If you look closely, kubelet starts fine right after installation, but once you change the config file and restart it, it fails with the following error:
server.go:218] unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Fix: as explained on GitHub, "You have to initialize the kubelet config before it will start successfully. That is typically done with kubeadm init or kubeadm join."
In other words, kubelet can only start successfully after kubeadm init has generated its config: kubelet needs the CA certificate files, and those are produced only by kubeadm's initialization. kubeadm init itself does not depend on kubelet having started first, so this error can safely be ignored at this stage.
Check the image versions
Once kubeadm has generated the control-plane manifests (this happens during kubeadm init), you will find etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests. Each of them pins an image version, and for kubeadm v1.10.0 those versions match the images pulled above. To run different image versions, edit these YAML files.
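Once those manifests exist, a quick way to confirm which image versions they reference (purely a convenience check):
grep 'image:' /etc/kubernetes/manifests/*.yaml
# each line should show a k8s.gcr.io image whose tag matches the versions pulled earlier
With the images in place, initialize the master: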
kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.72.3
If you hit the following error:
[ERROR Port-10250]: Port 10250 is in use :
Checking the port with lsof -i:PORT or netstat -tunlp | grep PORT shows that port 10250 is held by kubelet.service.
Solution: killing the process did not help; in the end a kubeadm reset cleared it: kubeadm reset.
A big pitfall here:
Some articles and books run the initialization as plain kubeadm init, or as kubeadm init --kubernetes-version=v1.10.0, but both are problematic.
1. If --kubernetes-version=v1.10.0 is omitted, kubeadm defaults to downloading the latest k8s release. The k8s version must match the installed kubeadm and kubelet versions, otherwise initialization fails; the flag can only be dropped when kubeadm, kubelet and k8s are all at the latest version anyway.
2. Add the --pod-network-cidr=10.244.0.0/16 flag (choose the CIDR to suit your network). Without --pod-network-cidr, the flannel pod fails after start-up with: failed to register network: failed to acquire lease: node "xxxxxx" pod cidr not assigned.
When kubeadm init finishes, keep the information it prints at the end; an example:
Join command for the other nodes: kubeadm join 192.168.72.3:6443 --token n0dta8.jce8oam6164lrlif --discovery-token-ca-cert-hash sha256:41001c9fa866b5c8d29186b2e0a1d289b454c1984106700579d567d8214d1064
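If this join line is lost, the token and CA hash can be recovered on the master. This is a hedged sketch based on the standard kubeadm and openssl commands, not something recorded in the original output:
kubeadm token list          # existing bootstrap tokens (they expire after 24 hours by default)
kubeadm token create        # mint a new token if the old one has expired
# recompute the value for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'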
Start kubelet
systemctl start kubelet.service
If it fails to start, inspect the logs with journalctl -xe or journalctl -xeu kubelet.
Copy the config file
After kubeadm init succeeds, copy the generated admin.conf into the $HOME directory:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
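export KUBECONFIG only lasts for the current shell session. An alternative layout that kubectl picks up by default (a common convention rather than part of the original steps):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config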
Check nodes and pods
After kubeadm init succeeds, the node and pod status look like this:
[root@master kubernetes]# kubectl get nodes
NAME      STATUS     AGE   VERSION
master    NotReady   1m    v1.10.0
[root@master zhoujun]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   etcd-master                      0/1     Pending   0          27m
kube-system   kube-apiserver-master            0/1     Pending   0          27m
kube-system   kube-controller-manager-master   0/1     Pending   0          27m
kube-system   kube-dns-86f4d74b45-rtgt2        0/3     Pending   0          28m
kube-system   kube-proxy-t4z9m                 0/1     Pending   0          28m
kube-system   kube-scheduler-master            0/1     Pending   0          27m
The node is not Ready and the pods have not started properly yet; the network add-on still needs to be installed.
Deploy the flannel network add-on
Download kube-flannel.yml from GitHub: https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
Then install it with kubectl create -f kube-flannel.yml. Note that some articles say kube-flannel-rbac.yml has to be applied first, but the latest kube-flannel.yml already contains the RBAC objects, so that step is no longer needed. Repeat the checks above: the node is now Ready and all pods have started successfully.
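A minimal sketch of fetching and applying the manifest from the shell (the raw.githubusercontent.com URL is assumed to serve the file behind the page linked above):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml
kubectl get pods -n kube-system -w    # watch until the kube-flannel and kube-dns pods are Running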
Install the packages
Install kubelet, kubeadm and kubectl, as above.
Obtain the CA credentials
Configure /etc/systemd/system/kubelet.service.d/10-kubeadm.conf the same way as above.
Copy the kubeconfig under $HOME (the admin.conf copied above) to the same location on the node.
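One way to copy the kubeconfig over (scp as root between the two hosts is an assumption about how they reach each other):
# run on the master; 192.168.72.4 is the node from the environment section
scp $HOME/admin.conf root@192.168.72.4:$HOME/
# then, on the node
export KUBECONFIG=$HOME/admin.conf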
Join the cluster
kubeadm join 192.168.72.3:6443 --token n0dta8.jce8oam6164lrlif --discovery-token-ca-cert-hash sha256:41001c9fa866b5c8d29186b2e0a1d289b454c1984106700579d567d8214d1064
A possible problem:
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
[ERROR Port-10250]: Port 10250 is in use
[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
If these errors show up, don't panic; the fix is simple: run kubeadm reset, then join again.
Output like the following means the join succeeded:
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.72.3:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.72.3:6443"
[discovery] Requesting info from "https://192.168.72.3:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.72.3:6443"
[discovery] Successfully established connection with API Server "192.168.72.3:6443"
Start kubelet
Finally, start the kubelet service; that completes the kubeadm deployment.
Verify
Right after the node joins, wait a minute or two.
[root@master kubernetes]# kubectl get node
NAME      STATUS    ROLES     AGE   VERSION
master    Ready     master    1h    v1.10.0
node1     Ready     <none>    12m   v1.10.0
Output like this means the whole installation succeeded.
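As an extra, optional check that workloads actually landed on both machines:
kubectl get pods --all-namespaces -o wide   # every pod should be Running, spread across master and node1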
The remaining pieces, the dashboard add-on and the heapster monitoring add-on, are optional installs.