Installing a Kubernetes 1.12 Cluster

Prepare three servers

Configure hosts

/etc/hosts

192.168.3.26 kubemaster
192.168.3.27 kube2
192.168.3.29 kube3
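
The same entries can be appended from the shell on each of the three nodes; a minimal sketch, assuming the IPs above:

cat >> /etc/hosts <<EOF
192.168.3.26 kubemaster
192.168.3.27 kube2
192.168.3.29 kube3
EOF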

Disable the firewall

# on CentOS
systemctl stop firewalld.service
service iptables stop

Disable SELinux

setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Disable swap

swapoff -a

Open /etc/fstab and comment out this line:

# /dev/mapper/centos-swap swap swap defaults 0 0
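
Instead of editing the file by hand, a one-line sketch that comments out any swap entry (assumes the default CentOS fstab layout):

sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab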

Enable br_netfilter

modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
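
The echo above does not survive a reboot. One way to make the module and the sysctl setting persistent:

# load br_netfilter at boot
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF

# apply the bridge settings from a sysctl drop-in
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system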

Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce-18.03.0.ce-1.el7.centos
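
If that exact build is not available in the mirror, the offered versions can be listed and the install verified afterwards:

# list the docker-ce builds the repo offers
yum list docker-ce --showduplicates | sort -r

# confirm the installed version
docker --version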

Install Kubernetes

Configure the yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the packages:

yum install -y kubelet kubeadm kubectl
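
The unpinned command installs whatever is newest in the repo. Since this walkthrough targets 1.12.2, pinning the versions may be safer (a sketch, assuming the 1.12.2 packages are present in the mirror):

yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2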

Reboot

sudo reboot

Start Docker and kubelet

systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet

Change the cgroup driver

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl daemon-reload && systemctl restart kubelet
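
The kubelet cgroup driver has to match Docker's. A quick check of both sides:

# Docker's driver (typically cgroupfs for this Docker build)
docker info 2>/dev/null | grep -i 'cgroup driver'

# the driver kubelet was told to use
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf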

Initialize the Kubernetes cluster

Add the following to /etc/sysconfig/kubelet:

KUBELET_EXTRA_ARGS=--fail-swap-on=false

Run on the master:

kubeadm init --apiserver-advertise-address=192.168.3.26 --pod-network-cidr=10.244.0.0/16
  • --apiserver-advertise-address : the IP address the API server advertises on (here the master's address, 192.168.3.26).

  • --pod-network-cidr : the IP range used for the pod network.

This command downloads the required Docker images. If the current network cannot reach Google, it fails and reports that the following images could not be pulled:

k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

The workaround is to pull these images from Docker Hub yourself:

# pull
docker pull mirrorgooglecontainers/kube-apiserver:v1.12.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.2
docker pull mirrorgooglecontainers/kube-proxy:v1.12.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.2

# retag
docker tag mirrorgooglecontainers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2 
docker tag mirrorgooglecontainers/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
docker tag mirrorgooglecontainers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2

# remove the now-redundant mirror tags
docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.2
docker rmi mirrorgooglecontainers/kube-proxy:v1.12.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.2
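
The pull/tag/rmi sequence can also be scripted in one loop; a sketch equivalent to the commands above:

#!/bin/bash
# mirror the control-plane images and retag them as k8s.gcr.io
images=(
  kube-apiserver:v1.12.2
  kube-controller-manager:v1.12.2
  kube-scheduler:v1.12.2
  kube-proxy:v1.12.2
  pause:3.1
  etcd:3.2.24
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
  docker rmi  "mirrorgooglecontainers/${img}"
done

# coredns lives under its own Docker Hub namespace
docker pull coredns/coredns:1.2.2
docker tag  coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker rmi  coredns/coredns:1.2.2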

Then re-run the kubeadm init command above; reset the node first:

kubeadm reset

While initialization is running, you can open another terminal and run the following command to watch the startup; if anything fails you will see the details:

journalctl -f -u kubelet.service

Once initialization succeeds, it prints something like the following:

# write this down; the worker nodes use this command to join later
kubeadm join 192.168.3.26:6443 --token mzjlct.y839akfzj27atbfc --discovery-token-ca-cert-hash sha256:b031db4760cd7da03965f0cd3db5c63602a61c729b1377639616c77b2cef4f73

Configure kubectl on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
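
A quick sanity check that kubectl can now reach the new API server:

kubectl cluster-info
kubectl config current-context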

Install the flannel pod network add-on (its default network is 10.244.0.0/16, matching the --pod-network-cidr used above):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check the nodes and pods:

kubectl get nodes

# output
NAME         STATUS     ROLES    AGE     VERSION
kubemaster   NotReady   master   9m14s   v1.12.2

kubectl get pods --all-namespaces

# output
NAMESPACE     NAME                                 READY   STATUS     RESTARTS   AGE
kube-system   coredns-576cbf47c7-7tx6v             0/1     Pending    0          9m3s
kube-system   coredns-576cbf47c7-sqlmd             0/1     Pending    0          9m3s
kube-system   etcd-kubemaster                      0/1     Pending    0          1s
kube-system   kube-controller-manager-kubemaster   0/1     Pending    0          0s
kube-system   kube-flannel-ds-amd64-8zlf5          0/1     Init:0/1   0          13s
kube-system   kube-proxy-4v99p                     1/1     Running    0          9m3s
kube-system   kube-scheduler-kubemaster            0/1     Pending    0          0s

Join worker nodes to the cluster

Run the join command printed earlier on both kube2 and kube3:

kubeadm join 192.168.3.26:6443 --token mzjlct.y839akfzj27atbfc --discovery-token-ca-cert-hash sha256:b031db4760cd7da03965f0cd3db5c63602a61c729b1377639616c77b2cef4f73

If you see an error like:

[discovery] Failed to request cluster info, will try again: [Get https://192.168.3.81:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.3.81:6443: connect: no route to host]

it means there is a network connectivity problem; disable the firewall (or open the required ports instead, as sketched below) and retry.
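
If disabling the firewall entirely is not acceptable, opening the kubeadm ports on the master is an alternative (6443 for the API server, 10250 for the kubelet):

firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload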

Check the nodes on the master again:

kubectl get nodes
# output
NAME         STATUS   ROLES    AGE     VERSION
kube2        Ready    <none>   5m20s   v1.12.2
kube3        Ready    <none>   37m     v1.12.2
kubemaster   Ready    master   75m     v1.12.2

Check the cluster status:

kubectl get cs

NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

A STATUS of Ready on every node means the cluster is up.

A worker node can also join the cluster with the following method:

# get a token on the master
$ kubeadm token list

# run the join command on the worker
$ kubeadm join --discovery-token-unsafe-skip-ca-verification --token=102952.1a7dd4cc8d1f4cc5 172.17.0.56:6443
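
Tokens expire after 24 hours by default. If the original join command has been lost or its token has expired, a fresh one can be generated on the master:

# print a complete join command with a new token
kubeadm token create --print-join-command

# or recompute the CA cert hash by hand
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'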
