Building a k8s Cluster with the Latest Version

Installation Notes

OS version:

cat /proc/version
# Linux version 3.10.0-862.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) ) #1 SMP Fri Apr 20 16:44:24 UTC 2018
rpm -q centos-release
# centos-release-7-5.1804.el7.centos.x86_64
cat /etc/redhat-release
# CentOS Linux release 7.5.1804 (Core)

Docker version:

docker --version
# Docker version 18.09.6, build 481bc77156

Kubernetes version:

kubelet --version
# Kubernetes v1.14.2

Installation Steps

1. Set up SSH mutual trust between the servers, e.g. as sketched below.
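
A minimal sketch of the SSH setup, assuming three machines with the hypothetical hostnames master, node1, and node2, each reachable as root:

# On each machine, generate a key pair (press Enter through the prompts)
ssh-keygen -t rsa
# Copy the public key to every other machine (hostnames are placeholders)
ssh-copy-id root@master
ssh-copy-id root@node1
ssh-copy-id root@node2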

2. Disable SELinux and the firewall [run on all machines]

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld

swapoff -a

setenforce 0
vi /etc/selinux/config
SELINUX=disabled
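
Note that swapoff -a only disables swap until the next reboot. A common way to make it permanent, sketched here on the assumption that the swap entries in /etc/fstab contain the word "swap", is to comment them out:

sed -i '/ swap / s/^/#/' /etc/fstab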

3. Install components [run on all machines]

(1) Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce
docker --version
# Docker version 18.09.6, build 481bc77156
systemctl start docker
systemctl status docker
systemctl enable docker
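
Optionally, Docker can be pointed at a registry mirror and switched to the systemd cgroup driver that the Kubernetes docs recommend when using Docker. A hedged sketch of /etc/docker/daemon.json (the mirror URL is just an example):

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker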

(2) Install kubelet, kubeadm, and kubectl

Configure the yum repository:

# The heredoc body below is the commonly used Aliyun mirror definition
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Then run the installation:

# SELinux was already disabled in step 2; repeating this is harmless
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
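
Because this guide targets v1.14.2 while yum installs the newest packages by default, it may be safer to pin the versions; a sketch, assuming the versioned packages exist in the configured repo:

yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2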

4. Pull the images [run on all machines]

# Pull the images
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.2
docker pull mirrorgooglecontainers/kube-proxy:v1.14.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

# Re-tag them with the names kubeadm expects
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
docker tag mirrorgooglecontainers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

# Remove the intermediate tags
docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.2
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.3.10
docker rmi coredns/coredns:1.3.1
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
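
The mirrorgooglecontainers pull/tag/rmi triples above can be collapsed into a small loop (flannel and coredns keep their one-off commands):

images=(
  kube-apiserver:v1.14.2
  kube-controller-manager:v1.14.2
  kube-scheduler:v1.14.2
  kube-proxy:v1.14.2
  pause:3.1
  etcd:3.3.10
)
for img in "${images[@]}"; do
  docker pull mirrorgooglecontainers/$img
  docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img
  docker rmi mirrorgooglecontainers/$img
done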

5. Set up the Master

(1) Initialize

kubeadm init --kubernetes-version=v1.14.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

kubernetes-version: the Kubernetes version to deploy (v1.14.2 here).

pod-network-cidr: the Pod network address range. The required value depends on the network add-on in use; this guide uses the classic flannel scheme, whose standard manifest expects 10.244.0.0/16.

service-cidr: the virtual IP range for Services; 10.96.0.0/12 is the kubeadm default.

If there are no problems, kubeadm prints its initialization output, which ends with the kubeadm join command saved in step (3) below.

(2) Set up .kube/config

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

(3) Save the kubeadm join line from the output; it will be run on the Node machines:

kubeadm join 10.255.73.26:6443 --token xfnfrl.4zlyx5ecu4t7n9ie \
    --discovery-token-ca-cert-hash sha256:c68bbf21a21439f8de92124337b4af04020f3332363e28522339933db813cc4b

(4) Configure kubectl

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG

(5) Install the Pod network

A Pod network is required for Pods to communicate with each other. Kubernetes supports many network schemes; here we again pick the classic flannel.

a. Create a file kube-flannel.yaml in any directory; see the linked manifest for its content.

b. First set the kernel parameter: sysctl net.bridge.bridge-nf-call-iptables=1 (a persistence sketch follows this list).

c. Apply the file: kubectl apply -f kube-flannel.yaml

d. Check that the Pod network is working with kubectl get pods --all-namespaces -o wide; it is healthy if every READY column shows 1/1:

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE              NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-2hwr4                   1/1     Running   0          7h44m   10.244.0.3     pjr-ofckv-73-26   <none>           <none>
kube-system   coredns-fb8b8dccf-nwqt9                   1/1     Running   0          7h44m   10.244.0.2     pjr-ofckv-73-26   <none>           <none>

e. Check the node status with kubectl get nodes:

NAME              STATUS   ROLES    AGE     VERSION
pjr-ofckv-73-26   Ready    master   7h47m   v1.14.2
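
The sysctl set in step b is lost on reboot; a sketch of persisting it via a drop-in file (the ip6tables key is added on the same assumption):

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system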

6. Add the Node machines

(1) Run the kubeadm join command printed by kubeadm init on the master node, i.e.

kubeadm join 10.255.73.26:6443 --token xfnfrl.4zlyx5ecu4t7n9ie \
    --discovery-token-ca-cert-hash sha256:c68bbf21a21439f8de92124337b4af04020f3332363e28522339933db813cc4b

If you did not save it when deploying the master node, you can recover the token with kubeadm token list; the IP is the master machine's IP and the port is 6443 (the API server default). A recovery sketch follows.
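
A sketch of reconstructing the full join command on the master: kubeadm can print a ready-to-use line, and the openssl pipeline below is the documented way to recompute the CA cert hash:

# Print a complete join command (this creates a fresh token)
kubeadm token create --print-join-command

# Recompute the --discovery-token-ca-cert-hash value manually
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'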

(2) Verify the node status with kubectl get nodes

The cluster is healthy when all nodes are in the Ready state:

NAME              STATUS   ROLES    AGE     VERSION
pjr-ofckv-73-24   Ready    <none>   47m     v1.14.2
pjr-ofckv-73-25   Ready    <none>   7h34m   v1.14.2
pjr-ofckv-73-26   Ready    master   7h55m   v1.14.2

(3) Check all Pod statuses with kubectl get pods --all-namespaces -o wide

Every component's READY column should be 1/1:

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE              NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-2hwr4                   1/1     Running   0          7h55m   10.244.0.3     pjr-ofckv-73-26   <none>           <none>
kube-system   coredns-fb8b8dccf-nwqt9                   1/1     Running   0          7h55m   10.244.0.2     pjr-ofckv-73-26   <none>           <none>
kube-system   etcd-pjr-ofckv-73-26                      1/1     Running   0          7h54m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>
kube-system   kube-apiserver-pjr-ofckv-73-26            1/1     Running   0          7h54m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>
kube-system   kube-controller-manager-pjr-ofckv-73-26   1/1     Running   0          7h54m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>
kube-system   kube-flannel-ds-amd64-9qhcl               1/1     Running   0          48m     10.255.73.24   pjr-ofckv-73-24   <none>           <none>
kube-system   kube-flannel-ds-amd64-xmrzz               1/1     Running   0          7h51m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>
kube-system   kube-flannel-ds-amd64-zqdzp               1/1     Running   0          7h34m   10.255.73.25   pjr-ofckv-73-25   <none>           <none>
kube-system   kube-proxy-kgcxj                          1/1     Running   0          7h34m   10.255.73.25   pjr-ofckv-73-25   <none>           <none>
kube-system   kube-proxy-rpn4z                          1/1     Running   0          7h55m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>
kube-system   kube-proxy-tm8df                          1/1     Running   0          48m     10.255.73.24   pjr-ofckv-73-24   <none>           <none>
kube-system   kube-scheduler-pjr-ofckv-73-26            1/1     Running   0          7h54m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>

7. Remove a node [note: I ran into errors during this procedure and am not yet sure the removal steps below are correct]

(1) On the master node, run:

kubectl drain pjr-ofckv-73-24 --delete-local-data --force --ignore-daemonsets
kubectl delete node pjr-ofckv-73-24

(2) On the node being removed, run:

kubeadm reset
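
kubeadm reset does not flush iptables or IPVS rules; the kubeadm docs suggest cleaning them up manually. A sketch (the ipvsadm line applies only if kube-proxy ran in IPVS mode):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# ipvsadm -C   # only if IPVS mode was used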

8. Install the dashboard on the master node

The dashboard version is v1.10.0.

(1) Pull the image kubernetes-dashboard-amd64:v1.10.0

docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0

(2) Install the dashboard

kubectl create -f kubernetes-dashboard.yaml

kubernetes-dashboard.yaml can be created in any directory; see the linked file for its full content.

Note the NodePort and hostPath settings; the officially provided manifest is also linked, so you can compare the differences.

(3) Check whether the dashboard Pod started normally; if so, the installation succeeded:

kubectl get pods --namespace=kube-system

On success the output shows READY 1/1 and STATUS Running:

NAME                                      READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-595d866bb8-n8bh7     1/1     Running   0          141m

If you instead see a status such as ContainerCreating, run kubectl describe pod kubernetes-dashboard-xxxxxxxx-yyyy --namespace=kube-system to find out why. In my case, it reported that the kubernetes-dashboard-amd64:v1.10.0 image was missing on the Node machines.

Also note that after fixing an error, re-running kubectl create -f kubernetes-dashboard.yaml complains that the resources already exist; clear them first with kubectl delete -f kubernetes-dashboard.yaml.

Finally, the directory /home/share/certs referenced in kubernetes-dashboard.yaml must be created in advance; I created it on both the master and the Node machines.

(4) Find the dashboard's externally exposed port

kubectl get service --namespace=kube-system

In the output below, 31234 is the externally exposed port, used when visiting the dashboard page:

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   3d23h
kubernetes-dashboard   NodePort    10.108.134.118   <none>        443:31234/TCP            150m

(5) Generate the private key and certificate signing request

Run on the master node; whenever you are asked to type or choose something, just press Enter:

openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr # press Enter at every prompt

(6) Generate the SSL certificate

openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

Then place the generated dashboard.key and dashboard.crt under /home/share/certs, the hostPath configured in the kubernetes-dashboard.yaml used above.
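
A minimal sketch of that copy step, run on each machine that needs the certs:

mkdir -p /home/share/certs
cp dashboard.key dashboard.crt /home/share/certs/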

(7) Create the dashboard user

kubectl create -f dashboard-user-role.yaml

See the linked dashboard-user-role.yaml for the file content.

(8) Get the login token; if you forget it, you can retrieve it at any time with the command below:

kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system

The output looks like this:

Name:         admin-token-rfc2l
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: 42eeeee9-802c-11e9-a88a-f0000aff491a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1yZmMybCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjQyZWVlZWU5LTgwMmMtMTFlOS1hODhhLWYwMDAwYWZmNDkxYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.gRK_RO2Nk24tRCLq9ekkWvL_hNOTKKxQB0FrJEAHASGEpNP9Ew9JHBwljA-jPBZiNDxheOURQJuypDvCLXdRqyAWM26QEeYKB8EdHxiZb7fcTazMnPnl7hbBsWOsuTonpD2gWQYaRFFmkJds-ta5UKvtGJiKeUUEAzBilNvRp60mws5L-KAPB0yFAtHWXyz682eVu_NjcEWH-1f_uZ-noXJJPqvz0XarmR1RenQtnMd3brKjhk02FUIQyD2l1s6hH6tHVm59LZ74jLPcXTlaUpEG6LE_vJHzktTsHdRmtKg6wDeq_blvGtT4vU8k92LFC-r2p3O2BJQ-jqfy1y-T6w

(9) Log in

Login URL: https://masterIp:31234/#!/settings?namespace=default (31234 is the NodePort from step (4))

Choose "Token" and log in with the token obtained above.

[Screenshot: login page]

[Screenshot: dashboard home page]

Installation Issues

  • The connection to the server localhost:8080 was refused - did you specify the right host or port?
    You will hit this when installing on the Node machines; the cause is that the node is missing the file /etc/kubernetes/admin.conf.
    Fix:

1. Copy the /etc/kubernetes/admin.conf file from the master node to /etc/kubernetes/admin.conf on the Node machine, e.g. as sketched below.
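
A sketch of the copy using scp, assuming a node IP from this guide (10.255.73.24) and root SSH access; run on the master:

scp /etc/kubernetes/admin.conf [email protected]:/etc/kubernetes/admin.conf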

2. Set the variable:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
  • The Node machines stay in NotReady state
    The first time I installed, I had not pulled the images on the Node machines. kubeadm join did add them to the cluster, but they stayed NotReady. Checking the logs with tail /var/log/messages revealed the reason: the images were missing, because the kubelet automatically pulls them from k8s.gcr.io, which is unreachable without a proxy.

References

利用Kubeadm部署 Kubernetes 1.13.1集群实践录 ("Notes on Deploying a Kubernetes 1.13.1 Cluster with Kubeadm")

