Installing a Kubernetes Test Cluster

一. Server preparation

Create three virtual machines with VMware, running CentOS 7.6:
192.168.11.138 master
192.168.11.145 node1
192.168.11.146 node2

1. Hostname resolution

[root@localhost ~]# vi /etc/hosts
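Concretely, the three mappings listed above can be appended to /etc/hosts on every node:

```shell
# Append the cluster hostnames to /etc/hosts on all three machines
cat <<EOF >> /etc/hosts
192.168.11.138 master
192.168.11.145 node1
192.168.11.146 node2
EOF
```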

2. System initialization

Because k8s requires the clocks on all nodes to be exactly in sync, first set up time synchronization on every node.

# Install the time-sync software
[root@localhost ~]# yum install -y chrony
# Start time synchronization
[root@localhost ~]# systemctl start chronyd
[root@localhost ~]# systemctl enable chronyd
# Disable the firewall
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
# Disable SELinux
[root@localhost ~]# vi /etc/selinux/config
# Find the parameter and set SELINUX=disabled

# Disable swap
[root@localhost ~]# vi /etc/fstab
# Comment out the following line
/dev/mapper/centos-swap swap                    swap    defaults        0 0
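The fstab edit only takes effect after a reboot; to turn swap off immediately for the running session as well:

```shell
# Disable all swap devices right away; the fstab edit makes it permanent
swapoff -a
# The Swap line should now show 0 total
free -m
```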

# Configure bridge filtering
[root@localhost ~]# vi /etc/sysctl.d/kubernetes.conf
# Create the file kubernetes.conf with the following settings
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
# Load the bridge filter module first, otherwise the net.bridge.* keys do not exist yet
[root@localhost ~]# modprobe br_netfilter
# Apply the configuration
[root@localhost ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
# Check that the module loaded
[root@localhost ~]# lsmod |grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
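Note that a module loaded with modprobe does not survive a reboot. One way to make br_netfilter load automatically at boot (an optional extra step, not part of the original walkthrough) is a modules-load.d entry:

```shell
# systemd loads every module listed under /etc/modules-load.d/ at boot
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
```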

# The service in k8s has two proxy modes: one based on iptables, one based on ipvs
# ipvs performs noticeably better, but using it requires loading the ipvs kernel modules manually
# Install the management tools
[root@localhost ~]# yum install ipset ipvsadm -y

# Write the modules to load into a script file; the suffix must be .modules
[root@localhost ~]# cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Make it executable
[root@localhost ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
# Run the script
[root@localhost ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
# Check that the modules loaded
[root@localhost ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack

3. Reboot the servers

[root@localhost ~]# reboot
# Check that SELinux is disabled
[root@master ~]# getenforce
Disabled
# Check that the swap partition is off
[root@master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           4061         113        3751          11         196        3708
Swap:             0           0           0

二. Install the k8s cluster

1. Docker

# Switch to the Aliyun mirror repo
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# List the available versions
[root@master ~]# yum list docker-ce --showduplicates
# Install a specific version; --setopt=obsoletes=0 stops yum from pulling the latest version instead
[root@master ~]# yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y
# Adjust the configuration
[root@master ~]# mkdir /etc/docker
[root@master ~]# cat <<EOF > /etc/docker/daemon.json
{
 "exec-opts": ["native.cgroupdriver=systemd"],
 "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF
# Restart and enable on boot
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
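After the restart it is worth confirming that Docker picked up the systemd cgroup driver, since the kubelet configuration below assumes it:

```shell
# Should print: Cgroup Driver: systemd
docker info | grep -i "cgroup driver"
```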

2. Kubernetes

# Add the mirror repo
[root@master ~]# vi /etc/yum.repos.d/kubernetes.repo
# With the following content
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# Install
[root@master ~]# yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y
# Edit the configuration to set the cgroup driver and proxy mode
[root@master ~]# vi /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
# Delete the existing KUBELET_EXTRA_ARGS= line

# Enable on boot; there is no need to start kubelet now, because kubeadm starts it when the cluster is initialized
[root@master ~]# systemctl enable kubelet

3. Pull the required images

# List the image versions; 1.17.17 is the version recommended for this environment
[root@master ~]# kubeadm config images list
I0804 13:49:19.439685   22165 version.go:251] remote version is much newer: v1.21.3; falling back to: stable-1.17
W0804 13:49:22.247896   22165 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0804 13:49:22.247928   22165 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.17
k8s.gcr.io/kube-controller-manager:v1.17.17
k8s.gcr.io/kube-scheduler:v1.17.17
k8s.gcr.io/kube-proxy:v1.17.17
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5

# The images are hosted abroad; without a proxy, pull them from a domestic mirror and retag them
[root@master ~]# images=(
 kube-apiserver:v1.17.17
 kube-controller-manager:v1.17.17
 kube-scheduler:v1.17.17
 kube-proxy:v1.17.17
 pause:3.1
 etcd:3.4.3-0
 coredns:1.6.5
)
[root@master ~]# for imageName in ${images[@]} ; do 
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName 
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
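When the loop finishes, all seven retagged images should be present locally:

```shell
# Lists the images now tagged under k8s.gcr.io
docker images | grep k8s.gcr.io
```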

4. Initialize the cluster

Run on the master:

[root@master ~]# kubeadm init \
 --kubernetes-version=v1.17.17 \
 --pod-network-cidr=10.244.0.0/16 \
 --service-cidr=10.96.0.0/12 \
 --apiserver-advertise-address=192.168.11.138

# On success you will see the following output; follow its instructions
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.138:6443 --token rtdb7l.og3lwhxbqisq0jbi \
    --discovery-token-ca-cert-hash sha256:987755122093887613b08c2b24c8593fb72c5d696459b266822c88c42bc34f28
# Run the commands from the output
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
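With the kubeconfig in place, kubectl can reach the API server; a quick sanity check:

```shell
# Prints the control-plane endpoint if the cluster is reachable
kubectl cluster-info
```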

a. Run on each worker node

# Find the following command in the master's output and run it on each node to join the cluster
kubeadm join 192.168.11.138:6443 --token rtdb7l.og3lwhxbqisq0jbi \
    --discovery-token-ca-cert-hash sha256:987755122093887613b08c2b24c8593fb72c5d696459b266822c88c42bc34f28
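The join token printed by kubeadm init expires after 24 hours by default; if it has expired by the time a node joins, a fresh join command can be generated on the master:

```shell
# Prints a complete `kubeadm join ...` command with a new token
kubeadm token create --print-join-command
```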

b. Check cluster status on the master

# List the nodes; the network add-on is not installed yet, so every node shows NotReady
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   12m    v1.17.4
node1    NotReady   <none>   2m6s   v1.17.4
node2    NotReady   <none>   7s     v1.17.4

c. Install the network add-on

Get the flannel manifest from:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The images it references are hosted abroad; without a proxy, replace the image registry quay.io with quay-mirror.qiniu.com.
Upload the yml file to /home/anson on the master, then run:

[root@master ~]# kubectl apply -f /home/anson/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
# Once it succeeds, every node turns Ready
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   56m   v1.17.4
node1    Ready    <none>   45m   v1.17.4
node2    Ready    <none>   43m   v1.17.4
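If a node stays NotReady after this, the flannel pods are the first thing to check:

```shell
# One kube-flannel pod should be Running on each node
kubectl get pods -n kube-system -o wide | grep flannel
```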

三. Test k8s

Create an nginx deployment with k8s and access it from a browser.

# Create an nginx deployment
[root@master ~]# kubectl create deployment nginx --image=nginx:1.14-alpine
deployment.apps/nginx created
# Expose port 80
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
# Check the pod
[root@master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6867cdf567-kshn8   1/1     Running   0          93s
# Check the service
[root@master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        64m
nginx        NodePort    10.109.61.48   <none>        80:31091/TCP   64s

From the 80:31091/TCP column above, the service's port 80 inside the cluster is mapped to NodePort 31091 on every node, so a browser can reach it at http://{node IP}:31091.
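The same check works from the command line on any machine that can reach a node:

```shell
# Should print 200 if nginx is serving through the NodePort
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.11.138:31091
```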


Dashboard installation

1. Apply the official manifest

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

This failed with a connection-refused error, most likely due to DNS pollution; pointing the hostname directly at an IP works around it.


[root@master ~]# vi /etc/hosts
# Add the following entry
199.232.68.133 raw.githubusercontent.com
# Then the apply succeeds
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Running kubectl get pods -A now shows two new pods:

kubernetes-dashboard   dashboard-metrics-scraper-894c58c65-8nd5g   1/1     Running   0          16m
kubernetes-dashboard   kubernetes-dashboard-555f8cc76f-kpsd4       1/1     Running   0          16m

2. Create an admin user

[root@master home]# mkdir dashboard && cd dashboard
[root@master dashboard]# vi dashboard-admin.yaml

The yml file contents are as follows:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply it to create the service account and grant it permissions:

[root@master dashboard]# kubectl apply -f dashboard-admin.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Get the admin token with the following command:

[root@master dashboard]# kubectl get secret -n kubernetes-dashboard $(kubectl get serviceaccount admin-user -n kubernetes-dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6InE5ZVNYV1ROdzNhcHcxaEhmQWx4WFBmR3U5dEFPY2Nmckh4OUNJVi13OFUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXZ0Y3RrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2ZWU5YzJhMS1kNTE4LTQ2NmEtOWY4Ni1iZDlhZmM4NzZhZjYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.dHQ44tyzpBmnvJ6mEB5fV3GmGKy--WP0QyFegTv7oO2fIaVTySqebmSfY1JRnLAQnJeYAzT5gbVhWW5iZ7m71Lq3q-YUof8F18MSG_CfMd71YGqFmMsvqlSsXK4BgdyRWDT8Zz1eMacebabGGxqg0Y4wt_lkUZy8s0xXSUuAKPS3dNZNxnF61ow2By5ve2iLhNg67ncVqy0tBHb9wdDsV98j_vMmxM2uDt_ui_Oevf2n8H3i2T_TBTtxR8diYBEYcWL_Gh5vcynk9nF4N2aF62MYoYUaF2I4vk2CgYnFe-Qi18pdtQshhHy0x276ShNLhrbDnY1rypTe3uFm06nHvQ

3. Start a proxy to access the dashboard

[root@master dashboard]# kubectl proxy
Starting to serve on 127.0.0.1:8001
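With the proxy running, the dashboard is served through the API server's proxy path. kubectl proxy binds to 127.0.0.1 by default, so this URL only works in a browser on the master itself:

```shell
# Dashboard URL behind kubectl proxy (open in a browser on the master)
echo "http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/"
```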

4. Pitfalls

After the previous step, the dashboard login page should in theory open in a browser at {IP:port}, but it refused to load, returning Forbidden.
kubectl get pod -A showed the pods running normally; after much digging I found the following fix.

a. Edit the kubernetes-dashboard service

[root@master dashboard]# kubectl edit svc -n kubernetes-dashboard

Find the service in the kubernetes-dashboard namespace and change type: ClusterIP to type: NodePort.
This command then shows the dashboard exposed on NodePort 30310:

[root@master dashboard]# kubectl get svc -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1      <none>        443/TCP                  23h
default                nginx                       NodePort    10.109.61.48   <none>        80:31091/TCP             22h
kube-system            kube-dns                    ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   23h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.98.97.30    <none>        8000/TCP                 4h6m
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.99.81.212   <none>        443:30310/TCP            4h6m

At this point https://192.168.11.138:30310 still failed to open in a browser, mainly because of a certificate problem.

b. Generate a new certificate

# List the existing secrets and find kubernetes-dashboard-certs
[root@master dashboard]# kubectl get secret -A
NAMESPACE              NAME                                             TYPE                                  DATA   AGE
default                default-token-g6c27                              
.
.
.
kubernetes-dashboard   kubernetes-dashboard-certs                       Opaque                                0      4h11m
kubernetes-dashboard   kubernetes-dashboard-csrf                        Opaque                                1      4h11m
kubernetes-dashboard   kubernetes-dashboard-key-holder                  Opaque                                2      4h11m
kubernetes-dashboard   kubernetes-dashboard-token-4wmkq                 kubernetes.io/service-account-token   3      4h11m

# Delete the kubernetes-dashboard-certs secret in the kubernetes-dashboard namespace
[root@master dashboard]# kubectl delete secret -n kubernetes-dashboard kubernetes-dashboard-certs
secret "kubernetes-dashboard-certs" deleted

# Create the new secret from the cluster's pki directory
[root@master dashboard]# kubectl create secret generic kubernetes-dashboard-certs --from-file=/etc/kubernetes/pki -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created

c. Restart the pods

Deleting the pods is enough, because k8s recreates them automatically:

[root@master dashboard]# kubectl delete pod -n kubernetes-dashboard --all
pod "dashboard-metrics-scraper-894c58c65-8nd5g" deleted
pod "kubernetes-dashboard-555f8cc76f-kpsd4" deleted
# Watch the pods restart; here the first is back up and the second is still starting
[root@master dashboard]# kubectl get pod -A
NAMESPACE              NAME                                        READY   STATUS              RESTARTS   AGE
default                nginx-6867cdf567-kshn8                      1/1     
.
.
.
kubernetes-dashboard   dashboard-metrics-scraper-894c58c65-4xl9g   1/1     Running             0          26s
kubernetes-dashboard   kubernetes-dashboard-555f8cc76f-t7l56       0/1     ContainerCreating   0          26s

d. Start the proxy

[root@master dashboard]# kubectl proxy --address='0.0.0.0' --accept-hosts='.*' &

Opening the address in a browser now shows the login screen; choose token login and paste the admin token obtained in step 2.



Ingress installation

1. Apply the official manifests

[root@master ingress-controller]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created

[root@master ingress-controller]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml
service/ingress-nginx created

# Check that the pod and service are up
[root@master ingress-controller]# kubectl get pod,svc -n ingress-nginx
NAME                                            READY   STATUS              RESTARTS   AGE
pod/nginx-ingress-controller-7f74f657bd-s9dqk   0/1     ContainerCreating   0          4m8s

NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx   NodePort   10.103.251.197   <none>        80:31492/TCP,443:32075/TCP   52s
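Once the controller pod is Running, a minimal Ingress can route a hostname to the nginx service created earlier. This is a sketch: the host nginx.test.local is made up and needs a hosts-file entry pointing it at a node IP, after which the site is reachable on the controller's NodePort 31492.

```shell
# Minimal Ingress (k8s 1.17 uses the networking.k8s.io/v1beta1 API) routing
# nginx.test.local to the nginx service on port 80
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.test.local
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
EOF
```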
