Installing Kubernetes and Docker

The solutions to the problems described here will not necessarily work for everyone; I worked them out gradually through trial and error.

1. Preparation

Environment:
3 CentOS 7 machines
4 GB of RAM each
kernel 4.4 or later
Permanently open the ports the cluster needs, for example:

firewall-cmd --zone=public --add-port=6443/tcp --permanent

Master node

[image: table of ports required on the master node]

Worker node

[image: table of ports required on worker nodes]

Disable the firewall on all nodes:

systemctl disable firewalld.service 
systemctl stop firewalld.service

Disable swap on all nodes

vi /etc/fstab
Comment out the swap line
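
Commenting out the fstab line only takes effect on the next boot; `swapoff -a` turns swap off immediately. A minimal sketch combining both (the FSTAB path and the sample entries are stand-ins for illustration; point FSTAB at /etc/fstab on a real node):

```shell
# Disable swap for the running session (no-op if already off)
swapoff -a 2>/dev/null || true

# Sketch of the fstab edit, demonstrated on a sample file;
# set FSTAB=/etc/fstab to apply it for real.
FSTAB=${FSTAB:-/tmp/fstab.demo}
printf '%s\n' 'UUID=abcd / xfs defaults 0 0' \
              '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"
# Prefix '#' to any uncommented line that mounts swap
sed -i '/ swap / s/^[^#]/#&/' "$FSTAB"
cat "$FSTAB"
```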

Disable SELinux (setenforce 0 takes effect immediately; the config edit makes it permanent)

setenforce 0
vi /etc/selinux/config
SELINUX=disabled

Configure iptables bridging (run sysctl -p afterwards to apply)

vi /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1

Set the hostname on each node (one command per machine):

hostnamectl --static set-hostname  k8s-master
hostnamectl --static set-hostname  k8s-worker-1
hostnamectl --static set-hostname  k8s-worker-2

Add hostname/IP mappings to /etc/hosts on all nodes

192.168.233.3 k8s-master
192.168.233.4 k8s-worker-1
192.168.233.5 k8s-worker-2
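
Appending these with a plain `echo >> /etc/hosts` adds duplicates every time it runs; a sketch of an idempotent version (HOSTS defaults to a demo path here so it is safe to try; set HOSTS=/etc/hosts on a real node):

```shell
# Append each mapping only if that exact line is not already present
HOSTS=${HOSTS:-/tmp/hosts.demo}
touch "$HOSTS"
while read -r entry; do
  grep -qxF "$entry" "$HOSTS" || printf '%s\n' "$entry" >> "$HOSTS"
done <<'EOF'
192.168.233.3 k8s-master
192.168.233.4 k8s-worker-1
192.168.233.5 k8s-worker-2
EOF
cat "$HOSTS"
```

Running it a second time leaves the file unchanged.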

Time synchronization

# yum install -y ntpdate
# crontab -e
0-59/10 * * * * /usr/sbin/ntpdate us.pool.ntp.org | logger -t NTP

# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# ntpdate us.pool.ntp.org
2. Installing Docker

Prerequisites

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker repository (Community Edition, free)

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker

yum -y install docker-ce

Create daemon.json

vi /etc/docker/daemon.json

Configure a domestic registry mirror

{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
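
One related note: kubeadm setups commonly fail later with a cgroup-driver mismatch, because kubelet expects the systemd cgroup driver while Docker defaults to cgroupfs. A sketch of a daemon.json that carries both the mirror above and the systemd driver (CONF is parameterized here for a dry run; the real path is /etc/docker/daemon.json):

```shell
# Write a daemon.json with the registry mirror plus the systemd cgroup driver
CONF=${CONF:-/tmp/daemon.json}
mkdir -p "$(dirname "$CONF")"
cat > "$CONF" <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat "$CONF"
```

After writing the real file, apply it with systemctl daemon-reload && systemctl restart docker.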

Enable Docker on boot

systemctl enable docker

Start Docker

systemctl start docker

Test Docker

docker run hello-world

If you hit "Unable to find image 'hello-world:latest' locally", don't worry; rerun the command and the pull should succeed (provided the registry mirror is configured).
Check that bridge traffic is passed to iptables (both files should contain 1)

cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat /proc/sys/net/bridge/bridge-nf-call-iptables

Restart Docker

sudo systemctl daemon-reload && sudo systemctl restart docker
3. Preparing to install Kubernetes

Configure the Kubernetes yum repository

vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Create k8s.conf to adjust the kernel network settings

vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

After editing, run sysctl --system to apply the settings.

4. Installing kubelet, kubeadm, and kubectl

Install with yum (either a pinned version or the latest; the rest of this article uses 1.21.2)

# Install a specific version
yum -y install kubeadm-1.13.0 kubelet-1.13.0 kubectl-1.13.0 kubernetes-cni-0.6.0 --disableexcludes=kubernetes

# Install the latest version
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Enable kubelet on boot

systemctl enable kubelet.service

Verify that the MAC address and product_uuid are unique on every node (the interface name ens160 may differ on your machines, e.g. eth0 or ens33)

cat /sys/class/net/ens160/address
cat /sys/class/dmi/id/product_uuid

Load the br_netfilter module

cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
  [ -x $file ] && $file
done
EOF

cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
chmod 755 /etc/sysconfig/modules/br_netfilter.modules

Initializing Kubernetes with kubeadm init

Running kubeadm init automatically pulls the required component images (kube-apiserver, etcd, and so on) from the image repository. If the download fails, fall back to the manual method below and re-tag the images.
If kubeadm init fails and you need to re-initialize, run kubeadm reset first to reset the environment.

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0 --apiserver-advertise-address <master-node-IP> --pod-network-cidr=10.244.0.0/16 --token-ttl 0

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version 1.21.2 --apiserver-advertise-address 192.168.32.137 --pod-network-cidr=10.244.0.0/16 --token-ttl 0

kubeadm init --image-repository registry.aliyuncs.com/google_containers  --kubernetes-version=1.21.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

If you run into the problem below

[image: kubeadm init error output]

Solution:
Following some references online, I modified /etc/yum.repos.d/kubernetes.repo and docker.service, changed the parameters after init, and then performed steps (1) and (2).

#vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

(1) Add Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16" to docker.service; it must go after Type=notify

# vi /usr/lib/systemd/system/docker.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

(2) Drop the apiserver-advertise-address parameter

kubeadm init --image-repository registry.aliyuncs.com/google_containers  --kubernetes-version=1.21.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

On success you'll see the output below. Follow the printed instructions, and copy and save the node-join command; everyone's join token is different, so use your own output.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.36.137:6443 --token x3n0p9.b30auojouc1dpkqt --discovery-token-ca-cert-hash sha256:810775682bb60963f44c099bd3bbeb9124f1b897257cc856495508d6ed2cee22
Fallback: if the images fail to download with the method above, pull the required components manually with Docker.

(1) First run kubeadm config images list to see which image versions are needed

[root@k8s-master k8s]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

(2) Pull the images. If a registry mirror is configured in Docker, you can try pulling the k8s.gcr.io names directly (the -arm64 suffix below is for ARM hosts; on x86_64 use the exact names printed by kubeadm config images list)

docker pull k8s.gcr.io/kube-apiserver-arm64:v1.21.2
docker pull k8s.gcr.io/kube-controller-manager-arm64:v1.21.2
docker pull k8s.gcr.io/kube-scheduler-arm64:v1.21.2
docker pull k8s.gcr.io/kube-proxy-arm64:v1.21.2
docker pull k8s.gcr.io/pause-arm64:3.4.1
docker pull k8s.gcr.io/etcd-arm64:3.4.13-0
docker pull k8s.gcr.io/coredns/coredns:v1.8.0

If that also fails, pull from the Aliyun proxy registry registry.aliyuncs.com/google_containers, then use docker tag to re-tag the downloaded images with the names kubeadm expects.
(1) Pull the images manually

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2
docker pull  registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.2
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.2
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.2
docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.0

(2) Check the pulled images with docker images

[image: docker images output listing the pulled components]

(3) Re-tag, for example:

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2  k8s.gcr.io/kube-apiserver:v1.21.2

(4) Remove the old tag

docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2
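
Re-tagging and removing the images one by one is tedious; the whole list can be looped over. A sketch, assuming the v1.21.2 list above (the to_official helper is introduced here for illustration, and DOCKER falls back to echo for a dry run when docker is not on the PATH):

```shell
# Map a mirror image reference to the k8s.gcr.io name kubeadm expects
to_official() { echo "k8s.gcr.io/${1##*/}"; }

DOCKER=${DOCKER:-docker}
command -v "$DOCKER" >/dev/null 2>&1 || DOCKER=echo   # dry-run fallback

MIRROR=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.21.2 kube-controller-manager:v1.21.2 \
           kube-scheduler:v1.21.2 kube-proxy:v1.21.2 \
           pause:3.4.1 etcd:3.4.13-0; do
  src="$MIRROR/$img"
  "$DOCKER" tag "$src" "$(to_official "$src")"
  "$DOCKER" rmi "$src"
done
# coredns sits under an extra path segment in the official name
"$DOCKER" tag registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
```
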
Follow the printed instructions so a regular user can run kubectl
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Regenerating the token and the CA certificate hash

Check whether the token has expired

kubeadm token list

If it has expired, generate a new one (a token is valid for 24 hours by default; kubeadm token create --print-join-command prints a ready-to-use join command)

kubeadm token create

Recompute the sha256 hash of the CA certificate. My CA certificate is in /etc/kubernetes/pki; it is generated automatically during init.

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
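
This pipeline can be sanity-checked without a cluster by running it on a throwaway certificate; a valid result is always a 64-character hex digest, which is what goes after sha256: in the join command (the demo cert below is generated purely for illustration; on a real master the input is /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway self-signed certificate, then run the same hash pipeline on it
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
        -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/demo.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"
```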

Joining worker nodes to the cluster

To join a node, copy the join command returned by kubeadm init to the node and run it as-is (don't change the IP address in the command; I changed it and the node couldn't join). If join hangs, append --v=5 to see verbose error logs.

kubeadm join 192.168.36.137:6443 --token x3n0p9.b30auojouc1dpkqt --discovery-token-ca-cert-hash sha256:810775682bb60963f44c099bd3bbeb9124f1b897257cc856495508d6ed2cee22

If a worker node reports "connection refused" on port 8080, run

# scp master:/etc/kubernetes/admin.conf root@node:/home
# mv /home/admin.conf /etc/kubernetes
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

Run the join command again; the following output means it succeeded

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Install flannel
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
Check node status with kubectl get nodes
[root@k8s-master ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master     Ready      control-plane,master   18h     v1.21.2
k8s-worker-1   NotReady   <none>                 7m32s   v1.21.2
k8s-worker-2   NotReady   <none>                 105s    v1.21.2
Master NotReady

Check whether the flannel network plugin is installed correctly.

Node NotReady

If the nodes are healthy, the pods should all be in Running status:

kubectl get pods -n kube-system -o wide

[image: pod list showing flannel pods not Running]
Running journalctl -f -u kubelet showed:
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

Check whether the following files exist; if not, create the directories (mkdir -p) and the files

vi /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

vi  /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

Finally, run

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

Then check the nodes on the master again; they should all be Ready

[root@k8s-master share]# kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master     Ready    control-plane,master   23h     v1.21.2
k8s-worker-1   Ready    <none>                 4h36m   v1.21.2
k8s-worker-2   Ready    <none>                 4h30m   v1.21.2

The parameters to kubeadm init mean the following

# Initialize the Control-plane/Master node
kubeadm init \
    --apiserver-advertise-address 0.0.0.0 \
    # The IP address the API server advertises it is listening on;
    # "0.0.0.0" means use the default network interface's address.
    # It must be an internal IP, never a public one; with multiple NICs,
    # use this option to pick a specific interface.
    --apiserver-bind-port 6443 \
    # Port the API server binds to; default 6443
    --cert-dir /etc/kubernetes/pki \
    # Path where certificates are saved and stored; default "/etc/kubernetes/pki"
    --control-plane-endpoint kuber4s.api \
    # A stable IP address or DNS name for the control plane;
    # here kuber4s.api is already resolved to the local IP in /etc/hosts
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    # Registry to pull control-plane images from; default "k8s.gcr.io".
    # Google is blocked here, so a domestic mirror is used.
    --kubernetes-version 1.17.3 \
    # Specific Kubernetes version for the control plane; default "stable-1"
    --node-name master01 \
    # Node name; defaults to the hostname if omitted
    --pod-network-cidr 10.10.0.0/16 \
    # IP address range for pods
    --service-cidr 10.20.0.0/16 \
    # Virtual IP address range for Services
    --service-dns-domain cluster.local \
    # Alternative domain for Services; default "cluster.local"
    --upload-certs
    # Upload the control-plane certificates to the kubeadm-certs Secret
Prepare the cfssl certificate tool (this part is unfinished; don't follow it yet)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Self-sign a CA: create the JSON files

cat  ca-config.json 
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
cat  ca-csr.json 
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
