K8s Cluster Installation

Installing a Kubernetes cluster with kubeadm

Because of limited machine resources I only built one master and one worker node. The master has 2C/4G and the node 1C/2G; I recommend allocating at least this much.

IP plan

#Prepare two virtual machines with the following IPs:
k8s-master01 192.168.147.10
k8s-node01  192.168.147.11
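
For reference, a minimal static-IP sketch for the master (assuming the NIC is named ens33 and a VMware NAT gateway at 192.168.147.2; adjust to your environment):
vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static        #use a static address
ONBOOT=yes              #bring the interface up at boot
IPADDR=192.168.147.10   #use 192.168.147.11 on the node
NETMASK=255.255.255.0
GATEWAY=192.168.147.2   #assumed gateway, adjust to yours
DNS1=114.114.114.114
#restart the network to apply
systemctl restart network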

Set the hostnames (on both machines)

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01

Configure hostname resolution on each node

vim /etc/hosts
 192.168.147.10 k8s-master01
 192.168.147.11 k8s-node01
#copy to the other machine
scp /etc/hosts [email protected]:/etc/

Upgrade the kernel on each node to 4.4 (optional; it does not have to be exactly 4.4)

The stock CentOS 7.x kernel has known bugs that make Docker and Kubernetes unstable, so upgrading the kernel via the ELRepo repository is recommended.

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
#After installation, check that the corresponding kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again
yum --enablerepo=elrepo-kernel install -y kernel-lt
#Set the new kernel as the default boot entry (the string must exactly match a menuentry in grub.cfg; see the check below)
grub2-set-default 'CentOS Linux (4.4.208-1.el7.elrepo.x86_64) 7 (Core)'
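
Before rebooting you can double-check the entry name and the default (the exact menuentry string may differ on your machine):
#list the kernel menu entries known to grub
awk -F\' '/^menuentry/{print $2}' /boot/grub2/grub.cfg
#confirm the saved default entry
grub2-editenv list
#reboot and verify the running kernel
reboot
uname -r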

Set up NTP time sync (and set the time zone)

#Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
#Keep the hardware clock (RTC) in UTC
timedatectl set-local-rtc 0
#Restart the services that depend on the time
systemctl restart rsyslog
systemctl restart crond
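
The commands above only set the time zone; to actually sync the clock, a minimal sketch (assuming the Aliyun NTP server ntp1.aliyun.com is reachable; ntpdate is installed in the base dependency step further below):
#sync the clock once
ntpdate ntp1.aliyun.com
#re-sync every 30 minutes via cron
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1") | crontab -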

Disable the firewall (run on each node)

 systemctl stop firewalld && systemctl disable firewalld

Install iptables-services and flush the rules (run on each node)

yum install -y iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

Disable SELinux and swap (run on each node)

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config 
swapoff  -a && sed -i '/ swap / s/^\(.*\)$/#\1/g'  /etc/fstab
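
A quick check that both changes took effect (SELinux reports Permissive until the next reboot, Disabled afterwards; swap usage should show 0):
getenforce
free -m | grep -i swap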

Adjust kernel parameters on each node: create /etc/sysctl.d/k8s.conf with the following content (run on each node)

[root@k8s-master01 ~]# vim /etc/sysctl.d/k8s.conf
#The commented-out parameters are optional
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
#net.ipv4.tcp_tw_recycle=0
#vm.swappiness=0 #do not use swap unless the system hits OOM
#vm.overcommit_memory=1 #do not check whether enough physical memory is available
#vm.panic_on_oom=0 #do not panic on OOM; let the OOM killer handle it
#fs.inotify.max_user_instances=8192
#fs.inotify.max_user_watches=1048576
#fs.file-max=52706963
#fs.nr_open=52706963
#net.ipv6.conf.all.disable_ipv6=1
#net.netfilter.nf_conntrack_max=2310720

#Apply the changes
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

Prerequisites for enabling IPVS in kube-proxy (run on each node)

#Load the required kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Configure rsyslog and systemd journald (optional)

mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat  > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
#Persist logs to disk
Storage=persistent

#Compress old logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

#Use at most 10G of disk space
SystemMaxUse=10G

#Limit a single log file to 200M
SystemMaxFileSize=200M

#Keep logs for two weeks
MaxRetentionSec=2week

#Do not forward logs to syslog
ForwardToSyslog=no
EOF
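
#Restart journald so the new settings take effect
systemctl restart systemd-journald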

Install base dependency packages on each node

yum install  -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

Install Docker

#Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
#Configure the Aliyun docker-ce repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#Install Docker 18.09
yum install -y --setopt=obsoletes=0  docker-ce-18.09.7-3.el7 &&
systemctl start docker  && systemctl enable docker

Configure the daemon and change the Docker cgroup driver to systemd

#Create or edit /etc/docker/daemon.json:
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d
#Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
#Check the cgroup driver
docker info | grep Cgroup
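#If daemon.json took effect, the output should include a line like:
Cgroup Driver: systemd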

Install kubeadm (on each node)

#Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

#Install kubeadm, kubectl and kubelet on each node
yum install  -y kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1

#Enable kubelet at boot
systemctl enable kubelet

Keep swap from being used: add the following to /etc/sysctl.d/k8s.conf (swap itself was already turned off above)

vm.swappiness=0

Edit /etc/sysconfig/kubelet and add the following:

KUBELET_EXTRA_ARGS=--fail-swap-on=false

Initialize the master node

[root@k8s-master01 ~]# kubeadm config print init-defaults > k8s.yaml

#Modify the following fields
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.147.10
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master

---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1

networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12

The complete k8s.yaml configuration file is as follows:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
#Change this to the local IP
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.147.10
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
#Modify the networking section
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
#Append the following
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Run the initialization (on the master node)

This step needs access to k8s.gcr.io, which usually requires a proxy from inside China; alternatively you can pull the images from a mirror yourself and retag them (see the sketch after the image list below). Every node needs these images.

#List the images Kubernetes needs
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
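
#If the master cannot reach k8s.gcr.io directly, the images can be pulled from a mirror and retagged. A sketch assuming the Aliyun mirror registry.aliyuncs.com/google_containers carries these tags (verify availability yourself):
for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 kube-scheduler:v1.15.1 \
           kube-proxy:v1.15.1 pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  docker pull registry.aliyuncs.com/google_containers/${img}   #assumed mirror path
  docker tag  registry.aliyuncs.com/google_containers/${img} k8s.gcr.io/${img}
  docker rmi  registry.aliyuncs.com/google_containers/${img}
done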
  
#Initialize the master node
kubeadm init --config k8s.yaml --ignore-preflight-errors=Swap
The initialization output is as follows:
Flag --experimental-upload-certs has been deprecated, use --upload-certs instead
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.147.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.147.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.147.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.505473 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
2f58f33136ee33f08c13e4c7c4dc9b2306c7a610393598d63a17e650d17ef394
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.147.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:b19010fc584d5aa981c6ddfe928365b2b735b880bba23e8d148181864e8aa518

A brief explanation of the initialization output:
  • [kubelet-start] writes the kubelet configuration file "/var/lib/kubelet/config.yaml"
  • [certs] generates the various certificates
  • [kubeconfig] generates the kubeconfig files
  • [control-plane] creates static pods for the apiserver, controller-manager and scheduler from the yaml files in /etc/kubernetes/manifests
  • [bootstraptoken] generates the bootstrap token; note it down, it is needed later when adding nodes with kubeadm join
Follow the hints in the initialization output
#The following commands configure kubectl access to the cluster for a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If cluster initialization runs into problems, clean up with the following commands and try again:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

Install the Pod network

#Organize the generated files
mkdir -p  install-k8s/core
mv k8s.yaml install-k8s/core
mkdir -p install-k8s/plugin/flannel

#Download the flannel manifest
cd install-k8s/plugin/flannel && wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

#Install the flannel network add-on
kubectl create -f kube-flannel.yml

###Note that the flannel image referenced in kube-flannel.yml is v0.11.0 (quay.io/coreos/flannel:v0.11.0-amd64)
The content of kube-flannel.yml is as follows:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
Check flannel and the status of each node:
#Check the cluster status and confirm every component is healthy
[root@k8s-master01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   7d21h   v1.15.1

#Use kubectl get pod --all-namespaces -o wide and make sure all Pods are in the Running state
[root@k8s-master01 flannel]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-frtwp               1/1     Running   0          7d22h
coredns-5c98db65d4-lfjhk               1/1     Running   0          7d22h
etcd-k8s-master01                      1/1     Running   1          7d22h
kube-apiserver-k8s-master01            1/1     Running   1          7d22h
kube-controller-manager-k8s-master01   1/1     Running   1          7d22h
kube-flannel-ds-amd64-vkpkl            1/1     Running   0          8m4s
kube-proxy-tw7gg                       1/1     Running   1          7d22h
kube-scheduler-k8s-master01            1/1     Running   1          7d22h

Test whether cluster DNS works

kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.

#Inside the container, run nslookup kubernetes.default and confirm name resolution works:
nslookup kubernetes.default
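#With the service CIDR used above, the output should look roughly like this (cluster DNS at 10.96.0.10):
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local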

Add worker nodes to the cluster (run on the node)

kubeadm join 192.168.147.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:793eddfe0e85419c7419bd31c70976973af4c9a6a5196500965b4f99b3e859e5 
Confirm the node has joined the cluster
#Check the nodes in the cluster
[root@k8s-master01 flannel]# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   Ready      master   10m     v1.15.1
k8s-node01     NotReady   <none>   3m48s   v1.15.1
k8s-node02     NotReady   <none>   3m41s   v1.15.1

#Use kubectl get pod --all-namespaces -o wide and wait until all Pods reach the Running state
[root@k8s-master01 flannel]# kubectl get pod -n kube-system -o wide
NAME                                   READY   STATUS                  RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-d967f               1/1     Running                 0          14m     10.244.0.3       k8s-master01   <none>           <none>
coredns-5c98db65d4-mbn25               1/1     Running                 0          14m     10.244.0.2       k8s-master01   <none>           <none>
etcd-k8s-master01                      1/1     Running                 0          12m     192.168.147.10   k8s-master01   <none>           <none>
kube-apiserver-k8s-master01            1/1     Running                 0          13m     192.168.147.10   k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01   1/1     Running                 0          13m     192.168.147.10   k8s-master01   <none>           <none>
kube-flannel-ds-amd64-bxtdw            1/1     Running                 0          9m47s   192.168.147.10   k8s-master01   <none>           <none>
kube-flannel-ds-amd64-dzjkb            0/1     Init:ImagePullBackOff   0          7m51s   192.168.147.12   k8s-node02     <none>           <none>
kube-flannel-ds-amd64-gkfjb            0/1     Init:0/1                0          7m58s   192.168.147.11   k8s-node01     <none>           <none>
kube-proxy-schq4                       1/1     Running                 0          7m58s   192.168.147.11   k8s-node01     <none>           <none>
kube-proxy-tnpj6                       1/1     Running                 0          7m51s   192.168.147.12   k8s-node02     <none>           <none>
kube-proxy-zjz54                       1/1     Running                 0          14m     192.168.147.10   k8s-master01   <none>           <none>
kube-scheduler-k8s-master01            1/1     Running                 0          13m     192.168.147.10   k8s-master01   <none>           <none>

#Finally make sure the STATUS is Ready
[root@k8s-master01 flannel]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   21m   v1.15.1
k8s-node01     Ready    <none>   15m   v1.15.1

Enable IPVS for kube-proxy

1.#On the master node, edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs"
kubectl edit cm kube-proxy -n kube-system

2.#Then restart the kube-proxy pods on every node
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

3.#Check the status
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-7fsrg                1/1     Running   0          3s
kube-proxy-k8vhm                1/1     Running   0          9s

4.#If the log prints Using ipvs Proxier, IPVS mode is enabled. If it still shows Using iptables Proxier, as in the capture below, the change has not taken effect; re-check the ConfigMap edit and the ipvs kernel modules (see also the ipvsadm check after the log).
kubectl logs kube-proxy-7fsrg  -n kube-system
I0312 02:51:53.116592       1 server_others.go:143] Using iptables Proxier.
I0312 02:51:53.119015       1 server.go:534] Version: v1.15.1
I0312 02:51:53.136963       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0312 02:51:53.139824       1 config.go:96] Starting endpoints config controller
I0312 02:51:53.140029       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config c
I0312 02:51:53.141018       1 config.go:187] Starting service config controller
I0312 02:51:53.141119       1 controller_utils.go:1029] Waiting for caches to sync for service config con
I0312 02:51:53.241097       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0312 02:51:53.241544       1 controller_utils.go:1036] Caches are synced for service config controlle
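
#As an extra check (a sketch assuming ipvsadm was installed in the dependency step above), list the IPVS rules; once IPVS is active you should see virtual servers for the service cluster IPs
ipvsadm -Ln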

How to remove a node from the cluster

#Run on the master node
kubectl drain k8s-node01 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node01

#Run on node01
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

#Run on master01
kubectl delete node k8s-node01

Rejoin a node to the cluster

#By default a token is valid for 24 hours; if it has expired, generate a new one with:
[root@k8s-master01 k8s]# kubeadm token create
oewjls.kbz3btorladih7us

#If you cannot find the --discovery-token-ca-cert-hash value, generate it with the following command (or use the one-line alternative shown after the join command below):
[root@master] ~$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
71c2e931404612cf052d9e4b588fd033f9b3190424f17a1725f892959c9b0929

#Join from the node
kubeadm join 192.168.147.10:6443 --token oewjls.kbz3btorladih7us --discovery-token-ca-cert-hash sha256:71c2e931404612cf052d9e4b588fd033f9b3190424f17a1725f892959c9b0929
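
#Alternatively, a single command prints a complete join command with a fresh token and the CA cert hash:
kubeadm token create --print-join-command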

At this point a basic Kubernetes cluster is installed and fully usable.

Troubleshooting

1.#If coredns keeps restarting after initialization, the iptables rules may be in a bad state
##The error looks like this:
kubectl logs -f coredns-5c98db65d4-dq76z -n kube-system
E0402 02:51:05.199703       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
E0402 02:51:05.199703       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-5c98db65d4-dq76z.unknownuser.log.ERROR.20200402-025105.1: no such file or directory
#Fix:
iptables -F
iptables -Z
systemctl restart kubelet
systemctl restart docker
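#After the restarts, the coredns pods should return to Running:
kubectl get pod -n kube-system | grep coredns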
