kubernetes

1. Introduction to Kubernetes

Table of Contents

  • 1. Introduction to Kubernetes
    • 1.2 Overview of Kubernetes
    • 1.3 Kubernetes Components
    • 1.4 Kubernetes Concepts
  • 2. Quick Kubernetes Deployment
      • 2.1 Installation Requirements
      • 2.2 Deployment Steps
      • Troubleshooting

Application deployment has gone through three main eras:

  • Traditional deployment: in the early days of the internet, applications were deployed directly on physical machines.

    Pros: simple; no other technology involved.

    Cons: resource usage boundaries cannot be defined for applications, computing resources are hard to allocate sensibly, and programs easily interfere with one another.

  • Virtualized deployment: multiple virtual machines run on one physical machine, each VM being an isolated environment.

    Pros: programs no longer interfere with each other, and a degree of security is provided.

    Cons: each VM carries a full operating system, wasting part of the resources.

  • Containerized deployment: similar to virtualization, but containers share the host operating system.

    Pros:

    each container gets its own filesystem, CPU, memory, process space, and so on;

    everything an application needs to run is packaged inside the container and decoupled from the underlying infrastructure;

    containerized applications can be deployed across cloud providers and across Linux distributions.


Containerized deployment brings a lot of convenience, but it also raises new problems, for example:

  • When a container crashes, how do we immediately start another container to take its place?
  • When concurrent traffic grows, how do we scale the number of containers horizontally?

These container-management problems are collectively called container orchestration problems. Several orchestration tools emerged to solve them:

  • Swarm: Docker's own container orchestration tool
  • Mesos: an Apache tool for unified resource management, used together with Marathon
  • Kubernetes: Google's open-source container orchestration tool

1.2 Overview of Kubernetes


Kubernetes is a leading distributed-architecture solution built on container technology. It is an open-source version of Borg, the system Google kept as a closely guarded secret for more than a decade. Its first version was released in September 2014, and the first stable release followed in July 2015.

In essence, Kubernetes is a cluster of servers: it runs specific programs on each node of the cluster to manage the containers on that node. Its goal is to automate resource management, and it mainly provides the following features (a short command sketch follows the list):

  • Self-healing: when a container crashes, a new container can be started within about a second
  • Elastic scaling: the number of running containers in the cluster can be adjusted automatically as needed
  • Service discovery: a service can find the services it depends on automatically
  • Load balancing: if a service runs multiple containers, requests are load-balanced across them automatically
  • Version rollback: if a newly released version turns out to be broken, it can be rolled back to the previous version immediately
  • Storage orchestration: storage volumes can be created automatically according to a container's needs
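As a taste of elastic scaling and version rollback, here is what they look like on a running cluster. A minimal sketch; the deployment name "web" is hypothetical:

//Elastic scaling: scale the (hypothetical) deployment "web" out to 5 replicas
[root@k8s-master ~]# kubectl scale deployment web --replicas=5

//Version rollback: roll "web" back to its previous version
[root@k8s-master ~]# kubectl rollout undo deployment/web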

1.3 Kubernetes Components

A Kubernetes cluster consists of control-plane nodes (master) and worker nodes (node), with a different set of components installed on each kind of node.

master: the control plane of the cluster, responsible for cluster decisions (management)

ApiServer: the single entry point for resource operations; it receives user commands and provides authentication, authorization, API registration and discovery, and related mechanisms

Scheduler: responsible for cluster resource scheduling; it places Pods onto the appropriate node according to the configured scheduling policy

ControllerManager: responsible for maintaining cluster state, e.g. orchestrating deployments, detecting failures, auto-scaling, and rolling updates

Etcd: responsible for storing information about the cluster's various resource objects

node: the data plane of the cluster, responsible for providing the runtime environment for containers (the workers)

Kubelet: responsible for the container lifecycle, i.e. creating, updating, and destroying containers by controlling Docker

KubeProxy: responsible for service discovery and load balancing inside the cluster

Docker: responsible for the various container operations on the node
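Once a cluster is running, most of these components can be observed directly: the control-plane components run as pods in the kube-system namespace, while kubelet runs as an ordinary system service on every node:

//ApiServer, Scheduler, ControllerManager, Etcd, and KubeProxy all appear as pods here
[root@k8s-master ~]# kubectl get pods -n kube-system

//kubelet is a systemd service, not a pod
[root@k8s-master ~]# systemctl status kubelet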

1.4 Kubernetes Concepts

Master: a cluster control node; every cluster needs at least one master to manage it

Node: a workload node; the master assigns containers to these worker nodes, where Docker then runs them

Pod: the smallest unit Kubernetes controls; containers always run inside pods, and one pod can hold one or more containers

Controller: controllers implement pod management, e.g. starting pods, stopping pods, scaling the number of pods, and so on

Service: the unified entry point through which a group of pods is exposed; behind it, a Service tracks multiple pods of the same kind

Label: labels classify pods; pods of the same kind carry the same label

NameSpace: namespaces isolate the environments pods run in
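A minimal manifest can tie several of these concepts together: a Pod carrying a Label, living in a NameSpace, and a Service that selects pods of the same kind by that label. This is only an illustrative sketch; the names web and web-svc are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: web                  # hypothetical example name
  namespace: default         # NameSpace: isolates the pod's runtime environment
  labels:
    app: web                 # Label: classifies the pod
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc              # hypothetical example name
  namespace: default
spec:
  selector:
    app: web                 # the Service finds pods of this kind by their label
  ports:
  - port: 80
    targetPort: 80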

2. Quick Kubernetes Deployment

2.1 Installation Requirements

Before starting, the machines used to deploy the Kubernetes cluster must meet the following requirements (a quick verification sketch follows the table):

  • At least 2 machines, running CentOS 7 or later
  • Hardware: 2 GB or more RAM, 2 or more CPUs, 20 GB or more disk
  • Network connectivity between all machines in the cluster
  • Outbound internet access, needed for pulling images
  • Swap disabled

Hostname     IP address       Operating system
k8s-master   192.168.10.101   redhat8
k8s-node1    192.168.10.102   redhat8
k8s-node2    192.168.10.103   Centos8
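Each machine can be checked against these requirements with a few standard commands; a quick sketch:

//Number of CPUs (should be 2 or more)
[root@k8s-master ~]# nproc

//Total memory in MB (should be roughly 2048 or more); the Swap row should read 0 once swap is disabled
[root@k8s-master ~]# free -m

//Available disk space
[root@k8s-master ~]# df -h /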

2.2 Deployment Steps

Operations on the master:

Step 1: Disable the firewall, SELinux, and swap

[root@k8s-master ~]# systemctl disable --now firewalld
[root@k8s-master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# vim /etc/fstab 
#/dev/mapper/rhel-swap			//comment out or delete the swap line
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1790         213        1345           8         231        1424
Swap:             0           0           0
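Instead of editing /etc/fstab interactively, the swap entry can also be commented out with a single sed; an equivalent alternative:

//Comment out every line mentioning swap in /etc/fstab
[root@k8s-master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab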

Step 2: Add hosts entries on the master

[root@k8s-master ~]# vim /etc/hosts
...(omitted)
192.168.10.101 k8s-master
192.168.10.102 k8s-node1
192.168.10.103 k8s-node2
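The same entries can also be appended without opening an editor; an equivalent sketch:

[root@k8s-master ~]# cat >> /etc/hosts <<EOF
192.168.10.101 k8s-master
192.168.10.102 k8s-node1
192.168.10.103 k8s-node2
EOF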

Step 3: Install the container runtime

//Download the containerd release bundle
[root@k8s-master ~]# wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-1.6.8-linux-amd64.tar.gz

//Unpack the archive
[root@k8s-master ~]# tar xzvf cri-containerd-1.6.8-linux-amd64.tar.gz -C /

//Create the configuration directory
[root@k8s-master ~]# mkdir -p /etc/containerd

//Generate the default configuration file
[root@k8s-master ~]# containerd config default > /etc/containerd/config.toml

//k8s.gcr.io is hard to reach from mainland China, so switch to an Aliyun mirror
[root@k8s-master ~]# sed -i 's/k8s.gcr.io/registry.cn-beijing.aliyuncs.com\/abcdocker/' /etc/containerd/config.toml

//Configure systemd as the cgroup driver for containers
[root@k8s-master ~]# sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/' /etc/containerd/config.toml

//Start the containerd service
[root@k8s-master ~]# systemctl enable --now containerd

//Check the client and server versions
[root@k8s-master ~]# ctr version
Client:
  Version:  v1.6.8
  Revision: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
  Go version: go1.17.13

Server:
  Version:  v1.6.8
  Revision: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
  UUID: 1e826a84-05e7-44fd-83f4-f2923c4d9e35
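The cri-containerd bundle also ships the crictl tool, which can confirm that the CRI endpoint kubelet will use is reachable. A sketch, assuming containerd's default socket path:

//Query the runtime version through the CRI socket
[root@k8s-master ~]# crictl --runtime-endpoint unix:///run/containerd/containerd.sock version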

Step 4: Add the Kubernetes Aliyun YUM repository

[root@k8s-master ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Step 5: Install kubeadm, kubelet, and kubectl

Note: node1 and node2 do not need kubectl; install only kubeadm and kubelet on them.

[root@k8s-master ~]# dnf -y install kubelet kubeadm kubectl --disableexcludes=kubernetes

[root@k8s-master ~]# systemctl enable --now kubelet
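//Note: kubelet will keep restarting until kubeadm init (or kubeadm join) supplies its configuration; this is expected at this point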

Step 6: Create the kernel module configuration file

//Create the module configuration file and add the following two lines:
[root@k8s-master ~]# vim /etc/modules-load.d/k8s.conf
overlay
br_netfilter

Step 7: Load the two modules with modprobe

[root@k8s-master ~]# modprobe overlay
[root@k8s-master ~]# modprobe br_netfilter
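Whether both modules are actually loaded can be verified with lsmod:

[root@k8s-master ~]# lsmod | grep -e overlay -e br_netfilter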

Step 8: Let bridged IPv4 traffic pass through the iptables chains

[root@k8s-master ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

//Apply the configuration
[root@k8s-master ~]# sysctl --system
...(omitted)
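The effective kernel parameters can be double-checked at any time:

[root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward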

Step 9: With the preparation above complete, initialize the k8s cluster

Note: keep the join token printed at the end of the output; it is needed later when the nodes are added to the cluster.

If it gets lost, it can be regenerated on the master with the following command:

kubeadm token create --print-join-command --ttl=0    (--ttl=0 makes the token never expire; without this flag it expires after 24 hours by default)

[root@k8s-master ~]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=192.168.0.0/16

...(omitted)

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.101:6443 --token u0idqz.bxcjzwnptdxvsxst \
	--discovery-token-ca-cert-hash sha256:677546556cbef398e98cefdff0a92add604a7cfccfe387a4ac31e68a48df990e 
	

Step 10: Run the following three commands, in order:

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
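kubectl now works against the new cluster, but the master will report NotReady until a Pod network plugin is installed (step 11):

//STATUS stays NotReady until the CNI plugin is running
[root@k8s-master ~]# kubectl get nodes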

Step 11: Install the Flannel Pod network add-on
The manifest is taken from https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml

[root@k8s-master ~]# vi kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
          
          
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
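The flannel pods take a moment to pull images and start; they can be watched until every one reports Running:

//One kube-flannel-ds pod per node should reach Running
[root@k8s-master ~]# kubectl get pods -n kube-flannel -w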

Check the node status on the master; Ready means the node is ready:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   33m   v1.26.3

Operations on the nodes:

Step 12: Join the nodes to the cluster

On every node, perform steps 1 through 5 above (in practice, the kernel module and sysctl settings from steps 6 through 8 are needed on the nodes as well), then run the join command printed at the end of step 9:

[root@k8s-node1 ~]# kubeadm join 192.168.10.101:6443 --token u0idqz.bxcjzwnptdxvsxst \
> --discovery-token-ca-cert-hash sha256:677546556cbef398e98cefdff0a92add604a7cfccfe387a4ac31e68a48df990e 

...(omitted)

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

//List all namespaces
[root@k8s-master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   16m
kube-flannel      Active   3m57s
kube-node-lease   Active   16m
kube-public       Active   16m
kube-system       Active   16m

//Check the current pod status
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS              RESTARTS   AGE
coredns-c676cc86f-gnkzv              0/1     Pending             0          16m
coredns-c676cc86f-wrjc7              0/1     Pending             0          16m
etcd-k8s-master                      1/1     Running             0          16m
kube-apiserver-k8s-master            1/1     Running             0          16m
kube-controller-manager-k8s-master   1/1     Running             0          16m
kube-proxy-g58zx                     0/1     Running             0          114s
kube-proxy-k475n                     1/1     Running             0          16m
kube-proxy-m478s                     0/1     Running             0          2m1s
kube-scheduler-k8s-master            1/1     Running   

//Show pod details, including IP address and the node each pod runs on
[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
NAME                                 READY   STATUS              RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
coredns-c676cc86f-gnkzv              0/1     Pending             0          17m     <none>           <none>       <none>           <none>
coredns-c676cc86f-wrjc7              0/1     Pending             0          17m     <none>           <none>       <none>           <none>
etcd-k8s-master                      1/1     Running             0          17m     192.168.10.101   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running             0          17m     192.168.10.101   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running             0          17m     192.168.10.101   k8s-master   <none>           <none>
kube-proxy-g58zx                     0/1     Running             0          2m30s   192.168.10.103   k8s-node2    <none>           <none>
kube-proxy-k475n                     1/1     Running             0          17m     192.168.10.101   k8s-master   <none>           <none>
kube-proxy-m478s                     0/1     Running             0          2m37s   192.168.10.102   k8s-node1    <none>           <none>
kube-scheduler-k8s-master            1/1     Running             0          17m     192.168.10.101   k8s-master   <none>           <none>

Troubleshooting

Problem:

After node2 is added, its status stays NotReady

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES           AGE   VERSION
k8s-master   Ready      control-plane   33m   v1.26.3
k8s-node1    Ready      <none>          40m   v1.26.3
k8s-node2    NotReady   <none>          45m   v1.26.3
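One way to locate the failing pods on node2 is to filter the kube-system pods by node; a sketch:

//List the kube-system pods scheduled on k8s-node2
[root@k8s-master ~]# kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=k8s-node2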

//Describe the failing kube-proxy pod running on node2
[root@k8s-master ~]# kubectl describe pod kube-proxy-6sb24 -n kube-system
Name:                 kube-proxy-6sb24
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      kube-proxy
Node:                 k8s-node2/192.168.10.103

...(omitted)

Events:
  Type     Reason                  Age                    From     Message
  ----     ------                  ----                   ----     -------
  Warning  FailedCreatePodSandBox  2m17s (x1373 over 5h)  kubelet  (combined from similar events): 
  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: 
  failed to create shim task: OCI runtime create failed: 
  unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9478d22c49ebe22c41bf7be74ffa3dc6a7e1a864f79b0eb36a0f81c270f33d7b/log.json: 
  no such file or directory): runc did not terminate successfully: 
  exit status 127: unknown

Solution:

Exit status 127 from runc usually means a required executable or shared library is missing; here the libseccomp library was absent on node2. Installing it lets the pending pods recover on their own:

[root@k8s-node2 ~]# dnf -y install libseccomp-devel
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   46m   v1.26.3
k8s-node1    Ready    <none>          53m   v1.26.3
k8s-node2    Ready    <none>          58m   v1.26.3
