k8s Installation and Deployment (recently verified, hands-on walkthrough)

k8s deployment guide

This document summarizes my k8s deployment, recording the experience and pitfalls from half a month of learning and deploying k8s.

  1. Requirements. Before starting, the machines for the Kubernetes cluster must satisfy the following:

    • One or more machines running CentOS 7.x x86_64 (CentOS 8.x also verified to work)

    • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more

    • Full network connectivity between all machines in the cluster (private network; configure security groups or the firewall accordingly)

    • Internet access, needed to pull images

    • Swap disabled

    • The following steps must be run on every node:

      Stop the firewall:
      $ systemctl stop firewalld
      $ systemctl disable firewalld
      
      Disable SELinux:
      $ sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent (takes effect after reboot)
      $ setenforce 0  # temporary
      
      Disable swap:
      $ swapoff -a  # temporary
      $ vim /etc/fstab  # permanent: comment out the swap line
      
      Set the hostname:
      $ hostnamectl set-hostname <hostname>
      
      Add hosts entries on the master (private IPs):
      $ cat >> /etc/hosts << EOF
      192.168.31.61 k8s-master
      192.168.31.62 k8s-node1
      192.168.31.63 k8s-node2
      EOF
      
      Pass bridged IPv4 traffic to iptables chains:
      $ cat > /etc/sysctl.d/k8s.conf << EOF
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      EOF
      $ sysctl --system  # apply
      
      Synchronize clocks (needed on all nodes even in a single time zone: TLS certificates and bootstrap tokens are time-sensitive, so the nodes' clocks must agree):
      $ yum install ntpdate -y
      $ ntpdate time.windows.com
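
    The permanent half of the swap step above is a manual edit to /etc/fstab. A non-interactive sketch of that edit, demonstrated here on a throwaway copy rather than the real file:

```shell
# Demonstrate the /etc/fstab swap edit on a throwaway file (on a real node,
# run the sed against /etc/fstab itself, then `swapoff -a` or reboot).
fstab=$(mktemp)
cat > "$fstab" << 'EOF'
/dev/mapper/cl-root /     xfs  defaults 0 0
/dev/mapper/cl-swap swap  swap defaults 0 0
EOF
# Comment out every uncommented entry whose filesystem type is "swap"
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' "$fstab"
grep swap "$fstab"
```

The device paths above are stand-ins; the sed pattern is what matters, and it leaves non-swap entries untouched.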
      
  2. Environment

    Role        IP
    k8s-master  192.168.31.61
    k8s-node1   192.168.31.62
    k8s-node2   192.168.31.63
  3. Install Docker on all nodes. Kubernetes v1.21 still ships the dockershim, so Docker serves as the container runtime (CRI) here; install it first.

    $ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    $ yum -y install docker-ce
    $ systemctl enable docker && systemctl start docker
    
    # Configure a registry mirror to speed up image pulls
    # registry-mirrors: the mirror endpoint
    # exec-opts: switch Docker's cgroup driver to systemd, the driver Kubernetes recommends for the kubelet
    cat > /etc/docker/daemon.json << EOF
    {
      "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    
    $ systemctl restart docker
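
    dockerd refuses to start if daemon.json is malformed, so it is worth validating the file before the restart. A quick sanity check, assuming python3 is available (jq may not be installed on a fresh CentOS box); shown here on a temp copy rather than /etc/docker/daemon.json:

```shell
# Validate the daemon.json content before restarting docker;
# a stray comma or quote here would leave dockerd unable to start.
conf=$(mktemp)
cat > "$conf" << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool "$conf" > /dev/null && echo "daemon.json OK"
```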
    
    
  4. Install kubeadm, kubelet, and kubectl on all nodes

    # Add the Aliyun yum repository
    $ cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    # Install the k8s packages (pin the version; it must match the version used in the steps below)
    $ yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
    $ systemctl enable kubelet
    
  5. Run kubeadm init on the master node

      $ kubeadm init \
          --apiserver-advertise-address=192.168.31.61 \
          --image-repository registry.aliyuncs.com/google_containers \
          --kubernetes-version v1.21.0 \
          --service-cidr=10.96.0.0/12 \
          --pod-network-cidr=10.244.0.0/16 \
          --ignore-preflight-errors=all
          
    # --apiserver-advertise-address  address the API server advertises to the cluster (the master's private IP)
    # --image-repository  the default registry k8s.gcr.io is unreachable from mainland China, so use the Aliyun mirror
    # --kubernetes-version  the k8s version, matching the packages installed above
    # --service-cidr  virtual network for Services, the cluster-internal entry point to Pods
    # --pod-network-cidr  Pod network; must match the CNI manifest deployed below
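
    The service and pod CIDRs above must not overlap with each other or with the node network, or routing breaks in confusing ways. A quick overlap check for the three ranges used in this guide, assuming python3 is on the machine:

```shell
# Check that --service-cidr, --pod-network-cidr, and the node LAN are disjoint
# (10.96.0.0/12 spans 10.96.0.0-10.111.255.255, well clear of 10.244.0.0/16).
python3 - << 'EOF'
import ipaddress

nets = [
    ("service", ipaddress.ip_network("10.96.0.0/12")),    # --service-cidr
    ("pod",     ipaddress.ip_network("10.244.0.0/16")),   # --pod-network-cidr
    ("node",    ipaddress.ip_network("192.168.31.0/24")), # node LAN in this guide
]
for i in range(len(nets)):
    for j in range(i + 1, len(nets)):
        assert not nets[i][1].overlaps(nets[j][1]), \
            f"{nets[i][0]} overlaps {nets[j][0]}"
print("no CIDR overlap")
EOF
```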
    

    Or bootstrap with a configuration file (the version must stay v1.21.0 to match the installed packages):

    $ vi kubeadm.conf
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.21.0
    imageRepository: registry.aliyuncs.com/google_containers
    networking:
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    
    $ kubeadm init --config kubeadm.conf --ignore-preflight-errors=all
    
  6. Join the worker nodes to the cluster

    # Use the kubeadm join command printed by kubeadm init:
    $ kubeadm join 192.168.31.61:6443 --token esce21.q6hetwm8si29qxwn \
        --discovery-token-ca-cert-hash sha256:00603a05805807501d7181c3d60b478788408cfe6cedefedb1f97569708be9c5
    

    The token is valid for 24 hours; once it expires, create a new one:

    $ kubeadm token create --print-join-command  # prints a ready-to-run join command
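
    The --discovery-token-ca-cert-hash value is not secret state: it is the SHA-256 digest of the cluster CA's public key in DER form, and on the master it can be recomputed from /etc/kubernetes/pki/ca.crt at any time. The sketch below shows the derivation (the openssl pipeline kubeadm's docs describe) on a throwaway self-signed cert standing in for the real CA:

```shell
# Derive a discovery-token-ca-cert-hash: sha256 over the DER-encoded
# public key of the CA certificate. A throwaway cert stands in for
# /etc/kubernetes/pki/ca.crt here; on the master, point -in at the real file.
ca_key=$(mktemp); ca_crt=$(mktemp)
openssl genrsa -out "$ca_key" 2048 2>/dev/null
openssl req -x509 -new -key "$ca_key" -subj "/CN=kubernetes" -days 1 -out "$ca_crt" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$ca_crt" -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```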
    

    Verify:

    $ kubectl get nodes
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    

    Copy the kubeconfig that kubectl uses to authenticate to its default path:

    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
    $ kubectl get nodes
    NAME         STATUS     ROLES                  AGE     VERSION
    k8s-master   NotReady   control-plane,master   2m25s   v1.21.0
    k8s-node1    NotReady   <none>                 112s    v1.21.0
    k8s-node2    NotReady   <none>                 112s    v1.21.0
    
  7. Deploy the container network (CNI) on the master node; only Calico is covered here

    Calico is a pure layer-3 data center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.

    On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) that handles data forwarding, and each vRouter advertises the routes of the workloads running on it to the rest of the Calico network over BGP.

    Calico also implements Kubernetes network policy, providing ACL functionality.

    $ wget https://docs.projectcalico.org/manifests/calico.yaml
    
    # In the yaml, change apiVersion: policy/v1beta1 to apiVersion: policy/v1
    # If the download is slow, deploy from a previously saved calico.yaml instead
    
    $ kubectl apply -f calico.yaml
    $ kubectl get nodes
    
    NAME         STATUS   ROLES                  AGE     VERSION
    k8s-master   Ready    control-plane,master   5m52s   v1.21.0
    k8s-node1    Ready    <none>                 5m19s   v1.21.0
    k8s-node2    Ready    <none>                 5m19s   v1.21.0
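
    The manual apiVersion edit noted above can also be scripted with sed instead of opening the file by hand. A sketch, demonstrated on a one-line stand-in rather than the full calico.yaml:

```shell
# Script the policy/v1beta1 -> policy/v1 apiVersion fix; on the master,
# run the same sed against the downloaded calico.yaml.
y=$(mktemp)
echo "apiVersion: policy/v1beta1" > "$y"
sed -i 's#policy/v1beta1#policy/v1#' "$y"
cat "$y"
```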
    
    $ kubectl get pods -n kube-system
    
    NAME                                       READY   STATUS         RESTARTS   AGE
    calico-kube-controllers-74b8fbdb46-rfdx5   1/1     Running        0          106s
    calico-node-hpcf6                          1/1     Running        0          107s
    calico-node-l4c2s                          1/1     Running        0          107s
    calico-node-r2jlk                          1/1     Running        0          107s
    coredns-545d6fc579-27tml                   0/1     ErrImagePull   0          5m43s
    coredns-545d6fc579-r96ck                   0/1     ErrImagePull   0          5m43s
    etcd-k8s-master                            1/1     Running        0          5m52s
    kube-apiserver-k8s-master                  1/1     Running        0          5m52s
    kube-controller-manager-k8s-master         1/1     Running        0          5m52s
    kube-proxy-8g72k                           1/1     Running        0          5m30s
    kube-proxy-frm4q                           1/1     Running        0          5m30s
    kube-proxy-pbm2b                           1/1     Running        0          5m44s
    kube-scheduler-k8s-master                  1/1     Running        0          5m52s
    
    
    # coredns failed to pull here and must be fixed manually (on the master and every node)
    # Inspect which image failed to pull
    $ kubectl get pods coredns-545d6fc579-27tml -n kube-system -o yaml | grep image:
      image: registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
    - image: registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
    # coredns:v1.8.0 cannot be pulled from the mirror; pull from Docker Hub instead. The official repo has no v1.8.0 tag, so pull 1.8.0
    $ docker pull coredns/coredns:1.8.0
    # Retag the pulled image with the name the kubelet expects (the image ID below is from the machine at hand; tagging by name, coredns/coredns:1.8.0, works just as well)
    $ sudo docker tag 296a6d5035e2 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
    
    $ kubectl get pods -n kube-system
    
    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-74b8fbdb46-rfdx5   1/1     Running   0          12m
    calico-node-hpcf6                          1/1     Running   0          12m
    calico-node-l4c2s                          1/1     Running   0          12m
    calico-node-r2jlk                          1/1     Running   0          12m
    coredns-545d6fc579-27tml                   1/1     Running   0          16m
    coredns-545d6fc579-r96ck                   1/1     Running   0          16m
    etcd-k8s-master                            1/1     Running   0          16m
    kube-apiserver-k8s-master                  1/1     Running   0          16m
    kube-controller-manager-k8s-master         1/1     Running   0          16m
    kube-proxy-8g72k                           1/1     Running   0          16m
    kube-proxy-frm4q                           1/1     Running   0          16m
    kube-proxy-pbm2b                           1/1     Running   0          16m
    kube-scheduler-k8s-master                  1/1     Running   0          16m
    
    # For kubectl availability everywhere, kubectl was installed on the master and all nodes during setup.
    # Running kubectl on a node at this point still fails with:
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    # Fix by pointing KUBECONFIG at the admin kubeconfig:
    $ echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
    $ source ~/.bash_profile
    Config not found: /etc/kubernetes/admin.conf
    # Copy /etc/kubernetes/admin.conf from the master to the same path on each node
    $ kubectl get node
    NAME         STATUS   ROLES                  AGE   VERSION
    k8s-master   Ready    control-plane,master   46m   v1.21.0
    k8s-node1    Ready    <none>                 45m   v1.21.0
    k8s-node2    Ready    <none>                 45m   v1.21.0
    

    This completes the basic k8s deployment. If anything here is incorrect, corrections are welcome.
