Deploying a Kubernetes Cluster Automatically with Ansible

Machine Environment

1.1. Machine Details

| IP | hostname | application | CPU | Memory |
| --- | --- | --- | --- | --- |
| 192.168.204.129 | k8s-master01 | etcd,kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,containerd | 2C | 4G |
| 192.168.204.130 | k8s-worker01 | etcd,kubelet,kube-proxy,containerd | 2C | 4G |
| 192.168.204.131 | k8s-worker02 | etcd,kubelet,kube-proxy,containerd | 2C | 4G |

1.2. IP Address Planning

The cluster uses Calico as the CNI network plugin. In addition to the node network above, two cluster-internal address ranges are allocated:

| Network | CIDR |
| --- | --- |
| Pod network | 172.16.0.0/16 |
| Service network | 10.96.0.0/16 |

The Kubernetes version installed is 1.28.5, the Calico version is 3.26.4, and the container runtime is containerd.

To install a different Kubernetes version, adjust the scripts below:

  • change the version in the Kubernetes apt source
  • change the version variables defined in the master and worker install plays

Likewise, to use a different CNI plugin or a different Calico version, modify the network-plugin portion of the script.
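For reference, the version pins mentioned above live in the playbook's play-level `vars` blocks; these are the values to change for a different release:

```yaml
# Play-level variables from install-kubernetes.yml; edit these for a
# different release. The Kubernetes apt repository URL (pinned to
# .../stable/v1.28/deb/) must be updated to match as well.
vars:
  kubernetes_version: "1.28.5"
  calico_version: "v3.26.4"
```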

Install and Configure Ansible

2.1. Deploy the ansible Software

  • Install the ansible package
apt update && apt install ansible -y
  • Create the Ansible configuration files
mkdir /etc/ansible/ && touch /etc/ansible/hosts
  • Populate the /etc/ansible/hosts file

[master]
192.168.204.129

[worker]
192.168.204.130
192.168.204.131
  • Generate a key pair for passwordless login; do not enter a passphrase during this process
ssh-keygen -t rsa
  • Distribute the public key to all nodes
ssh-copy-id [email protected]
ssh-copy-id [email protected]
ssh-copy-id [email protected]
  • Configure /etc/hosts
cat >> /etc/hosts << EOF
192.168.204.129 k8s-master01
192.168.204.130 k8s-worker01
192.168.204.131 k8s-worker02
EOF
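As an aside, the ssh-keygen step above can be made non-interactive, which is handy when scripting the control-node setup. This sketch writes to a scratch directory (/tmp/demo_ssh, an illustrative path) rather than the real ~/.ssh:

```shell
# Non-interactive key generation: -N "" sets an empty passphrase and -q
# suppresses output, so no prompts appear. /tmp/demo_ssh is a scratch
# path for illustration; on the real control node use ~/.ssh/id_rsa.
rm -rf /tmp/demo_ssh && mkdir -p /tmp/demo_ssh
ssh-keygen -t rsa -b 4096 -N "" -f /tmp/demo_ssh/id_rsa -q
ls /tmp/demo_ssh
```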

2.2. Test Ansible Connectivity

  • Write a test playbook
cat > test_nodes.yml << EOF
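The heredoc body was lost in this copy; a minimal connectivity playbook, assuming the original used Ansible's built-in ping module (an SSH round-trip, not an ICMP ping), would look like:

```yaml
# test_nodes.yml - verify Ansible can reach every host in the inventory.
---
- name: Test connectivity to all nodes
  hosts: all
  tasks:
    - name: Ping each host over SSH
      ping:
```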
  • Run the test

ansible-playbook test_nodes.yml

Kubernetes Deployment Script

3.1. The Kubernetes Playbook

  • The contents of install-kubernetes.yml are as follows
---
- name: Performance Basic Config
  hosts: master:worker
  become: yes
  tasks:
    - name: Check if fstab contains swap
      shell: grep -q "swap" /etc/fstab
      register: fstab_contains_swap
      failed_when: fstab_contains_swap.rc not in [0, 1]

    - name: Temp Disable swap
      command: swapoff -a
      when: fstab_contains_swap.rc == 0

    - name: Permanent Disable swap
      shell: sed -i 's/.*swap.*/#&/g' /etc/fstab
      when: fstab_contains_swap.rc == 0

    - name: Disable Swap unit-files
      shell: |
        swap_units=$(systemctl list-unit-files | grep swap | awk '{print $1}')
        for unit in $swap_units; do
          systemctl mask $unit
        done

    - name: Stop UFW service
      service:
        name: ufw
        state: stopped

    - name: Disable UFW at boot
      service:
        name: ufw
        enabled: no

    - name: Set timezone
      shell: timedatectl set-timezone Asia/Shanghai

    - name: Set timezone permanently
      shell: |
        cat >> /etc/profile << EOF
        TZ='Asia/Shanghai'; export TZ
        EOF

    - name: Create .hushlogin file in $HOME
      file:
        path: "{{ ansible_env.HOME }}/.hushlogin"
        state: touch

    - name: Install required packages
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg
          - lsb-release

    - name: Add Aliyun Docker GPG key
      shell: curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

    - name: Add Aliyun Docker repository
      shell: echo "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker-ce.list

    - name: Add Aliyun Kubernetes GPG key
      shell: mkdir -p /etc/apt/keyrings && curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

    - name: Add Aliyun Kubernetes repository
      shell: echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list

    - name: Set apt sources to use Aliyun mirrors
      shell: sed -i 's#cn.archive.ubuntu.com#mirrors.aliyun.com#g' /etc/apt/sources.list

    - name: Update apt cache
      apt:
        update_cache: yes

    - name: Load br_netfilter on start
      shell: echo "modprobe br_netfilter" >> /etc/profile

    - name: Load br_netfilter
      shell: modprobe br_netfilter

    - name: Update sysctl settings
      sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        reload: yes
      with_items:
        - { name: "net.bridge.bridge-nf-call-iptables", value: "1" }
        - { name: "net.bridge.bridge-nf-call-ip6tables", value: "1" }
        - { name: "net.ipv4.ip_forward", value: "1" }

    - name: Install IPVS
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - ipset
          - ipvsadm

    - name: Create ipvs modules
      file:
        name: /etc/modules-load.d/ipvs.modules
        mode: 0755
        state: touch

    - name: Write ipvs.modules file
      copy:
        dest: /etc/modules-load.d/ipvs.modules
        mode: 0755
        content: |
          #!/bin/bash
          modprobe -- ip_vs
          modprobe -- ip_vs_rr
          modprobe -- ip_vs_wrr
          modprobe -- ip_vs_sh
          modprobe -- nf_conntrack
          modprobe -- overlay
          modprobe -- br_netfilter

    - name: Execute ipvs.modules script
      shell: sh /etc/modules-load.d/ipvs.modules

    - name: Install Containerd
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - containerd.io

    - name: Generate default containerd file
      shell: containerd config default > /etc/containerd/config.toml

    - name: Config sandbox image
      shell: sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#g' /etc/containerd/config.toml

    - name: Modify Systemd Cgroup
      shell: sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml

    - name: Restart Containerd
      shell: systemctl restart containerd

    - name: Systemctl enable containerd
      shell: systemctl enable containerd

- name: Install Kubernetes Master
  hosts: master
  become: yes
  vars:
    kubernetes_version: "1.28.5"
    pod_network_cidr: "172.16.0.0/16"
    service_cidr: "10.96.0.0/16"
    image_repository: "registry.aliyuncs.com/google_containers"
    calico_version: "v3.26.4"
  tasks:
    - name: Install Master kubernetes packages
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - kubelet={{ kubernetes_version }}-1.1
          - kubeadm={{ kubernetes_version }}-1.1
          - kubectl={{ kubernetes_version }}-1.1

    - name: Initialize Kubernetes Master
      command: kubeadm init --kubernetes-version={{ kubernetes_version }} --pod-network-cidr={{ pod_network_cidr }} --service-cidr={{ service_cidr }} --image-repository={{ image_repository }}
      register: kubeadm_output
      changed_when: "'kubeadm join' in kubeadm_output.stdout"

    - name: Save join command
      copy:
        content: |
          {{ kubeadm_output.stdout_lines[-2] }}
          {{ kubeadm_output.stdout_lines[-1] }}
        dest: /root/kubeadm_join_master.sh
      when: kubeadm_output.changed

    - name: Strip quotes from join master script
      shell: sed -i 's/"//g' /root/kubeadm_join_master.sh

    - name: copy kubernetes config
      shell: mkdir -p {{ ansible_env.HOME }}/.kube && cp -i /etc/kubernetes/admin.conf {{ ansible_env.HOME }}/.kube/config

    - name: Enable kubelet
      command: systemctl enable kubelet

    - name: Create calico directory
      file:
        path: "{{ ansible_env.HOME }}/calico/{{ calico_version }}"
        state: directory

    - name: download calico tigera-operator.yaml
      command: wget https://ghproxy.net/https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/tigera-operator.yaml -O {{ ansible_env.HOME }}/calico/{{ calico_version }}/tigera-operator.yaml

    - name: download calico custom-resources.yaml
      command: wget https://ghproxy.net/https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/custom-resources.yaml -O {{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml

    - name: set calico network block size
      replace:
        path: "{{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml"
        regexp: "blockSize: 26"
        replace: "blockSize: 24"

    - name: set calico ip pools
      replace:
        path: "{{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml"
        regexp: "cidr: 192.168.0.0/16"
        replace: "cidr: {{ pod_network_cidr }}"

    - name: apply calico tigera-operator.yaml
      command: kubectl create -f {{ ansible_env.HOME }}/calico/{{ calico_version }}/tigera-operator.yaml

    - name: apply calico custom-resources.yaml
      command: kubectl create -f {{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml

    - name: set crictl config
      command: crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

- name: Install Kubernetes worker
  hosts: worker
  become: yes
  vars:
    kubernetes_version: "1.28.5"
  tasks:
    - name: Install worker kubernetes packages
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - kubelet={{ kubernetes_version }}-1.1
          - kubeadm={{ kubernetes_version }}-1.1

    - name: copy kubeadm join script to workers
      copy:
        src: /root/kubeadm_join_master.sh
        dest: /root/kubeadm_join_master.sh
        mode: 0755

    - name: Join worker to the cluster
      command: sh /root/kubeadm_join_master.sh

    - name: set crictl config
      command: crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

    - name: Enable kubelet
      command: systemctl enable kubelet

Run the Kubernetes Playbook

ansible-playbook install-kubernetes.yml
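Before touching real nodes, the playbook's containerd sed edits can be sanity-checked in isolation. The snippet below runs them against a scratch file (/tmp/config_sample.toml, not the real /etc/containerd/config.toml):

```shell
# Reproduce the two sed substitutions from the playbook on a sample
# containing the stock containerd values, then show the result.
cat > /tmp/config_sample.toml << 'EOF'
    sandbox_image = "registry.k8s.io/pause:3.6"
            SystemdCgroup = false
EOF
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#g' /tmp/config_sample.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /tmp/config_sample.toml
cat /tmp/config_sample.toml
```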

  • Cluster node status

kubectl get node -o wide

  • Cluster pod status

kubectl get pod -A
