Quickly Building a K8s (1.13.0) Cluster with VirtualBox + Vagrant + Ansible

Table of contents

  • Quickly Building a K8s (1.13.0) Cluster with VirtualBox + Vagrant + Ansible
    • Goals
    • Preparation
    • Creating the VM group
      • VM specifications
      • Creating the VMs
      • Passwordless SSH from the host into the VMs
    • Building the K8s cluster
      • Registering the VMs
      • Creating the private Docker registry
      • Preparing the cluster nodes
        • Master & worker nodes
        • Master node
        • Worker nodes
      • Just GO!!!

Quickly Building a K8s (1.13.0) Cluster with VirtualBox + Vagrant + Ansible

Goals

  • Create the VM group with one command
    • 4 VMs in total: three for k8s, one for the private Docker registry
  • Build the K8s cluster with one command
    • 1 master
    • 2 workers

Cluster plan:

Distribution: CentOS 7
Container runtime: Docker-18.06.1-ce
Kernel: 4.20.2-1.el7.elrepo.x86_64
Version: Kubernetes 1.13.0
Network: Calico
kube-proxy mode: IPVS

Preparation

  • Hardware
    • One internet-connected host (the beefier the better) to run the VM group. This walkthrough uses a macOS host with 6 CPUs/16 GB RAM, which handles the load comfortably.
  • Software
    • VirtualBox: 6.0.0-127566-OSX
    • Vagrant: 2.2.3_x86_64
  • Dependencies (download these up front; they are needed later)
    • The docker-ce rpm package (skip this if your network can reach the repos directly).
      • docker-ce-18.06.1.ce-3.el7.x86_64.rpm
    • A CentOS 7 Vagrant box. Vagrant creates VMs from a box, much like Docker creates containers from an image. Use the same box for every VM, otherwise all sorts of problems appear!
      • CentOS-7-x86_64-Vagrant-1811_02.VirtualBox.box
    • The k8s rpm packages and Docker images. Most of them are hosted outside the firewall, so download them in advance if you cannot get around it. Thanks to lentil1016 for the tutorial and resources!
      • k8s-v1.13.0-rpms.tgz password: 4x77
      • k8s-repo-1.13.0 password: aqq6

Creating the VM group

The VMs are created by VirtualBox and Vagrant working together. Installing VirtualBox and Vagrant is straightforward (grab them from the official sites), so let's go straight to creating VMs with a Vagrantfile.

VM specifications

This walkthrough uses 4 VMs, configured as follows:

hostname             ip              cpu  memory (MB)
k8s-master-01        10.110.111.111  2    2048
k8s-worker-01        10.110.111.112  2    2048
k8s-worker-02        10.110.111.113  2    2048
k8s-docker-register  10.110.111.120  2    1024

Creating the VMs

All the VMs are configured in a single Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.define "k8s-master-01" do |master_01|
  
  end
  config.vm.define "k8s-worker-01" do |worker_01|
  
  end
  config.vm.define "k8s-worker-02" do |worker_02|
  
  end
  config.vm.define "k8s-docker-register" do |docker|
  
  end
end

Vagrant.configure("2") is boilerplate and should not be changed. The do ... end pair forms a block; config inside |config| can be thought of as the instance configured by Vagrant.configure("2"). config.vm.define declares a new VM such as "k8s-master-01", again as a do ... end block. All the VMs are configured alike, so "k8s-master-01" serves as the example below.

First, specify the VM's operating system by adding the CentOS-7-x86_64-Vagrant-1811_02.VirtualBox.box downloaded earlier to the vagrant box list:
vagrant box add path/to/your/CentOS-7-x86_64-Vagrant-1811_02.VirtualBox.box --name centos/7

The Vagrantfile can then create VMs from this box by naming it centos/7:

Vagrant.configure("2") do |config|
  config.vm.define "k8s-master-01" do |master_01|
    master_01.vm.box = "centos/7"
  end
end

Next, configure the hostname, IP, CPU and memory. The IP is assigned on a private network and must be on the same subnet as your host, otherwise the host cannot SSH into the VM directly (vagrant ssh still works):

Vagrant.configure("2") do |config|
  config.vm.define "k8s-master-01" do |master_01|
    master_01.vm.box = "centos/7"
    master_01.vm.hostname = "k8s-master-01"
    master_01.vm.network "private_network", ip: "10.110.111.111"
    master_01.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end
end

Different releases of the CentOS 7 Vagrant box produce VMs with slightly different base configurations. The box used here disables root login over SSH by default, so sshd_config needs a few changes:

$change_sshd_config = <<-SCRIPT
echo "change sshd_config to allow root login and public key authentication, then reload sshd..."
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/#PubkeyAuthentication yes/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl reload sshd
SCRIPT

Vagrant.configure("2") do |config|

  config.vm.define "k8s-master-01" do |master_01|
    master_01.vm.box = "centos/7"
    master_01.vm.hostname = "k8s-master-01"
    master_01.vm.network "private_network", ip: "10.110.111.111"
    master_01.vm.provision "shell", inline: $change_sshd_config # run the script that adjusts sshd_config
    master_01.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end
end

The complete Vagrantfile:

$change_sshd_config = <<-SCRIPT
echo "change sshd_config to allow root login and public key authentication, then reload sshd..."
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/#PubkeyAuthentication yes/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl reload sshd
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: "echo Hello"
  config.vm.network "public_network", bridge: "en0: Wi-Fi (Wireless)"

  config.vm.define "k8s-master-01" do |master_01|
    master_01.vm.box = "centos/7"
    master_01.vm.hostname = "k8s-master-01"
    master_01.vm.network "private_network", ip: "10.110.111.111"
    master_01.vm.provision "shell", inline: $change_sshd_config
    master_01.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "k8s-worker-01" do |worker_01|
    worker_01.vm.box = "centos/7"
    worker_01.vm.hostname = "k8s-worker-01"
    worker_01.vm.network "private_network", ip: "10.110.111.112"
    worker_01.vm.provision "shell", inline: $change_sshd_config
    worker_01.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "k8s-worker-02" do |worker_02|
    worker_02.vm.box = "centos/7"
    worker_02.vm.hostname = "k8s-worker-02"
    worker_02.vm.network "private_network", ip: "10.110.111.113"
    worker_02.vm.provision "shell", inline: $change_sshd_config
    worker_02.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "k8s-docker-register" do |docker|
    docker.vm.box = "centos/7"
    docker.vm.hostname = "k8s-docker-register"
    docker.vm.network "private_network", ip: "10.110.111.120"
    docker.vm.provision "shell", inline: $change_sshd_config
    docker.vm.provider "virtualbox" do |v|
      v.memory = 1024
      v.cpus = 2
    end
  end
end

The line config.vm.network "public_network", bridge: "en0: Wi-Fi (Wireless)" adds a bridged public network so the VMs can be reached from outside; add it or not as you see fit.
With that, the Vagrantfile for the VM group is done. Put it in the working directory ~/vagrant/vm/k8s and run vagrant up to create the VMs:

macbook-pro:k8s jason$ vagrant up
Bringing machine 'k8s-master-01' up with 'virtualbox' provider...
Bringing machine 'k8s-worker-01' up with 'virtualbox' provider...
Bringing machine 'k8s-worker-02' up with 'virtualbox' provider...
Bringing machine 'k8s-docker-register' up with 'virtualbox' provider...
==> k8s-master-01: Importing base box 'centos/7'...
==> k8s-master-01: Matching MAC address for NAT networking...
==> k8s-master-01: Setting the name of the VM: k8s_k8s-master-01_1547193895558_14331
==> k8s-master-01: Fixed port collision for 22 => 2222. Now on port 2200.
==> k8s-master-01: Clearing any previously set network interfaces...
==> k8s-master-01: Preparing network interfaces based on configuration...
    k8s-master-01: Adapter 1: nat
    k8s-master-01: Adapter 2: bridged
    k8s-master-01: Adapter 3: hostonly
==> k8s-master-01: Forwarding ports...
    k8s-master-01: 22 (guest) => 2200 (host) (adapter 1)
==> k8s-master-01: Running 'pre-boot' VM customizations...
==> k8s-master-01: Booting VM...
==> k8s-master-01: Waiting for machine to boot. This may take a few minutes...
    k8s-master-01: SSH address: 127.0.0.1:2200
    k8s-master-01: SSH username: vagrant
    k8s-master-01: SSH auth method: private key
    k8s-master-01: 
    k8s-master-01: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-master-01: this with a newly generated keypair for better security.
    k8s-master-01: 
    k8s-master-01: Inserting generated public key within guest...
    k8s-master-01: Removing insecure key from the guest if it's present...
    k8s-master-01: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-master-01: Machine booted and ready!
==> k8s-master-01: Checking for guest additions in VM...
    k8s-master-01: No guest additions were detected on the base box for this VM! Guest
    k8s-master-01: additions are required for forwarded ports, shared folders, host only
    k8s-master-01: networking, and more. If SSH fails on this machine, please install
    k8s-master-01: the guest additions and repackage the box to continue.
    k8s-master-01: 
    k8s-master-01: This is not an error message; everything may continue to work properly,
    k8s-master-01: in which case you may ignore this message.
==> k8s-master-01: Setting hostname...
==> k8s-master-01: Configuring and enabling network interfaces...
==> k8s-master-01: Rsyncing folder: /Users/jason/Workspace/Vagrant/vm/k8s/ => /vagrant
==> k8s-master-01: Running provisioner: shell...
    k8s-master-01: Running: inline script
    k8s-master-01: Hello
==> k8s-master-01: Running provisioner: shell...
    k8s-master-01: Running: inline script
    k8s-master-01: change sshd_config to allow public key authentication...
... (logs for the other machines omitted)
macbook-pro:k8s jason$ vagrant status
Current machine states:

k8s-master-01             running (virtualbox)
k8s-worker-01             running (virtualbox)
k8s-worker-02             running (virtualbox)
k8s-docker-register      running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Passwordless SSH from the host into the VMs

First, map the custom VM IPs to names by adding the following to the host's /etc/hosts:

10.110.111.111  k8s-master-01
10.110.111.112  k8s-worker-01
10.110.111.113  k8s-worker-02
10.110.111.120  k8s-docker-register
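The four entries above can also be appended with a small idempotent loop. This is a sketch: TARGET defaults to a local demo file here; on the real workstation point it at /etc/hosts (with sudo).

```shell
# Append the VM group's hosts entries without ever duplicating a line.
# TARGET defaults to a local demo file; use TARGET=/etc/hosts on the real host.
TARGET="${TARGET:-hosts.entries}"
for entry in \
    "10.110.111.111  k8s-master-01" \
    "10.110.111.112  k8s-worker-01" \
    "10.110.111.113  k8s-worker-02" \
    "10.110.111.120  k8s-docker-register"
do
    # grep -qF makes re-runs a no-op: an existing line is never appended again
    grep -qF "$entry" "$TARGET" 2>/dev/null || echo "$entry" >> "$TARGET"
done
cat "$TARGET"
```

Running it twice leaves the file unchanged, so it is safe to re-run after adding a VM.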

Generate an SSH key pair on the host with ssh-keygen, then run ssh-copy-id root@k8s-master-01, answer yes at the prompt, and enter the default Vagrant root password vagrant to install the host's public key on the master node. Once that succeeds, ssh root@k8s-master-01 logs straight in; repeat the same steps for the other VMs.

macbook-pro:k8s jason$ ssh-copy-id root@k8s-master-01 
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/jason/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master-01's password: 

Number of key(s) added:        1

Now try logging into the machine, with:   "ssh 'root@k8s-master-01'"
and check to make sure that only the key(s) you wanted were added.
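The same ssh-copy-id step applies to every VM, so a loop covers all four. A sketch: DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0 to actually run them, entering the password "vagrant" once per machine.

```shell
# All four VMs take the same public key; loop over them once.
# DRY_RUN=1 only prints the commands instead of running them.
vms="k8s-master-01 k8s-worker-01 k8s-worker-02 k8s-docker-register"
DRY_RUN="${DRY_RUN:-1}"
for host in $vms; do
    if [ "$DRY_RUN" = "1" ]; then
        echo "ssh-copy-id root@${host}"
    else
        ssh-copy-id "root@${host}"   # prompts for the default root password "vagrant"
    fi
done
```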

The VM group is now complete. Take a snapshot of each machine with vagrant snapshot save VM_NAME SNAPSHOT_NAME so you can roll back if the k8s build fails halfway:

... (master and worker snapshots omitted; k8s-docker-register shown as the example)
New-Image-3:k8s jason$ vagrant snapshot save k8s-docker-register dr-1
==> k8s-docker-register: Snapshotting the machine as 'dr-1'...
==> k8s-docker-register: Snapshot saved! You can restore the snapshot at any time by
==> k8s-docker-register: using `vagrant snapshot restore`. You can delete it using
==> k8s-docker-register: `vagrant snapshot delete`.
New-Image-3:k8s jason$ vagrant snapshot list
m01-1   # snapshot of k8s-master-01
w01-1   # snapshot of k8s-worker-01
w02-1   # snapshot of k8s-worker-02
dr-1    # snapshot of k8s-docker-register
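If a later step leaves a machine in a bad state, the matching snapshot can be rolled back. The helper below pairs each VM with its snapshot and writes the restore commands to a file for review rather than executing them (a sketch; run the file with sh restore.cmds when a rollback is really wanted):

```shell
# Pair each VM with its snapshot and emit the vagrant restore commands.
: > restore.cmds
for pair in \
    "k8s-master-01 m01-1" \
    "k8s-worker-01 w01-1" \
    "k8s-worker-02 w02-1" \
    "k8s-docker-register dr-1"
do
    set -- $pair   # $1 = VM name, $2 = snapshot name
    echo "vagrant snapshot restore $1 $2" >> restore.cmds
done
cat restore.cmds
```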

Building the K8s cluster

Registering the VMs

This simply records the VMs' IPs in a custom hosts file so that Ansible can address them:

vi ~/Vagrant/vm/k8s/hosts

[masters]
master-01 ansible_host=10.110.111.111 ansible_user=root

[workers]
worker-01 ansible_host=10.110.111.112 ansible_user=root
worker-02 ansible_host=10.110.111.113 ansible_user=root

[docker-register]
docker-register ansible_host=10.110.111.120 ansible_user=root
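Before running any playbooks, the inventory can be sanity-checked without Ansible at all. The awk one-liner below prints each host next to its group (the inventory is recreated in a scratch file so the sketch is self-contained):

```shell
# Recreate the inventory in a scratch file, then list group/host pairs with awk.
cat > hosts.check <<'EOF'
[masters]
master-01 ansible_host=10.110.111.111 ansible_user=root

[workers]
worker-01 ansible_host=10.110.111.112 ansible_user=root
worker-02 ansible_host=10.110.111.113 ansible_user=root

[docker-register]
docker-register ansible_host=10.110.111.120 ansible_user=root
EOF
# Remember the current [group] header and print it next to every host line.
awk '/^\[/ { grp = $0; next } NF { print grp, $1 }' hosts.check > groups.out
cat groups.out
```

With SSH access in place, ansible -i hosts all -m ping should then report pong from all four machines.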

Creating the private Docker registry

You could SSH into the VM and set up the registry by hand. Here, in the spirit of "automation", it is created with Ansible alongside the k8s build itself (forced automation, admittedly). In real projects, don't cram unrelated operations into one Ansible playbook; keeping them separated makes playbooks easier to read, manage and maintain.

- hosts: docker-register # target the docker-register VM
  become: yes
  tasks: # tasks executed on the target host
    - name: Copy docker-ce-18.06
      copy:
        src: path/to/your/docker-ce-18.06.1.ce-3.el7.x86_64.rpm
        dest: /root
        mode: 0744

    - name: Install docker-ce-18.06 from the local rpm
      yum:
        name: /root/docker-ce-18.06.1.ce-3.el7.x86_64.rpm
        state: present

    - name: Create /etc/docker/daemon.json with the Aliyun registry mirror
      lineinfile:
        path: /etc/docker/daemon.json
        line: '{"registry-mirrors": ["https://5cj8vui1.mirror.aliyuncs.com"]}'
        create: yes

    - name: Reload the daemon, start docker, enable it on boot
      systemd:
        name: docker
        daemon_reload: yes
        state: started
        enabled: yes

    - name: Copy k8s-repo-1.13.0 to the VM
      copy:
        src: /Users/jason/Downloads/software/k8s-repo-1.13.0
        dest: /root
        mode: 0744

    - name: Load the k8s-repo-1.13.0 registry image
      command: docker load -i /root/k8s-repo-1.13.0

    - name: Run k8s-repo-1.13.0, mapping it to port 80 on the VM
      command: docker run --restart=always -d -p 80:5000 --name repo harbor.io:1180/system/k8s-repo:v1.13.0
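Once the playbook has run, the registry can be smoke-tested from the workstation through the Docker Registry HTTP API v2. The command is only composed and printed here so the sketch has no network side effects; REGISTRY is the k8s-docker-register VM's IP:

```shell
# Compose the catalog check against the private registry (mapped to port 80).
REGISTRY="10.110.111.120"
CHECK_CMD="curl -s http://${REGISTRY}/v2/_catalog"
# Running this on the workstation should return a JSON list of repositories.
echo "$CHECK_CMD"
```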

Preparing the cluster nodes

On the master and worker nodes we need to: stop the firewall (or open the required ports), turn off swap, set the bridge netfilter rules, sync the clock, upgrade the kernel (rebooting into it), enable IPVS, install docker-ce, and point DNS for a few registry domains at the private registry:

Master & worker nodes

- hosts: ['masters', 'workers']
  become: yes
  tasks:
    - name: Stop the firewall and disable it on boot
      systemd:
        name: firewalld
        state: stopped
        enabled: no

    - name: Disable selinux
      command: setenforce 0

    - name: Disable SELinux on reboot
      selinux:
        state: disabled

    - name: Load the br_netfilter module
      command: modprobe br_netfilter

    - name: Let ip6tables FORWARD rules filter bridged packets
      sysctl:
        name: net.bridge.bridge-nf-call-ip6tables
        value: 1
        state: present

    - name: Let iptables FORWARD rules filter bridged packets
      sysctl:
        name: net.bridge.bridge-nf-call-iptables
        value: 1
        state: present

    - name: Turn off swap
      command: swapoff -a

    - name: Back up /etc/fstab
      command: cp /etc/fstab /etc/fstab_bak

    - name: Remove the swap entry from /etc/fstab
      lineinfile:
        path: /etc/fstab
        state: absent
        regexp: "swap"

    - name: Install ntpdate
      yum:
        name: ntpdate
        state: present
        update_cache: true

    - name: Sync the clock
      command: ntpdate -u ntp.api.bz

    - name: Add the ELRepo repository
      command: rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

    - name: Upgrade the kernel
      yum:
        name: kernel-ml
        state: present
        enablerepo: elrepo-kernel

    - name: Make boot entry 0 (the 4.20 kernel) the default
      command: grub2-set-default 0

    - name: Regenerate the grub2 config
      command: grub2-mkconfig -o /boot/grub2/grub.cfg

    - name: Reboot
      reboot:

    - name: Copy ipvs.modules to /etc/sysconfig/modules/ and make it executable
      copy:
        src: path/to/your/ipvs.modules
        dest: /etc/sysconfig/modules/ipvs.modules
        mode: 0755

    - name: Run the script to load the IPVS modules
      shell: "/bin/bash /etc/sysconfig/modules/ipvs.modules"

    - name: Install dependencies
      yum:
        name: "{{ packages }}"
        state: present
        update_cache: true
      vars:
        packages:
          - yum-utils
          - device-mapper-persistent-data
          - lvm2
          - socat
          - ipvsadm

    - name: Copy docker-ce-18.06
      copy:
        src: path/to/your/docker-ce-18.06.1.ce-3.el7.x86_64.rpm
        dest: /root
        mode: 0744

    - name: Install docker-ce-18.06 from the local rpm
      yum:
        name: /root/docker-ce-18.06.1.ce-3.el7.x86_64.rpm
        state: present

    - name: Open docker's iptables FORWARD rule
      lineinfile:
        path: /usr/lib/systemd/system/docker.service
        insertbefore: '^ExecReload'
        line: "ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT"

    - name: Create /etc/docker/daemon.json with 'insecure-registries' and 'registry-mirrors'
      lineinfile:
        path: /etc/docker/daemon.json
        line: '{"insecure-registries":["k8s.gcr.io", "gcr.io", "quay.io"],"registry-mirrors": ["https://5cj8vui1.mirror.aliyuncs.com"]}'
        create: yes

    - name: Resolve k8s.gcr.io, gcr.io and quay.io (whose IPs are blocked) to the k8s-docker-register private registry
      lineinfile:
        path: /etc/hosts
        insertbefore: '^127.0.0.1'
        line: "10.110.111.120   k8s.gcr.io gcr.io quay.io"

    - name: Reload the daemon, start docker, enable it on boot
      systemd:
        name: docker
        daemon_reload: yes
        state: started
        enabled: yes

    - name: Copy and unpack the k8s rpms
      unarchive:
        src: path/to/your/k8s-v1.13.0-rpms.tgz
        dest: /root

    - name: Install kubelet, kubeadm and kubectl (workers do not need kubectl; it can be removed later)
      command: rpm -Uvh /root/k8s-v1.13.0/* --force

    - name: Enable kubelet
      systemd:
        name: kubelet
        enabled: yes

The ipvs.modules file contains:

#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
    fi
done
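After the nodes come back up, a quick way to confirm the modules actually loaded is to count them in lsmod. A sketch, keeping the module list in one variable so it stays in sync with ipvs.modules:

```shell
# Same module list as ipvs.modules; count how many are visible in lsmod.
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
expected=$(echo $ipvs_modules | wc -w)
loaded=0
for m in $ipvs_modules; do
    # -w keeps ip_vs from also matching ip_vs_rr and friends
    if lsmod 2>/dev/null | grep -qw "$m"; then
        loaded=$((loaded + 1))
    fi
done
echo "loaded ${loaded}/${expected} IPVS-related modules"
```

On the k8s nodes this should report 14/14; on any other machine it simply reports how many happen to be loaded.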

Master node

- hosts: masters
  become: yes
  tasks:
    - name: Install kubelet, kubectl and kubeadm
      command: rpm -Uvh /root/k8s-v1.13.0/* --force

    - name: Enable kubelet
      systemd:
        name: kubelet
        enabled: yes

    - name: Copy kubeadm-config.yaml
      copy:
        src: path/to/your/kubeadm-config.yaml
        dest: /etc/kubernetes/kubeadm-config.yaml
        mode: 0666

    - name: Initialize the cluster
      shell: kubeadm init --config /etc/kubernetes/kubeadm-config.yaml >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: Create the .kube directory
      become: yes
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: Copy admin.conf to .kube/config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: $HOME/.kube/config
        remote_src: yes

    - name: Apply the Calico RBAC manifest
      become: yes
      shell: kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml >> rbac_apply.txt
      args:
        chdir: $HOME
        creates: rbac_apply.txt

    - name: Install the Calico network
      become: yes
      shell: kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt

    - name: Generate the command workers use to join the master
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: Save it as the join_command fact
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

kubeadm-config.yaml holds the kubeadm initialization options, including the k8s version and the api-server address and port:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
controlPlaneEndpoint: "10.110.111.111:6443"
apiServer:
  certSANs:
  - "10.110.111.111"
networking:
  podSubnet: "192.168.0.0/16"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Worker nodes

- hosts: workers
  become: yes
  tasks:
    - name: Remove the kubectl rpm # the file module does not expand globs, so use shell
      shell: rm -f /root/k8s-v1.13.0/*kubectl*

    - name: Install kubelet and kubeadm
      command: rpm -Uvh /root/k8s-v1.13.0/* --force

    - name: Enable kubelet
      systemd:
        name: kubelet
        enabled: yes

    - name: Join the master
      shell: "{{ hostvars['master-01'].join_command }} >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt

Just GO!!!

Finally, put all of the plays above into one yaml file, replace the placeholder parameters (the path/to/your entries and the like), and run ansible-playbook -i hosts just-go.yml to happily "one-click build" the K8s cluster!

ssh root@k8s-master-01
[root@k8s-master-01 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
k8s-master-01   Ready    master   16m   v1.13.0
k8s-worker-01   Ready    <none>   15m   v1.13.0
k8s-worker-02   Ready    <none>   15m   v1.13.0

And there it is: a small K8s cluster, good enough for personal testing. The code for this walkthrough has been uploaded to my GitHub; feel free to grab it and experiment.
