Three servers:
10.2.103.19/20/21
On the .21 host, run:
wget https://github.com/kubernetes-sigs/kubespray/archive/v2.13.1.tar.gz
tar -zxvf v2.13.1.tar.gz
cd kubespray-2.13.1
sudo yum install -y epel-release python3-pip
sudo pip3 install -r requirements.txt
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(10.2.103.19 10.2.103.20 10.2.103.21)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
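The builder names hosts node1..node3 by default. If you prefer a different prefix, inventory.py reads a HOST_PREFIX environment variable; this is an assumption about the 2.13.1 script, verify in contrib/inventory_builder/inventory.py before relying on it:
# assumes inventory.py honors HOST_PREFIX (default "node")
CONFIG_FILE=inventory/mycluster/hosts.yaml HOST_PREFIX=k8s- python3 contrib/inventory_builder/inventory.py ${IPS[@]}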
Inspect the generated hosts.yaml. Kubespray plans node roles automatically from the number of nodes supplied: here it places 2 master nodes, uses all 3 nodes as workers, and runs etcd on all 3 nodes.
[root@node1 kubespray-2.13.1]# cat inventory/mycluster/hosts.yaml
all:
  hosts:
    node1:
      ansible_host: 10.2.103.19
      ip: 10.2.103.19
      access_ip: 10.2.103.19
    node2:
      ansible_host: 10.2.103.20
      ip: 10.2.103.20
      access_ip: 10.2.103.20
    node3:
      ansible_host: 10.2.103.21
      ip: 10.2.103.21
      access_ip: 10.2.103.21
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
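If the automatic layout does not fit, edit the group membership by hand before running the playbook. For example, to also make node3 a master (an illustrative tweak, same file format as above):
    kube-master:
      hosts:
        node1:
        node2:
        node3: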
Review the global variables (the defaults are fine):
cat inventory/mycluster/group_vars/all/all.yml
The version installed by default is older; pin the Kubernetes version explicitly:
# vim inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
kube_version: v1.18.3
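Other settings commonly tuned in the same file; the values below are the kubespray defaults, verify them against your copy of k8s-cluster.yml:
kube_network_plugin: calico
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
cluster_name: cluster.local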
Set up passwordless SSH: the node running kubespray (the Ansible control node) needs key-based SSH access to all nodes.
On all nodes, run: ssh-keygen -t rsa
Then upload the generated id_rsa.pub to the .ssh directory of every server that needs passwordless access,
and run cat id_rsa.pub >> authorized_keys there to append the key.
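A shorter equivalent from the control node, assuming password login is still enabled and you connect as root (adjust the user to match your environment):
for host in 10.2.103.19 10.2.103.20 10.2.103.21; do
  ssh-copy-id root@$host   # appends the key to authorized_keys on each host
done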
cd /home/admin/kubespray-2.13.1
Run:
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root /home/admin/kubespray-2.13.1/cluster.yml
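Before the full playbook run, it is worth confirming Ansible can reach every node; a quick sanity check, not part of the original steps:
ansible all -i inventory/mycluster/hosts.yaml -m ping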
If you hit an error like:
Pull k8s.gcr.io/k8s-dns-node-cache:1.15.1 required is: True
For failures like this, pre-pull the image on each server from a mirror and retag it:
sudo docker pull registry.cn-hangzhou.aliyuncs.com/k8s-arthur/k8s-dns-node-cache:1.15.1
sudo docker tag registry.cn-hangzhou.aliyuncs.com/k8s-arthur/k8s-dns-node-cache:1.15.1 k8s.gcr.io/k8s-dns-node-cache:1.15.1
sudo docker rmi registry.cn-hangzhou.aliyuncs.com/k8s-arthur/k8s-dns-node-cache:1.15.1
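If several images fail to pull, a small wrapper saves repetition (pull_and_tag is a hypothetical helper name; the mirror prefix is the same one used above):
pull_and_tag() {
  local mirror=$1 official=$2
  sudo docker pull "$mirror"                 # fetch from the reachable mirror
  sudo docker tag "$mirror" "$official"      # retag with the name kubespray expects
  sudo docker rmi "$mirror"                  # drop the mirror tag
}
pull_and_tag registry.cn-hangzhou.aliyuncs.com/k8s-arthur/k8s-dns-node-cache:1.15.1 k8s.gcr.io/k8s-dns-node-cache:1.15.1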
Check node status:
kubectl get nodes -o wide
If you get the error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
On a master node, run:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
cp -p $HOME/.bash_profile $HOME/.bash_profile.bak$(date '+%Y%m%d%H%M%S')
echo "export KUBECONFIG=$HOME/.kube/config" >> $HOME/.bash_profile
source $HOME/.bash_profile
Then run kubectl get nodes again.
On worker nodes, the fix is:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/kubelet.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
cp -p $HOME/.bash_profile $HOME/.bash_profile.bak$(date '+%Y%m%d%H%M%S')
echo "export KUBECONFIG=$HOME/.kube/config" >> $HOME/.bash_profile
source $HOME/.bash_profile
Tear down the cluster:
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml
Notes: a few important configuration files
inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
inventory/mycluster/group_vars/all/all.yml
roles/download/defaults/main.yml
Image sources can be changed in main.yml.
Nodes can be added with scale.yml and removed with remove-node.yml; example invocations below.
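Invocation follows the same pattern as cluster.yml; remove-node.yml takes the node name as an extra var. The flag usage is from kubespray's documentation for this era, verify against your release:
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root scale.yml
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -e node=node3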