SuSE CaaS 4.5 Fault-Tolerant Environment Setup v1

1. Environment Overview

    3 Master Nodes + 3 Worker Nodes + 1 Management Node + 1 Mirror Node + 1 RMT/SMT

    Management Node:

        Manages the CaaS platform and also hosts the load-balancer service

    Mirror Server:

        Hosts the Helm chart repositories and the container registry

        The container registry serves container images for offline download inside the internal network

    RMT/SMT:

        Provides system update packages and the packages needed to build CaaS

(Network topology diagram omitted)

2. Prerequisites

Every server/VM in the k8s environment needs at least 2 (v)CPUs;

Server names must be fully qualified domain names (FQDN);

Disable IPv6;

Enable IP forwarding (net.ipv4.ip_forward = 1);

Swap must be disabled before the cluster platform is bootstrapped;

Clock synchronization (NTP);

A valid gateway must be configured for the cluster platform;

3. Preparation

3.1 Edit hosts

# vim /etc/hosts

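The hosts file must resolve every node's FQDN. A sketch, assuming the role-to-IP mapping implied by the addresses used in section 6.1; the mirror/charts address is an assumption:

```
# /etc/hosts -- sketch; the role-to-IP mapping is assumed from section 6.1
192.168.55.251  management.demo.com  management
192.168.55.21   master01.demo.com    master01
192.168.55.22   master02.demo.com    master02
192.168.55.23   master03.demo.com    master03
192.168.55.31   worker01.demo.com    worker01
192.168.55.32   worker02.demo.com    worker02
192.168.55.33   worker03.demo.com    worker03
192.168.55.131  mirror.demo.com      mirror
192.168.55.131  charts.demo.com      charts
```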

3.2 Kernel Parameters

# vim /etc/sysctl.conf

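Per the prerequisites in section 2 (IP forwarding on, IPv6 off), the additions amount to:

```
# /etc/sysctl.conf -- additions required by section 2
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```

Apply without a reboot with `sysctl -p`.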

3.3 Disable swap

# touch /etc/init.d/after.local

# chmod 744 /etc/init.d/after.local

# vim /etc/init.d/after.local

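On SLES, /etc/init.d/after.local runs at the end of boot; a minimal script that keeps swap disabled permanently:

```
#!/bin/sh
# /etc/init.d/after.local -- disable swap on every boot (section 2 requirement)
swapoff -a
```

Also remove or comment out any swap entries in /etc/fstab so the partition is not re-activated.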

3.4 Clock Synchronization

# sed -i '3i pool 192.168.55.131 iburst' /etc/chrony.conf

# systemctl enable chronyd.service

# systemctl restart chronyd.service

# systemctl status chronyd.service

3.5 Gateway Configuration

    Applies to: 3 Master Nodes + 3 Worker Nodes + 1 Management Node

# echo "default 192.168.55.1 - -" >> /etc/sysconfig/network/routes

# rcnetwork restart

3.6 Add Software Repositories


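The source does not show the repositories being added. On an RMT client, the usual approach is the rmt-client-setup script served by the RMT server itself; the hostname rmt.demo.com below is an assumption:

```
# The RMT server name rmt.demo.com is a placeholder for the internal RMT/SMT host
# curl http://rmt.demo.com/tools/rmt-client-setup --output rmt-client-setup
# sh rmt-client-setup https://rmt.demo.com
# zypper lr    # verify the repositories were added
```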

3.7 Apply System Updates

# zypper dup

3.8 Reboot

# reboot

4. Mirror Server Configuration

4.1 Install Packages

# zypper in docker helm-mirror skopeo

# systemctl enable --now docker.service   # start the service and enable it at boot

4.2 Pull the Registry Container Image from SUSE

# docker pull registry.suse.com/sles12/registry:2.6.2

Package and import the image (for offline transfer):

# docker save -o /tmp/registry.tar registry.suse.com/sles12/registry:2.6.2

# docker load -i /tmp/registry.tar

4.3 Registry Container Configuration File

# mkdir /etc/docker/registry/

# vim /etc/docker/registry/config.yml

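A minimal config.yml for the registry, matching the volume mounts and port used in section 4.4; the log level is a common default, not taken from the source:

```yaml
version: 0.1
log:
  level: info
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
```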

4.4 Start the Registry Container

# docker run -d -p 5000:5000 --restart=always --name registry \

 -v /etc/docker/registry:/etc/docker/registry:ro \

 -v /var/lib/registry:/var/lib/registry registry.suse.com/sles12/registry:2.6.2

# docker ps -a


# docker stats

# docker start registry

# docker stop registry

4.5 Configure the Nginx Web Server

# zypper install nginx

# vim /etc/nginx/vhosts.d/charts-server-http.conf

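A sketch of charts-server-http.conf that serves the chart data copied to /srv/www/charts in section 4.7.4; directives beyond the server name and root are assumptions:

```nginx
server {
    listen 80;
    server_name charts.demo.com;
    root /srv/www;          # charts are copied to /srv/www/charts in 4.7.4
    location /charts {
        autoindex on;       # lets helm fetch index.yaml and the packaged charts
    }
}
```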

# systemctl enable --now nginx.service

4.6 Push the CaaS Platform Images to the Registry Mirror

The full image list is published at:

https://documentation.suse.com/external-tree/en-us/suse-caasp/4/skuba-cluster-images.txt

Alternatively, run the following on a server with the skuba package installed:

# skuba cluster images


# mkdir /tmp/skuba-cluster-images

# vim /tmp/skuba-cluster-images/sync.yaml

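sync.yaml follows skopeo's source-YAML format: a registry at the top level, then the image names and tags reported by `skuba cluster images`. The image names and tags below are illustrative examples, not the authoritative list:

```yaml
registry.suse.com:
  images:
    caasp/v4.5/hyperkube:
      - v1.18.6        # example tag; use the output of `skuba cluster images`
    caasp/v4.5/etcd:
      - "3.4.13"       # example tag
```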

# cd /tmp/skuba-cluster-images

# skopeo sync --src yaml --dest dir sync.yaml /tmp/skuba-cluster-images/ --scoped

# skopeo sync --dest-tls-verify=false --src dir --dest docker /tmp/skuba-cluster-images/ mirror.demo.com:5000 --scoped

4.7 Fetch and Publish the Helm Chart Data

4.7.1 Download All Charts Locally from the Repository

# mkdir /tmp/charts

# cd /tmp/charts

# helm-mirror --new-root-url http://charts.demo.com/charts https://kubernetes-charts.suse.com /tmp/charts


4.7.2 Convert the Chart Image Information to skopeo Format

# helm-mirror inspect-images /tmp/charts/ -o skopeo=sync.yaml -i

Adjust the generated file:

Remove duplicate version entries and duplicate image entries (compare sync.yaml before and after the adjustment).

Notes:

    Images hosted on gcr.io require a proxy to be reachable from mainland China

    CaaS 4.5 installs helm2 by default; this document replaces helm2 with helm3, so Tiller is no longer needed and its images do not need to be downloaded.

4.7.3 Download the Chart Images and Publish Them to the Registry Mirror

# mkdir /tmp/skopeodata

# skopeo sync --src yaml --dest dir sync.yaml /tmp/skopeodata/ --scoped

# skopeo sync --dest-tls-verify=false --src dir --dest docker /tmp/skopeodata/ mirror.demo.com:5000 --scoped

List the contents of the local registry:

# curl mirror.demo.com:5000/v2/_catalog | tr "," "\n"
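The registry answers with a single line of comma-separated JSON; the `tr` filter only makes it readable by splitting on commas. A local simulation with made-up repository names:

```shell
# Simulate the one-line catalog JSON and split it on commas, one entry per line
echo '{"repositories":["caasp/v4.5/cilium","caasp/v4.5/coredns"]}' | tr "," "\n"
# {"repositories":["caasp/v4.5/cilium"
# "caasp/v4.5/coredns"]}
```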

4.7.4 Copy the Helm Chart Data to the Web Root

# cp -a /tmp/charts/ /srv/www/charts/

# chown -R nginx:nginx /srv/www/charts

# chmod -R 555 /srv/www/charts

# systemctl restart nginx.service

5. Nginx Load Balancer

    Deployed on the management node

5.1 Nginx Configuration

# zypper -n in nginx

# vim /etc/nginx/nginx.conf

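A sketch of the stream section that load-balances the Kubernetes API servers. The upstream layout follows the usual skuba load-balancer examples; everything except the log path (seen in section 5.2) is an assumption:

```nginx
# Appended to /etc/nginx/nginx.conf -- TCP load balancing for the apiservers
stream {
    upstream k8s-masters {
        server master01.demo.com:6443;
        server master02.demo.com:6443;
        server master03.demo.com:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-masters;
    }
    log_format proxy '$remote_addr [$time_local] $status $upstream_addr';
    access_log /var/log/nginx/k8s-masters-lb-access.log proxy;
}
```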

# systemctl enable --now nginx

# systemctl status nginx

5.2 Verify the Load Balancer

    Verify after CaaS has been deployed

management:~ # cd /root/CaaS-Cluster

management:~ # while true; do skuba cluster status; sleep 1; done

management:~ # tail -100f /var/log/nginx/k8s-masters-lb-access.log


6. ssh-agent Configuration

    Configured on the management node

6.1 Generate a Key Pair

management:~ # ssh-keygen

management:~ # cd ~/.ssh

management:~ # ssh-copy-id [email protected]

management:~ # ssh-copy-id [email protected]

management:~ # ssh-copy-id [email protected]

management:~ # ssh-copy-id [email protected]

management:~ # ssh-copy-id [email protected]

management:~ # ssh-copy-id [email protected]

management:~ # ssh-copy-id [email protected]

6.2 Start the ssh-agent

management:~ # eval "$(ssh-agent -s)"

6.3 Add the Private Key to the ssh-agent

management:~ # ssh-add ~/.ssh/id_rsa

management:~ # ssh-add -l

7. CaaS Deployment

7.1 Install Components

    Install on all management / master / worker nodes

    # zypper -n in -l -t pattern SUSE-CaaSP-Management

7.2 Configure cri-o to Use the Registry Container

    Install on all management / master / worker nodes

    # zypper -n install cri-o-1.18

    # mv /etc/containers/registries.conf /etc/containers/registries.conf.backup

    # vim /etc/containers/registries.conf

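A sketch of registries.conf (v2 format) that redirects pulls from registry.suse.com to the internal mirror; `insecure = true` matches the plain-HTTP registry started in section 4.4:

```toml
# /etc/containers/registries.conf -- point cri-o at the internal mirror
unqualified-search-registries = ["registry.suse.com"]

[[registry]]
prefix = "registry.suse.com"
location = "registry.suse.com"

[[registry.mirror]]
location = "mirror.demo.com:5000"
insecure = true
```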

7.3 Initialize CaaS

management:~ # cd ~/

management:~ # skuba cluster init --control-plane management.demo.com CaaS-Cluster

7.4 Bootstrap the First Master Node

management:~ # cd ~/CaaS-Cluster/

management:~ # skuba node bootstrap --target master01.demo.com master01 -v5

7.5 Join the Remaining Nodes

Syntax:

skuba node join --role <master|worker> --user <user> --sudo --target <fqdn> <node-name>

management:~ # skuba node join --role master --target master02.demo.com master02 -v5

management:~ # skuba node join --role master --target master03.demo.com master03 -v5

management:~ # skuba node join --role worker --target worker01.demo.com worker01 -v5

management:~ # skuba node join --role worker --target worker02.demo.com worker02 -v5

management:~ # skuba node join --role worker --target worker03.demo.com worker03 -v5

7.6 Test the Cluster

management:~ # mkdir ~/.kube

management:~ # cp ~/CaaS-Cluster/admin.conf ~/.kube/config

management:~ # kubectl get nodes


management:~ # kubectl get nodes -o wide


8. CaaS Cluster Status

8.1 Images Pulled on the Current Node

master01:~ # crictl images


8.2 Containers Running on the Current Node

master01:~ # crictl ps -a


8.3 Pods Running on the Node

master01:~ # crictl pods


8.4 View Container Logs

master01:~ # crictl logs d506b0fb5db13


8.5 Show Cluster Information

management:~ # kubectl cluster-info


View the cluster dump:

management:~ # kubectl cluster-info dump | less

management:~ # kubectl version --short=true


8.6 Show Resource Information

# kubectl --namespace=kube-system get deployments -o wide

# kubectl get nodes -o wide

# kubectl get pods --all-namespaces -o wide

# kubectl get svc --all-namespaces

9. K8s Stack

9.1 Install Helm

# zypper in helm

Starting with CaaSP 4.1.2, the cluster software ships with helm, so no separate installation is needed.

9.2 Replace helm2 with helm3

# zypper in helm3

# update-alternatives --set helm /usr/bin/helm3

9.3 Add the Mirror Server's Chart Repository

management:~ # helm repo add mirror-local http://charts.demo.com/charts

View the generated repository configuration file:

management:~ # cat ~/.config/helm/repositories.yaml

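The generated file is roughly the following (trimmed sketch; auth and TLS fields are omitted):

```yaml
apiVersion: ""
repositories:
- name: mirror-local
  url: http://charts.demo.com/charts
```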

List the chart repositories:

# helm repo list


Update the repository data:

# helm repo update


List the contents of the chart repos:

# helm search repo


Appendix A: Common Chart Repositories

● Microsoft chart repository

http://mirror.azure.cn/kubernetes/charts/

● Alibaba chart repository

https://apphub.aliyuncs.com/

https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

Official site: https://developer.aliyun.com/hub#/?_k=bfaiyc

● Kubernetes chart repository

https://hub.kubeapps.com/charts/incubator

● SUSE chart repository

https://kubernetes-charts.suse.com

● Google chart repository

http://storage.googleapis.com/kubernetes-charts-incubator

Appendix B: Reset a CaaS Deployment

swapoff -a

kubeadm reset

systemctl daemon-reload

systemctl unmask kubelet.service

systemctl restart kubelet

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

Appendix C: Handling Node-Join Errors

If a master or worker node throws errors after successfully joining the cluster, reboot the master and worker nodes one at a time to recover.
