K8s Environment Setup on aarch64 (ARM architecture)

Installation Requirements

  • At least two servers running CentOS 7.x (aarch64)
  • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB disk or more
  • Full network connectivity between all machines in the cluster
  • Internet access to pull images (or prepare the images in advance)

Environment:

Linux version 4.19.90-17.5.ky10.aarch64 ([email protected]) (gcc version 7.3.0 (GCC)) #1 SMP Fri Aug 7 13:35:33 CST 2020

Hosts
IP: Huawei Cloud k8s master 119.3.225.139  hostname: master  OS: Linux version 4.19.90-17.5.ky10.aarch64

IP: Huawei Cloud k8s node1 119.3.212.217  hostname: node1  OS: Linux version 4.19.90-17.5.ky10.aarch64
IP: Huawei Cloud k8s node2 139.9.129.2  hostname: node2  OS: Linux version 4.19.90-17.5.ky10.aarch64
IP: Huawei Cloud k8s node3 114.116.230.233  hostname: node3  OS: Linux version 4.19.90-17.5.ky10.aarch64
IP: Huawei Cloud k8s node4 124.71.226.71  hostname: node4  OS: Linux version 4.19.90-17.5.ky10.aarch64
IP: Huawei Cloud k8s node5 119.3.225.139  hostname: node5  OS: Linux version 4.19.90-17.5.ky10.aarch64
Change the hostname on every node so the nodes are easy to tell apart (run on the respective node):
#master node
hostnamectl set-hostname k8s-master
#node1
hostnamectl set-hostname k8s-node1
#node2
hostnamectl set-hostname k8s-node2
#node3
hostnamectl set-hostname k8s-node3
#node4
hostnamectl set-hostname k8s-node4
#node5
hostnamectl set-hostname k8s-node5


Run the following command to verify:

hostname 


Installation Steps

Disable the firewall on all nodes

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux on all nodes

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0


Disable swap on all nodes

swapoff -a  # temporary
# permanent: comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/g' /etc/fstab
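In the sed expression, `&` re-inserts the whole matched line, so every line containing `swap` is replaced by itself with a leading `#`. A minimal local sketch, run against a temporary copy (with illustrative fstab entries) rather than the real /etc/fstab:

```shell
# Work on a temp copy so the real /etc/fstab is untouched (sample entries)
tmpfstab=$(mktemp)
cat > "$tmpfstab" << 'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same substitution as above: '&' re-inserts the matched line after '#'
sed -i 's/.*swap.*/#&/g' "$tmpfstab"

grep swap "$tmpfstab"
# prints: #/dev/mapper/centos-swap swap swap defaults 0 0
rm -f "$tmpfstab"
```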

Add hostname-to-IP mappings on all nodes (run on every machine)

cat >> /etc/hosts << EOF
192.168.1.53 k8s-master
192.168.1.2 k8s-node1
192.168.1.24 k8s-node2
192.168.1.91 k8s-node3
192.168.1.92 k8s-node4
192.168.1.114 k8s-node5
EOF

Synchronize the time (optional)

yum install ntpdate -y
ntpdate  ntp.api.bz

Pass bridged IPv4 traffic to iptables chains (run on every machine)

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
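Note that on some kernels the `net.bridge.*` keys only exist after the `br_netfilter` module is loaded (`modprobe br_netfilter`), so load it first if `sysctl --system` reports the keys as unknown. As a minimal sketch of what the heredoc writes, using a scratch directory instead of /etc/sysctl.d:

```shell
# Write the same fragment to a scratch directory instead of /etc/sysctl.d
confdir=$(mktemp -d)
cat > "$confdir/k8s.conf" << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Both bridge-nf keys should be present and set to 1 so iptables sees
# bridged pod traffic
grep -c '= 1' "$confdir/k8s.conf"
# prints: 2
rm -rf "$confdir"
```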


Add the Aliyun YUM repo on all nodes

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

#clear the cache
yum clean all

#download the repo's package metadata to the local cache (makecache builds the cache)
yum makecache

#list the available kubectl versions
yum list kubectl --showduplicates | sort -r

Install Docker on all nodes

See the separate CSDN post "Docker 离线安装" (Docker offline installation).

Install kubeadm, kubelet, and kubectl on all nodes

The version is pinned here; change it if you need a different one.

yum install -y kubelet-1.14.8 kubeadm-1.14.8 kubectl-1.14.8
systemctl enable kubelet
 
#check the kubelet version
kubelet --version

#check the kubeadm version
kubeadm version

#reload systemd unit files
systemctl daemon-reload

#start kubelet
systemctl start kubelet

#check kubelet's status
systemctl status kubelet
#if it fails to start, ignore the error for now; kubeadm init will bring it up later

#enable start on boot
systemctl enable kubelet

#check whether kubelet is enabled on boot (enabled: on, disabled: off)
systemctl is-enabled kubelet

#view the logs
journalctl -xefu kubelet

Pre-pull the images Kubernetes needs (optional; master node)

# list the required images
kubeadm config images list

# outside China, `kubeadm config images pull` downloads them directly; from inside
# China that registry is unreachable, so pull from the Aliyun mirror with the loop below
images=(
    kube-apiserver:v1.14.8
    kube-controller-manager:v1.14.8
    kube-scheduler:v1.14.8
    kube-proxy:v1.14.8
    pause:3.1
    etcd:3.3.10
    coredns:1.3.1
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
done
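The loop pulls each image from the Aliyun mirror and re-tags it under the k8s.gcr.io name that kubeadm expects. A dry run (echoing instead of executing, with a shortened illustrative image list) shows the exact commands it generates:

```shell
# Shortened image list for illustration; the real list is shown above
images=(
    kube-apiserver:v1.14.8
    pause:3.1
)
repo=registry.cn-hangzhou.aliyuncs.com/google_containers
for imageName in "${images[@]}"; do
    # Drop the 'echo' to actually pull and tag
    echo docker pull ${repo}/${imageName}
    echo docker tag ${repo}/${imageName} k8s.gcr.io/${imageName}
done
```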


View the images

docker images 

Initialize the master node

# --apiserver-advertise-address=192.168.1.53 is this machine's private (internal) IP
kubeadm init \
  --apiserver-advertise-address=192.168.1.53 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.14.8 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
 

After it runs, output like the following appears:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
Installing Addons | Kubernetes

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.30.0.7:6443 --token tkilag.kmuow6uqn6dw8j8g \
--discovery-token-ca-cert-hash sha256:46a25d6034c114b078efbe3ed0d8b5402a4842afab7d5b72707468a5a7b088b9

On success it also prints the join command below. Copy it and run it on each worker node to join the cluster (the token and hash are generated fresh each time; use the values from your own output):

kubeadm join 192.168.1.53:6443 --token k0pv17.nfzxuv2mex36szfr \
    --discovery-token-ca-cert-hash sha256:4828f606dc29560b70d119c5eb5ff79e304f045e3061c9f5a06e5dba7f0934af 

If you lose it, regenerate it with:

kubeadm token create --print-join-command


Error:

[Screenshot: kubeadm init error output]

IPv4 forwarding needs to be enabled:

sysctl -w net.ipv4.ip_forward=1

Run kubeadm init again

[Screenshot: output after the fix]

Run on the master node

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Run on the master node

kubectl get nodes
#the nodes show NotReady because no network plugin has been installed yet
#check the kubelet logs
journalctl -xef -u kubelet -n 20

Modify the node label

kubectl label node k8s-master nodename=master

View the labels:

kubectl get nodes --show-labels

Install the Calico network plugin (run the following steps on the master only)

wget  https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget  https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
 
cat calico.yaml |grep image
 
      - image: calico/typha:v3.3.7
          image: calico/node:v3.3.7
          image: calico/cni:v3.3.7

#pull the images
docker pull calico/typha:v3.3.7
docker pull calico/node:v3.3.7
docker pull calico/cni:v3.3.7
 

vim calico.yaml 
 
#Edit the value under CALICO_IPV4POOL_CIDR in calico.yaml: this is the Calico IPAM
#address pool from which Pod IPs are allocated
#The Pod network (CALICO_IPV4POOL_CIDR) must match the --pod-network-cidr passed to kubeadm init earlier
#Choose the working mode (CALICO_IPV4POOL_IPIP): Never (plain BGP), Always (IPIP), or CrossSubnet (BGP with IPIP across subnets)

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

[Screenshot: CALICO_IPV4POOL_CIDR value in calico.yaml]

Install Calico

kubectl apply -f rbac-kdd.yaml 
kubectl apply -f calico.yaml

[Screenshot: kubectl apply output]

Install ingress-nginx (optional)

# Pick an edge node (the node that exposes the network externally) and label it,
# so the ingress controller is scheduled onto that node.
kubectl label nodes k8s-master edgenode=true

#svn: http://192.168.178.11/iosdb/ccyf/alarm/k8s/ingress-deploy.yaml
kubectl apply -f ingress-deploy.yaml

# check the pods
kubectl get pod -o wide -n ingress-nginx


 

Deploy the UI (see Releases · kubernetes/dashboard · GitHub)

Create the certificates

mkdir dashboard-certs
cd dashboard-certs/
#create the namespace
kubectl create namespace kubernetes-dashboard
# create the key file
openssl genrsa -out dashboard.key 2048
#certificate request; CN= can be changed to a real IP or domain
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=kubernetes-dashboard-certs'
#self-signed certificate
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
#create the kubernetes-dashboard-certs secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
cd ../
#download the dashboard.yaml file
#svn: http://192.168.178.11/iosdb/ccyf/alarm/k8s/dashboard.yaml
kubectl apply -f dashboard.yaml
kubectl get pods -n kubernetes-dashboard
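The openssl key/CSR/self-sign sequence above can be exercised end-to-end in a scratch directory before touching the cluster; `openssl x509 -noout -subject` confirms the CN that was requested:

```shell
# Same key/CSR/self-signed-cert flow as above, in a scratch directory
certdir=$(mktemp -d)
cd "$certdir"
openssl genrsa -out dashboard.key 2048
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key \
    -subj '/CN=kubernetes-dashboard-certs'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# Inspect the subject of the resulting certificate; it should show the CN above
openssl x509 -noout -subject -in dashboard.crt
cd - >/dev/null
rm -rf "$certdir"
```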

Open: https://119.3.176.172:30001/#/login

  • Create a service account and bind it to the default cluster-admin cluster role
  • Log in to the Dashboard with the token printed by the command below
  • Some browsers may fail to open the page; Firefox is confirmed to work
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# get the login token
kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
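The command substitution above picks out the secret whose name contains `dashboard-admin` from the `kubectl get secret` listing. The awk filter can be checked against sample output (secret names and ages below are illustrative):

```shell
# Sample 'kubectl get secret' output; names and ages are made up
sample='NAME                          TYPE                                  DATA   AGE
default-token-abcde           kubernetes.io/service-account-token   3      5m
dashboard-admin-token-xyz12   kubernetes.io/service-account-token   3      1m'

# Same filter as above: print column 1 of the line matching dashboard-admin
echo "$sample" | awk '/dashboard-admin/{print $1}'
# prints: dashboard-admin-token-xyz12
```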

Uninstall kubernetes-dashboard

kubectl delete deployment kubernetes-dashboard --namespace=kubernetes-dashboard
kubectl delete service kubernetes-dashboard  --namespace=kubernetes-dashboard
kubectl delete role kubernetes-dashboard-minimal --namespace=kubernetes-dashboard
kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kubernetes-dashboard
kubectl delete sa kubernetes-dashboard --namespace=kubernetes-dashboard
kubectl delete secret kubernetes-dashboard-certs --namespace=kubernetes-dashboard
kubectl delete secret kubernetes-dashboard-csrf --namespace=kubernetes-dashboard
kubectl delete secret kubernetes-dashboard-key-holder --namespace=kubernetes-dashboard

Uninstall Kubernetes

rm -rf /etc/kubernetes/*
rm -rf ~/.kube/*
rm -rf /var/lib/etcd/*
lsof -i :6443|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :10250|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :10257|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :10259|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :2379|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :2380|grep -v "PID"|awk '{print "kill -9",$2}'|sh
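Each of the pipelines above turns `lsof` output into kill commands: drop the header line (`grep -v "PID"`), take the PID column with awk, and pipe the generated commands to sh. The transformation can be previewed on sample output (with a made-up PID) by stopping before the final `| sh`:

```shell
# Sample 'lsof -i :6443' output; the PID is illustrative
sample='COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kube-apis 1234 root    7u  IPv6  12345      0t0  TCP *:6443 (LISTEN)'

# Same pipeline as above, stopping before '| sh' to preview the commands
echo "$sample" | grep -v "PID" | awk '{print "kill -9",$2}'
# prints: kill -9 1234
```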

rm -rf /var/lib/kubelet/
rm -rf /var/lib/dockershim/
rm -rf /var/run/kubernetes
rm -rf /var/lib/cni
rm -rf /etc/kubernetes/*
