This guide covers installing a Kubernetes (k8s) cluster in an intranet (offline) environment; Linux environment: CentOS 7.6.
The main steps are:
1. Install Docker and set up a private image registry
2. Install kubeadm/kubectl/kubelet
3. Initialize the master node
4. Join the worker nodes
5. Install flannel
1. Install Docker and set up a private image registry
Download the offline Docker packages from https://download.docker.com/linux/centos/7/x86_64/stable/Packages/ :
docker-ce-cli-18.09.7-3.el7.x86_64.rpm
docker-ce-18.09.7-3.el7.x86_64.rpm
container-selinux-2.107-1.el7_6.noarch.rpm
containerd.io-1.2.2-3.el7.x86_64.rpm
Install these packages on every node (rpm -ivh *.rpm), then configure /etc/docker/daemon.json:
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
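Docker refuses to start when daemon.json is not valid JSON, so it is worth syntax-checking the file before restarting the daemon. A minimal sketch, assuming a Python interpreter is available; the /tmp path stands in for the real /etc/docker/daemon.json:

```shell
# Sketch: syntax-check a daemon.json before handing it to Docker.
# The /tmp path is illustrative; on a real node the file is /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
if python3 -m json.tool /tmp/daemon.json > /dev/null 2>&1; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: invalid JSON"
fi
```

On a stock CentOS 7 host, which ships Python 2, `python -m json.tool` works the same way.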
1. On a machine with internet access, pull the registry image:
docker pull registry:2
2. Export the image:
docker save -o registry.tar registry:2
3. Upload registry.tar to the offline server and import it:
docker load -i registry.tar
4. Start the registry:
docker run -d -v /registry:/var/lib/registry -p 5000:5000 --restart=always --name registry registry:2
5. Modify the Docker daemon.json on every k8s cluster node so it trusts the plain-HTTP (insecure) registry:
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries": ["10.209.68.12:5000"]
}
systemctl daemon-reload
systemctl restart docker
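With the registry running and every node trusting it, publishing an image is just a retag plus a push. The naming rule is purely mechanical and can be sketched without invoking docker; `private_name` is a hypothetical helper, not part of the original guide:

```shell
# Sketch: compute the private-registry name for an image.
# REGISTRY matches the insecure registry configured above;
# private_name is a hypothetical helper for illustration only.
REGISTRY="10.209.68.12:5000"
private_name() {
  echo "${REGISTRY}/$1"
}
# The real import would then be:
#   docker tag registry:2 "$(private_name registry:2)"
#   docker push "$(private_name registry:2)"
private_name registry:2   # -> 10.209.68.12:5000/registry:2
```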
# Run on every master and worker node
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
grep -v swap /etc/fstab_bak > /etc/fstab
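The sed and grep edits above rewrite system files in place, so a dry run on throwaway copies is a cheap sanity check. A sketch; the /tmp files and their sample contents are made up for illustration:

```shell
# Sketch: dry-run the SELinux and fstab edits on sample copies.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /tmp/selinux-config
grep '^SELINUX=' /tmp/selinux-config   # -> SELINUX=disabled

printf '/dev/mapper/centos-root / xfs defaults 0 0\n/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab_bak
grep -v swap /tmp/fstab_bak > /tmp/fstab
cat /tmp/fstab                         # only the root entry remains
```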
# Set the hostname (on the other nodes, replace master with the proper name, e.g. node1, node2)
hostnamectl set-hostname master
# Let iptables see bridged traffic
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

2. Install kubeadm/kubectl/kubelet
On a server with internet access, configure the yum repo and download the required rpm packages (e.g. with yum's --downloadonly option):
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Upload the rpm packages to the offline servers and install them:
rpm -ivh *.rpm
Enable kubelet at boot:
systemctl enable kubelet.service
List the images kubeadm needs:
# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Write a script that pulls the images from the Aliyun mirror and retags them as k8s.gcr.io:
# cat pull-images.sh
#!/bin/bash
images=(
kube-apiserver:v1.18.0
kube-controller-manager:v1.18.0
kube-scheduler:v1.18.0
kube-proxy:v1.18.0
pause:3.2
etcd:3.4.3-0
coredns:1.6.7
)
for imageName in ${images[@]}; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done

# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy                v1.18.0   43940c34f24f   7 days ago     117MB
k8s.gcr.io/kube-apiserver            v1.18.0   74060cea7f70   7 days ago     173MB
k8s.gcr.io/kube-controller-manager   v1.18.0   d3e55153f52f   7 days ago     162MB
k8s.gcr.io/kube-scheduler            v1.18.0   a31f78c7c8ce   7 days ago     95.3MB
k8s.gcr.io/pause                     3.2       80d28bedfe5d   6 weeks ago    683kB
k8s.gcr.io/coredns                   1.6.7     67da37a9a360   2 months ago   43.8MB
k8s.gcr.io/etcd                      3.4.3-0   303ce5db0e90   5 months ago

Write a script that exports the images to tar files:
# cat save-images.sh
#!/bin/bash
images=(
kube-apiserver:v1.18.0
kube-controller-manager:v1.18.0
kube-scheduler:v1.18.0
kube-proxy:v1.18.0
pause:3.2
etcd:3.4.3-0
coredns:1.6.7
)
for imageName in ${images[@]}; do
  docker save -o `echo ${imageName}|awk -F ':' '{print $1}'`.tar k8s.gcr.io/${imageName}
done
Compress the tar files and upload the archive to the offline servers:
tar czvf kubeadm-images-1.18.0.tar.gz *.tar
On each node, import the offline images (or push them into the private registry):
# cat load-image.sh
#!/bin/bash
ls /root/kubeadm-images-1.18.0 > /root/images-list.txt
cd /root/kubeadm-images-1.18.0
for i in $(cat /root/images-list.txt)
do
  docker load -i $i
done
Import the images:
# ./load-image.sh

3. Initialize the master node
kubeadm init --apiserver-advertise-address 10.209.69.12 --apiserver-bind-port 6443 --kubernetes-version 1.18.0 --pod-network-cidr 10.244.0.0/16 --service-cidr 10.1.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Join the worker nodes
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.209.69.12:6443 --token voj8z6.ytej05mfnul5gci7 \
    --discovery-token-ca-cert-hash sha256:d12c6150f5752238e8eabe81403ff4defaf2aeb1a1c159ed7310e027b367b57b

5. Install flannel
Download https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml.
On a machine with internet access, pull every image the manifest references, then push them into the private registry:
docker tag xxx:vxxx 10.209.69.12:5000/xxx:vxxx
docker push 10.209.69.12:5000/xxx:vxxx
Point the images in the yml at the private registry, then deploy:
kubectl apply -f flannel.yml
Check that the nodes become Ready:
kubectl get nodes
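The tar-file naming used in save-images.sh (the image name with the tag stripped, plus .tar) can be checked in isolation without invoking docker:

```shell
# Sketch: reproduce save-images.sh's output filenames (a subset of the image list).
images=(
kube-apiserver:v1.18.0
pause:3.2
etcd:3.4.3-0
)
for imageName in ${images[@]}; do
  # Same expression as in save-images.sh: drop the tag, append .tar
  echo "$(echo ${imageName} | awk -F ':' '{print $1}').tar"
done
# -> kube-apiserver.tar
#    pause.tar
#    etcd.tar
```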