Kubernetes Cluster Environment Setup

Setting up a k8s environment

1.1 Unified versions

Docker       18.09.0
---
kubeadm-1.14.0-0 
kubelet-1.14.0-0 
kubectl-1.14.0-0
---
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
---
flannel v0.11.0-amd64

1.2 k8s installation steps

1.2.1 Update the system and install dependencies

yum -y update
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
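To sanity-check that the key tools are now on PATH (a minimal sketch; run as root so /usr/sbin is searched):

which conntrack ipvsadm ipset jq iptables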

1.2.2 Install Docker

Install Docker, pinned to version 18.09.0.

01 Install the required dependencies
    sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2

02 Configure the Docker repository
    sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[Also configure an image registry mirror accelerator here]
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["http://2595fda0.m.daocloud.io"]
}
EOF
sudo systemctl daemon-reload

03 Install Docker

  yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io

04 Start Docker and enable it on boot
    sudo systemctl start docker && sudo systemctl enable docker
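To confirm that Docker 18.09.0 is running and that the mirror from daemon.json took effect, something like the following should work:

sudo docker version
sudo docker info | grep -A 1 -i "registry mirrors"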

1.2.3 Edit the hosts file

(1) On the master

# Set the master's hostname and edit the hosts file
sudo hostnamectl set-hostname master

vi /etc/hosts
192.168.1.157 master
192.168.1.158 node1
192.168.1.159 node2

(2) Run the following on node1 and node2 respectively

# Set each node's hostname and edit the hosts file
sudo hostnamectl set-hostname node1
sudo hostnamectl set-hostname node2

vi /etc/hosts
192.168.1.157 master
192.168.1.158 node1
192.168.1.159 node2

(3) Test connectivity between the hosts with ping.
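For example, from the master:

ping -c 3 node1
ping -c 3 node2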

1.2.4 Basic system prerequisites

# (1) Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# (2) Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# (3) Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab

# (4) Set the iptables ACCEPT rules
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

# (5) Set kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
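If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is usually not loaded yet; a sketch to load it and re-check:

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables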

1.2.5 Install kubeadm, kubelet, and kubectl

(1) Configure the yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(2) Install kubeadm, kubelet, and kubectl

yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0
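A quick check that the pinned 1.14.0 versions were installed (a sketch):

kubeadm version -o short
kubectl version --client --short
rpm -q kubelet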

(3) Make Docker and kubelet use the same cgroup driver

# docker
vi /etc/docker/daemon.json
    "exec-opts": ["native.cgroupdriver=systemd"],

systemctl restart docker
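For reference, after this change /etc/docker/daemon.json should look roughly like the following (the registry-mirrors entry comes from the accelerator step in 1.2.2):

{
  "registry-mirrors": ["http://2595fda0.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}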

# kubelet: if this reports that the file or directory does not exist, that is fine; just continue with the steps below
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl enable kubelet && systemctl start kubelet
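At this point kubelet will keep restarting until kubeadm init (or kubeadm join) generates its configuration; that is expected. To watch its state:

systemctl status kubelet
journalctl -u kubelet -f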

1.2.6 Pull the images from a domestic mirror

  • List the images kubeadm needs
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
  • Create a kubeadm.sh script that pulls each image, re-tags it, and removes the original mirror tag
#!/bin/bash

set -e

KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

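# For each image: pull from the Aliyun mirror, re-tag it as k8s.gcr.io, then remove the mirror tag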
for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
  • Run the script and check the images
# Run the script
sh ./kubeadm.sh

# Check the images
docker images
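All seven k8s.gcr.io images listed above should now be present; a quick filter:

docker images | grep k8s.gcr.io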

1.2.7 Initialize the master with kubeadm init

(1) Initialize the master node

Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Note: this step is performed on the master node only.

# The images are already available locally
kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=192.168.1.157 --pod-network-cidr=10.244.0.0/16
[To reset the cluster state and start over, run kubeadm reset, then repeat the command above]
[init] Using Kubernetes version: v1.14.0

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.157:6443 --token 40kelq.09xe7ldc86xp6sqm \
    --discovery-token-ca-cert-hash sha256:b958ca9b91fba40dfd1246132d741c9d8a8ca8f63f4826457c9bbfde413733d9 

    # Generate a join token that never expires
    kubeadm token create --ttl 0 --print-join-command

(2) Remember to save the kubeadm join command printed at the end of the output.
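Once the join command has been run as root on node1 and node2, the workers should appear from the master (they will report NotReady until the network plugin in 1.2.8 is installed); a quick check:

kubectl get nodes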

(3) Follow the instructions from the init output

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
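If you are working as root instead, an equivalent option (not shown in the output above) is to point kubectl at the admin kubeconfig directly:

export KUBECONFIG=/etc/kubernetes/admin.conf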

Now run kubectl cluster-info to check that the control plane is up.

(4) Verify by checking the pods

Wait a moment; components such as etcd, kube-controller-manager, and kube-scheduler will show up as pods that were installed successfully.

Note: coredns is not running yet; a network plugin needs to be installed first.

kubectl get pods -n kube-system

(5) Health check

curl -k https://localhost:6443/healthz

1.2.8 Install the flannel network plugin

After the steps above, coredns has still not started successfully, so a network plugin needs to be installed.

docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64

Download kube-flannel.yml:

https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml

Note: the Network value in the file must match the pod-network-cidr=10.244.0.0/16 passed to kubeadm init.

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
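After downloading the manifest and confirming the Network value, apply it on the master (the standard flannel installation step):

kubectl apply -f kube-flannel.yml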

Then check:

kubectl get pods -n kube-system

If coredns still has not started, it is because the master does not schedule pods by default; run the following:

kubectl taint nodes --all node-role.kubernetes.io/master-
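Give it a minute or two, then confirm that coredns is Running and the nodes report Ready:

kubectl get pods -n kube-system -o wide
kubectl get nodes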
