A relatively simple approach
- Due to limited resources, two bare CentOS 7.x virtual machines are used here
- The installation process is essentially the same whether you have two machines or more
Docker 18.09.0
---
kubeadm-1.14.0-0
kubelet-1.14.0-0
kubectl-1.14.0-0
---
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
---
calico:v3.9
Run the following steps on every machine
yum -y update
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
# Install wget and switch to the Aliyun yum mirror
yum install -y wget
cd /etc/yum.repos.d/
mv CentOS-Base.repo CentOS-Base.repo.back
wget -O CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
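Optionally, confirm that the new repo configuration is active before moving on:
yum repolist    # base/extras/updates should be listed and populated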
# 01 Install the required dependencies
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# 02 Add the Docker repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# 03 Configure the Aliyun registry accelerator (log in to your Aliyun account to get your own accelerator address)
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["replace with your own accelerator address"]
}
EOF
# 04 Reload the daemon configuration
sudo systemctl daemon-reload
# 05 Install Docker
yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
# 06 Start Docker and enable it at boot
sudo systemctl start docker && sudo systemctl enable docker
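Optionally, a quick check that Docker is running and that the mirror address was picked up (nothing here is specific to this setup):
docker --version                             # should report 18.09.0
docker info | grep -A 1 "Registry Mirrors"   # should list your accelerator address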
Master node
# Set the master's hostname and add both hosts to /etc/hosts
sudo hostnamectl set-hostname m
vi /etc/hosts
192.168.121.138 m
192.168.121.140 w1
Worker node
# Set the hostname on worker01/02 and add both hosts to /etc/hosts
sudo hostnamectl set-hostname w1
vi /etc/hosts
192.168.121.138 m
192.168.121.140 w1
Ping test between the two nodes
m: ping w1
w1: ping m
# (1) Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# (2) Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# (3) Turn off swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# (4) Reset iptables and set the FORWARD policy to ACCEPT
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# (5) Set the kernel parameters required by Kubernetes
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
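If sysctl complains that these bridge-nf keys do not exist, the br_netfilter kernel module is probably not loaded yet; loading it first is a safe extra step on CentOS 7:
# Load the bridge netfilter module so the net.bridge.* keys are available, then re-apply
modprobe br_netfilter
lsmod | grep br_netfilter
sysctl --system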
Configure the Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl
# List the available versions, newest first
yum list kubeadm --showduplicates | sort -r
cri-tools-1.13.0-0.x86_64        from kubernetes
kubeadm-1.14.0-0.x86_64          from kubernetes
kubectl-1.14.0-0.x86_64          from kubernetes
kubelet-1.14.0-0.x86_64          from kubernetes
kubernetes-cni-0.8.7-0.x86_64    from kubernetes
socat-1.7.3.2-2.el7.x86_64       from base
# Install
yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0 --skip-broken
# The install above failed for me; if that happens, install each component separately with the following commands
yum install -y cri-tools-1.13.0-0.x86_64
yum install -y kubeadm-1.14.0-0.x86_64
yum install -y kubectl-1.14.0-0.x86_64
yum install -y kubelet-1.14.0-0.x86_64
yum install -y kubernetes-cni-0.8.7-0.x86_64
yum install -y socat-1.7.3.2-2.el7.x86_64
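A quick sanity check that the expected 1.14.0 packages were actually installed (optional):
kubeadm version
kubelet --version
kubectl version --client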
Make Docker and Kubernetes use the same cgroup driver
# Add the following entry to /etc/docker/daemon.json (note: separate entries with a comma)
vi /etc/docker/daemon.json
"exec-opts": ["native.cgroupdriver=systemd"],
# Restart Docker
systemctl restart docker
# kubelet: if this command reports that the file or directory does not exist, that is fine
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Enable kubelet and start it on boot
systemctl enable kubelet && systemctl start kubelet
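Note that until kubeadm init has run, kubelet has no configuration yet and will keep restarting; that is expected. You can still confirm the service is enabled:
systemctl status kubelet    # "activating (auto-restart)" is normal before kubeadm init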
(1) List the images kubeadm will use
kubeadm config images list
You'll notice these are all hosted on the foreign registry k8s.gcr.io, which may not be reachable directly:
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
(2) Work around the unreachable k8s.gcr.io images by pulling them from an Aliyun mirror and re-tagging them; save the script below as kubeadm.sh
#!/bin/bash
set -e

# Versions to pull, matching the kubeadm config images list output above
KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

# Target registry name expected by kubeadm, and the Aliyun mirror to pull from
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

# Pull each image from the Aliyun mirror, re-tag it as k8s.gcr.io/..., then drop the mirror tag
for imageName in ${images[@]} ; do
    docker pull $ALIYUN_URL/$imageName
    docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
    docker rmi  $ALIYUN_URL/$imageName
done
# Run the script
sh ./kubeadm.sh
# Verify the images are now tagged as k8s.gcr.io/...
docker images
Initialize the master node (run on the master only)
kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=<the master's local IP> --pod-network-cidr=10.244.0.0/16
# I got a version-mismatch error here; removing kubelet and reinstalling the matching versions fixed it
yum -y remove kubelet
yum -y install kubelet-1.14.0
yum -y install kubeadm-1.14.0
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.121.138:6443 --token y4yylq.ev18cfq87ic0yqmz \
--discovery-token-ca-cert-hash sha256:af4b166caaaacb231fc33027fdb7d6a6c2bb455dd3a478fe84e191ade0d5819b
--- Error
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[Fix]
ll /etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
kubectl get pods -n kube-system
Health check
curl -k https://localhost:6443/healthz
Expected output: ok
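A couple of other quick checks on the master at this point (optional; they only assume kubectl is configured as above):
kubectl cluster-info
kubectl get cs    # scheduler, controller-manager and etcd-0 should all report Healthy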
The network plugin enables communication between pods across nodes.
We use calico; again, run this on the master node
# Install calico into the cluster
kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
# Confirm that calico installed successfully (watch until the pods are Running)
kubectl get pods --all-namespaces -w
Run the join command printed at the end of the master's kubeadm init output on each worker node
kubeadm join 192.168.121.138:6443 --token y4yylq.ev18cfq87ic0yqmz \
--discovery-token-ca-cert-hash sha256:af4b166caaaacb231fc33027fdb7d6a6c2bb455dd3a478fe84e191ade0d5819b
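If the original token has expired or the join command was lost, a fresh one can be generated on the master (standard kubeadm behavior, not specific to this setup):
# Run on the master; prints a complete kubeadm join command with a new token
kubeadm token create --print-join-command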
Check the cluster status
kubectl get nodes
NAME                   STATUS   ROLES    AGE     VERSION
master-kubeadm-k8s     Ready    master   19m     v1.14.0
worker01-kubeadm-k8s   Ready    <none>   3m6s    v1.14.0
worker02-kubeadm-k8s   Ready    <none>   2m41s   v1.14.0
I got an error while joining a worker node:
not found /etc/kubernetes/admin.conf
The fix was to copy admin.conf from the master to the worker,
then update .bash_profile and reload it; a sketch is shown below.
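A minimal sketch of that fix, assuming root SSH access from the worker to the master (hostname m, as configured in /etc/hosts earlier):
# On the worker: copy admin.conf from the master and point kubectl at it
scp root@m:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
kubectl get nodes    # kubectl should now work from the worker as well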