Node configuration
Prepare at least two VMs: one master and the rest as nodes. OS: CentOS 7.3 minimal install. Configure each node's IP.
Hostname  IP             Spec
master1   192.168.1.181  1C 4G
node1     192.168.1.182  2C 6G
On master1 and node1, set the hostname and time zone:
timedatectl set-timezone Asia/Shanghai   # run on all nodes
hostnamectl set-hostname master1         # run on master1 only
hostnamectl set-hostname node1           # run on node1 only
On master1 and node1, set up host resolution in /etc/hosts:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.181 master1
192.168.1.182 node1
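Optionally, verify that the names resolve from each machine (plain ping, nothing cluster-specific):
ping -c 1 master1
ping -c 1 node1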
On master1 and node1, disable SELinux and firewalld:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
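A few optional checks before moving on (standard commands; the bridge sysctl file only exists once the br_netfilter module is loaded, which Docker normally takes care of):
getenforce                       # expect Permissive now, Disabled after a reboot
systemctl is-active firewalld    # expect inactive
cat /proc/sys/net/bridge/bridge-nf-call-iptables   # expect 1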
Install Docker
Online installation:
yum install -y docker   # the CentOS repo installs version 1.13.1
Change the Docker registry mirror, then test it:
vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"]
}
systemctl stop docker
systemctl start docker
Test that it works:
docker pull hello-world
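Two optional sanity checks, assuming the restart above picked up daemon.json:
docker info | grep -A1 'Registry Mirrors'   # should list https://registry.docker-cn.com
docker run --rm hello-world                 # prints "Hello from Docker!" on success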
To build your own image registry, see https://www.jianshu.com/p/fef890c4d1c2
K8S: install kubeadm, kubectl, kubelet
kubeadm is the cluster deployment tool.
kubectl is the cluster management tool; it manages the cluster through commands.
kubelet is the per-node agent that manages Docker containers for the cluster.
Install all three on every node:
yum install kubelet-1.12.2-0 kubeadm-1.12.2-0 kubectl-1.12.2-0
Possible problem: "No package kubeadm-1.12.2-0 available. No package kubelet-1.12.2-0 available. No package kubectl-1.12.2-0 available."
Solution: configure a Kubernetes yum repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
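With the repo in place, you can optionally confirm the pinned versions are visible and enable kubelet at boot (not spelled out above, but standard practice):
yum makecache fast
yum list kubelet kubeadm kubectl --showduplicates | grep 1.12.2
systemctl enable kubelet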
Configure kubelet
DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
echo $DOCKER_CGROUPS
Run docker info and note the Cgroup Driver line:
Cgroup Driver: cgroupfs
The cgroup drivers used by docker and kubelet must match. If docker is not using cgroupfs, change the following setting in
/usr/lib/systemd/system/docker.service to:
--exec-opt native.cgroupdriver=cgroupfs \
systemctl daemon-reload
systemctl restart docker
On all Kubernetes nodes, make kubelet use cgroupfs to match dockerd; otherwise kubelet will fail to start.
By default kubelet uses cgroup-driver=systemd; change it to cgroup-driver=cgroupfs. (First check the cgroup-driver value in the 10-kubeadm.conf file below; if it already matches, no change is needed.)
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Reload systemd and restart the kubelet service:
systemctl daemon-reload && systemctl restart kubelet
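A minimal sketch that automates the whole check-and-sync above, assuming the same 10-kubeadm.conf drop-in path:
DOCKER_CGROUPS=$(docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/{print $2}')
echo "docker cgroup driver: ${DOCKER_CGROUPS}"
# rewrite the kubelet drop-in so both drivers agree, then restart
sed -i "s/cgroup-driver=systemd/cgroup-driver=${DOCKER_CGROUPS}/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload && systemctl restart kubelet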
Disable swap and set the bridge iptables sysctls
swapoff -a
vi /etc/fstab   # comment out the swap line
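Equivalently, a one-liner to comment out the swap entry (a common idiom; double-check /etc/fstab afterwards):
sed -i '/swap/ s/^#*/#/' /etc/fstab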
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Import the images K8S needs
Run on both master1 and node1:
docker load -i k8s.gcr.io.basic.tar.gz
The archive contains the following images:
quay.io/coreos/flannel v0.11.0-amd64
k8s.gcr.io/kube-proxy v1.12.2
k8s.gcr.io/kube-apiserver v1.12.2
k8s.gcr.io/kube-controller-manager v1.12.2
k8s.gcr.io/kube-scheduler v1.12.2
k8s.gcr.io/etcd 3.2.24
k8s.gcr.io/coredns 1.2.2
k8s.gcr.io/kubernetes-dashboard-amd64 v1.8.3
quay.io/coreos/flannel v0.10.0-amd64
k8s.gcr.io/pause 3.1
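Optionally confirm the images were loaded:
docker images | grep -E 'k8s.gcr.io|quay.io/coreos'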
Image bundle for version 1.12.2:
Link: https://pan.baidu.com/s/173ii8FegfRN_VUt-f8tZQw  Password: h027
Deploy the master node with kubeadm init
Run this only on master1. This uses the simplest, fastest deployment option: the etcd, apiserver, controller-manager and scheduler services all run as containers on the master.
etcd is a single instance without certificates; its data is mounted at /var/lib/etcd on the master node.
Run the command (1.12.2 version):
kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16
(For 1.10.1: kubeadm init --kubernetes-version=v1.10.1 --pod-network-cidr=10.244.0.0/16)
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
Remember the join command; you will run it later on each node that joins:
kubeadm join 192.168.1.181:6443 --token wct45y.tq23fogetd7rp3ck --discovery-token-ca-cert-hash sha256:c267e2423dba21fdf6fc9c07e3b3fa17884c4f24f0c03f2283a230c70b07772f
Run the commands from the init output to save the kubeconfig, on master1 and on every node that will join. (The nodes need this too; scp /etc/kubernetes/admin.conf to each remote node first.)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
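A quick check that kubectl can now reach the apiserver:
kubectl cluster-info
kubectl get componentstatuses   # scheduler, controller-manager and etcd should report Healthy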
Check node status on the master
At this point kubectl get node already shows the master node; it is NotReady because the network plugin has not been deployed yet.
[root@master1 kubernetes1.10]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady master 3m v1.10.1
List all pods with kubectl get pod --all-namespaces.
kube-dns also depends on the container network, so Pending is normal at this stage.
[root@master1 kubernetes1.10]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-master1 1/1 Running 0 3m
kube-system kube-apiserver-master1 1/1 Running 0 3m
kube-system kube-controller-manager-master1 1/1 Running 0 3m
kube-system kube-dns-86f4d74b45-5nrb5 0/3 Pending 0 4m
kube-system kube-proxy-ktxmb 1/1 Running 0 4m
kube-system kube-scheduler-master1 1/1 Running 0 3m
Configure the KUBECONFIG variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG   # should print /etc/kubernetes/admin.conf
Deploy the flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Or download the manifest first and apply it locally:
kubectl apply -f kube-flannel.yml
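Optionally watch the flannel and DNS pods come up; the master flips to Ready once the CNI is running:
kubectl -n kube-system get pods -o wide
kubectl get nodes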
Possible problems:
If DNS does not come up:
[root@markpain ~]# cat /etc/resolv.conf   # check this file's contents
# add:
nameserver 8.8.8.8   # Google DNS
nameserver 8.8.4.4   # Google backup DNS
[ERROR Port-10250]: Port 10250 is in use
#kubeadm reset
error: at least one taint update is required
#kubectl taint nodes --all node-role.kubernetes.io/master=:NoSchedule --overwrite
Join the nodes to the cluster
Run the join command
Use the join command generated earlier by kubeadm init. After it succeeds, go back to the master and check:
kubeadm join 192.168.1.181:6443 --token wct45y.tq23fogetd7rp3ck --discovery-token-ca-cert-hash sha256:c267e2423dba21fdf6fc9c07e3b3fa17884c4f24f0c03f2283a230c70b07772f
[root@master1 kubernetes1.10]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready master 31m v1.10.1
node1 Ready
At this point the cluster deployment is complete.
x509 error
Only needed if you hit this error; otherwise skip it.
It happens because the KUBECONFIG variable is missing on the master node:
[discovery] Failed to request cluster info, will try again: [Get https://192.168.1.181:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
Run on the master node:
export KUBECONFIG=$HOME/.kube/config
Then on the node, run kubeadm reset and join again:
kubeadm reset
kubeadm join xxx ...
Possible problem:
1. [preflight] Some fatal errors occurred:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
Solution:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
Forgot the join command: how to add a node
If the node already joined successfully, skip this step.
Use case: you forgot to save the join command generated by kubeadm init above; a node can still be added as follows.
First, get a token on the master. If kubeadm token list shows nothing, create one with kubeadm token create, and record the token value:
[root@master1 kubernetes1.10]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
wct45y.tq23fogetd7rp3ck 22h 2018-04-26T21:38:57+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
On the node, run the following with your own token substituted:
kubeadm join --token wct45y.tq23fogetd7rp3ck 192.168.1.181:6443 --discovery-token-unsafe-skip-ca-verification
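On kubeadm 1.12 you can also regenerate a complete join line in one step, and recompute the CA cert hash instead of skipping verification (both are standard upstream commands):
kubeadm token create --print-join-command
# recompute the --discovery-token-ca-cert-hash value from the cluster CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'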
Deploy the K8S UI: dashboard
The dashboard is the official K8S management UI for viewing cluster information and deploying applications. Its display language is auto-detected from the browser.
The official dashboard defaults to HTTPS, which Chrome refuses to open. This deployment was modified for convenience to use HTTP, which Chrome accesses normally.
Three yaml files need to be applied:
kubectl apply -f kubernetes-dashboard-http.yaml
kubectl apply -f admin-role.yaml
kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
[root@master1 kubernetes1.10]# kubectl apply -f kubernetes-dashboard-http.yaml
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@master1 kubernetes1.10]# kubectl apply -f admin-role.yaml
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
[root@master1 kubernetes1.10]# kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io "dashboard-admin" created
Once everything is created, the UI is reachable at http://<any-node-IP>:31000
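To confirm the Service is actually exposed (names and namespace assumed to match the yaml above; adjust if yours differ):
kubectl -n kube-system get svc kubernetes-dashboard   # expect a NodePort mapping to 31000
kubectl -n kube-system get pods | grep kubernetes-dashboard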
Possible problem:
dial tcp 10.96.0.1:443: getsockopt: no route to host. When deploying with Minikube, the kube-dns service may restart repeatedly with this error. It is most likely caused by broken iptables rules, which the following commands clear:
# systemctl stop kubelet && systemctl stop docker && iptables --flush && iptables -t nat --flush && systemctl start kubelet && systemctl start docker