Hostname | IP | Notes
---------|----|------
k8s_master | 192.168.234.130 | Master & etcd
k8s_node1 | 192.168.234.131 | Node1
k8s_node2 | 192.168.234.132 | Node2
Kubernetes is a large-scale container cluster management system open-sourced by Google. This guide uses the Kubernetes components shipped with CentOS 7, the distributed key-value store etcd, and flannel to give Docker containers cross-host access to one another.
(The cluster requires consistent NTP time across nodes; these are Alibaba Cloud machines, which have clock synchronization enabled by default.)
Step 1: Install the components
Master node:
systemctl stop firewalld && systemctl disable firewalld
yum install -y kubernetes etcd docker flannel registry
Nodes:
systemctl stop firewalld && systemctl disable firewalld
yum install -y kubernetes docker etcd flannel
Step 2: Configuration
Node | Running services
-----|-----------------
Master | etcd kube-apiserver kube-controller-manager kube-scheduler docker flanneld registry
Node | etcd flanneld docker kube-proxy kubelet
0. Preparation
Set the hostname and /etc/hosts on both the master and the node machines, and disable the firewall (a quick name-resolution check follows the hosts entries below).
hostnamectl set-hostname k8s_master (on the nodes, set k8s_node1 / k8s_node2 accordingly)
vi /etc/hosts
192.168.234.130 registry
192.168.234.130 etcd
192.168.234.130 k8s_master
192.168.234.131 k8s_node1
192.168.234.132 k8s_node2
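After editing /etc/hosts you can quickly confirm that the names resolve (a suggested sanity check, not part of the original steps):
ping -c 1 etcd
ping -c 1 k8s_node1
ping -c 1 k8s_node2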
1. Master configuration
1.1 Configure Docker
vim /etc/sysconfig/docker
Add OPTIONS='--insecure-registry registry:5000' so that images can be pulled from the local registry.
1.2 Configure etcd
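For reference, a minimal sketch of how the edited line might look, assuming you keep whatever flags your installation already had and only append the registry option; restart Docker afterwards to apply it:
OPTIONS='--selinux-enabled --log-driver=journald --insecure-registry registry:5000'
systemctl restart docker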
vi /etc/etcd/etcd.conf
#[member]
ETCD_NAME="master"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#[Clustering]
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
Test:
systemctl start etcd && systemctl enable etcd
etcdctl set testdir/testkey 0
etcdctl get testdir/testkey (returns 0)
etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
1.3 Configure the apiserver
vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
1.4 Configure /etc/kubernetes/config
vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://k8s_master:8080"
Start the services:
systemctl enable kube-apiserver && systemctl start kube-apiserver
systemctl enable kube-controller-manager && systemctl start kube-controller-manager
systemctl enable kube-scheduler && systemctl start kube-scheduler
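A suggested quick check that the master components came up (not part of the original steps); the built-in kubernetes service taking 10.254.0.1, the first address of the service range configured above, confirms that --service-cluster-ip-range took effect:
curl http://127.0.0.1:8080/healthz (the apiserver should answer "ok")
kubectl -s http://127.0.0.1:8080 get componentstatuses (scheduler, controller-manager and etcd should all be Healthy)
kubectl -s http://127.0.0.1:8080 get svc kubernetes (CLUSTER-IP should be 10.254.0.1)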
2. Node (slave) configuration
2.1 Configure Docker
vim /etc/sysconfig/docker
Add OPTIONS='--insecure-registry registry:5000' so that images can be pulled from the local registry.
2.2 Configure etcd
vi /etc/etcd/etcd.conf
#[member]
ETCD_NAME="master"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#[Clustering]
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
2.3 Configure the kubelet
vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=k8s_node1" (use this node's own hostname; a k8s_node2 example follows this block)
KUBELET_API_SERVER="--api-servers=http://k8s_master:8080" (the master's hostname or IP)
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
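On k8s_node2 the kubelet file is identical except for the hostname override, e.g.:
KUBELET_HOSTNAME="--hostname-override=k8s_node2"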
2.4 Configure /etc/kubernetes/config
vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://k8s_master:8080"
Start the services:
systemctl enable kubelet && systemctl start kubelet
systemctl enable kube-proxy && systemctl start kube-proxy
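A suggested check on each node that both services are running (not part of the original steps):
systemctl status kubelet kube-proxy
journalctl -u kubelet -n 20 (inspect the kubelet log if the node later fails to register with the master)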
3. Check the cluster status (can be run from any node):
kubectl -s http://k8s_master:8080 get node
NAME STATUS AGE
k8s_node1 Ready 1m
k8s_node2 Ready 1m
The cluster is now set up.
4. Create the network (flannel configuration)
Change the configuration on both the master and the node machines:
vi /etc/sysconfig/flanneld
FLANNEL_ETCD="http://etcd:2379"
FLANNEL_ETCD_KEY="/k8s/network"
Set the etcd key that flanneld reads.
Use etcdctl set to modify the value and etcdctl get to query it; whether you create or modify the key, its path must match FLANNEL_ETCD_KEY.
Add the network:
systemctl enable etcd.service
systemctl start etcd.service
etcdctl mk /k8s/network/config '{"Network":"10.254.0.0/16"}' (creates the key; the CIDR must be consistent with the apiserver configuration)
5. Once the network is up, restart all services on the master and the nodes
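To confirm the key was written where flanneld will look for it (a suggested check):
etcdctl get /k8s/network/config
{"Network":"10.254.0.0/16"}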
master:
systemctl enable flanneld && systemctl start flanneld
systemctl restart docker
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
node:
systemctl enable flanneld && systemctl start flanneld
systemctl restart docker
systemctl restart kubelet
systemctl restart kube-proxy
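A suggested way to confirm the overlay network on each host (interface names assume this flannel package's default udp backend):
cat /run/flannel/subnet.env (the subnet flanneld leased for this host, inside 10.254.0.0/16)
ip addr show flannel0 (the flannel tunnel interface)
ip addr show docker0 (after the docker restart, docker0 should sit inside the leased subnet)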
Check that all nodes are healthy:
kubectl -s http://k8s_master:8080 get nodes
kubectl get nodes
Visit http://kube-apiserver:port
http://192.168.234.130:8080/ lists all API URLs
http://192.168.234.130:8080/healthz/ping checks the health status
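The same information is available over the REST API, for example a raw view of the registered nodes (a suggested check):
curl http://192.168.234.130:8080/api/v1/nodes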
wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
kubectl delete -f kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
kubectl get namespace
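The dashboard objects are created in the kube-system namespace, so a suggested way to verify the deployment (the exact pod name will differ) is:
kubectl get pods --namespace=kube-system
kubectl describe pod <dashboard-pod-name> --namespace=kube-system (useful if the pod is stuck pulling its image)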