| IP | Role | Count |
| --- | --- | --- |
| 192.168.0.30/10.10.0.11 | kubernetes master, etcd | 1 |
| 192.168.0.31/10.10.0.10 | kubernetes node, etcd | 1 |
| 192.168.0.32/10.10.0.12 | kubernetes node, etcd | 1 |
On the master (192.168.0.30):

yum install kubernetes-master etcd -y

On the nodes (192.168.0.31, 192.168.0.32):

yum install kubernetes-node etcd -y
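As a quick sanity check (not part of the original steps), confirm the packages and binaries are in place:

# rpm -qa | grep -E 'kubernetes|etcd'
# etcd --version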
etcd configuration on 192.168.0.30:

# [member]
ETCD_NAME=192.168.0.30
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.10.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.0.11:4001,http://127.0.0.1:4001"
# [cluster]
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.0.11:2380"
ETCD_INITIAL_CLUSTER="192.168.0.30=http://10.10.0.11:2380,192.168.0.31=http://10.10.0.10:2380,192.168.0.32=http://10.10.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.0.11:4001"
A brief explanation of the URL settings:

- ETCD_LISTEN_PEER_URLS: the address this member listens on for peer traffic (port 2380).
- ETCD_LISTEN_CLIENT_URLS: the address this member listens on for client requests (port 4001).
- ETCD_INITIAL_ADVERTISE_PEER_URLS: the peer address this member advertises to the rest of the cluster.
- ETCD_ADVERTISE_CLIENT_URLS: the client address this member advertises.
- ETCD_INITIAL_CLUSTER: the initial cluster membership, as name=peer-URL pairs; it must be identical on all three members.
etcd configuration on 192.168.0.31 (note that ETCD_ADVERTISE_CLIENT_URLS must advertise this member's own address, 10.10.0.10):

# [member]
ETCD_NAME=192.168.0.31
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.10.0.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.0.10:4001,http://127.0.0.1:4001"
# [cluster]
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.0.10:2380"
ETCD_INITIAL_CLUSTER="192.168.0.30=http://10.10.0.11:2380,192.168.0.31=http://10.10.0.10:2380,192.168.0.32=http://10.10.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.0.10:4001"
etcd configuration on 192.168.0.32:

# [member]
ETCD_NAME=192.168.0.32
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.10.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.0.12:4001,http://127.0.0.1:4001"
# [cluster]
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER="192.168.0.30=http://10.10.0.11:2380,192.168.0.31=http://10.10.0.10:2380,192.168.0.32=http://10.10.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.0.12:4001"
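etcd must now be started on all three machines before continuing (a step the text skips; the unit name below assumes the stock CentOS etcd package). Cluster health can then be checked from any member with the standard etcdctl v2 subcommands, where -C points etcdctl at a client URL:

# systemctl enable etcd
# systemctl start etcd
# etcdctl -C http://10.10.0.11:4001 cluster-health
# etcdctl -C http://10.10.0.11:4001 member list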
/etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://10.10.0.11:4001,http://10.10.0.10:4001,http://10.10.0.12:4001"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.42.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
/etc/kubernetes/controller-manager
KUBELET_ADDRESS="--machines=10.10.0.11,10.10.0.12" # Add your own! KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/var/run/kubernetes/apiserver.key --root-ca-file=/var/run/kubernetes/apiserver.crt"
mkdir -p /var/run/kubernetes
chown -R kube.kube /var/run/kubernetes
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
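To confirm the master components came up, the apiserver's /healthz endpoint and kubectl can be queried; a quick check, assuming it is run on the master itself:

# curl http://127.0.0.1:8080/healthz
# kubectl get componentstatuses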
Configure nodes 192.168.0.31 and 192.168.0.32 as follows. Only one node's configuration is listed here.
/etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0" # The port for the info server to serve on KUBELET_PORT="--port=10250" # You may leave this blank to use the actual hostname KUBELET_HOSTNAME="--hostname_override=192.168.0.31" # location of the api-server KUBELET_API_SERVER="--api_servers=http://10.10.0.11:8080" # Add your own! KUBELET_ARGS="--pod-infra-container-image=docker.io/kubernetes/pause:latest"
/etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.10.0.11:8080"
/etc/sysconfig/docker
OPTIONS='--selinux-enabled=false --log-level=warning'
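After editing /etc/sysconfig/docker, restart docker on each node so the new OPTIONS take effect:

# systemctl restart docker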
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kubelet
systemctl enable kube-proxy
Verify on the master that both nodes have registered:

# kubectl get node
NAME           LABELS                                  STATUS
192.168.0.31   kubernetes.io/hostname=192.168.0.31     Ready
192.168.0.32   kubernetes.io/hostname=192.168.0.32     Ready
# kubectl run my-nginx --image=nginx --replicas=1 --port=80
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR       REPLICAS
my-nginx     my-nginx       nginx      run=my-nginx   1
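kubectl run only creates the replication controller; the pod itself appears once the node has pulled the nginx image. Progress can be followed with:

# kubectl get rc
# kubectl get pods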
The network layer is configured into the kubernetes cluster as a plugin; flannel is chosen here.
Install flannel on nodes 192.168.0.31 and 192.168.0.32:
# yum install flannel -y
# cat /etc/sysconfig/flannel
FLANNEL_ETCD="http://10.10.0.11:4001,http://10.10.0.10:4001,http://10.10.0.12:4001"
FLANNEL_ETCD_KEY="/coreos.com/network"
# etcdctl mk /coreos.com/network/config '{"Network": "172.16.0.0/16"}'
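This key only needs to be written once, from any host that can reach the etcd cluster; its path matches FLANNEL_ETCD_KEY above, and the 172.16.0.0/16 container network must not overlap the 172.42.0.0/16 service range. Read it back to verify:

# etcdctl get /coreos.com/network/config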
Reset the docker0 bridge configuration.
Delete the docker0 bridge that docker creates by default at startup. When flannel starts it obtains a network address and configures docker0's IP address as the gateway for that network; if docker0 already has an IP address configured at that point, flannel will fail to start.
# ip link del docker0
iptables needs to be enabled here: flanneld relies on the NAT function of iptables to let containers communicate across nodes. I personally prefer iptables, so I use it in place of the newer firewalld management service.
# yum install iptables-services -y
# systemctl disable firewalld
# systemctl stop firewalld
# iptables -F
# iptables-save
# systemctl enable iptables
# systemctl start iptables
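As an optional diagnostic (not a required step), the NAT rules that docker and flannel program can be inspected once containers are running:

# iptables -t nat -L POSTROUTING -n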
# systemctl enable flanneld
# systemctl start flanneld
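Once flanneld is running it writes its subnet lease to /run/flannel/subnet.env (the path used by the CentOS flannel package; treat it as an assumption for other builds). docker must then be restarted so that docker0 is recreated inside the flannel subnet:

# cat /run/flannel/subnet.env
# systemctl restart docker
# ip addr show docker0

A container on 192.168.0.31 should then be able to ping a container on 192.168.0.32 across the flannel overlay.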
Original article: http://www.zoues.com/index.php/2016/02/27/dcos-k8s1/