For local testing, everything can be installed with yum.
master (192.168.33.13): runs etcd, kube-apiserver, kube-scheduler and kube-controller-manager
node (192.168.33.14): runs kube-proxy and kubelet (it reuses the etcd on the master, so no separate etcd service is started on the node)
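A hedged sketch of the install and enable steps, assuming the stock CentOS 7 package names (etcd, kubernetes-master, kubernetes-node, flannel); adjust to whatever your repositories actually provide:

```shell
# On the master (192.168.33.13): etcd plus the control-plane components
yum install -y etcd kubernetes-master flannel
systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager

# On the node (192.168.33.14): the worker components only
yum install -y kubernetes-node flannel
systemctl enable kubelet kube-proxy docker
```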
Master configuration files:
/etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=192.168.33.13"
# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.33.13:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.1.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
# Add your own!
KUBE_API_ARGS=""
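After writing /etc/kubernetes/apiserver, a quick smoke test against the insecure port (a sketch, assuming the services are installed and the address is reachable as configured above):

```shell
systemctl restart kube-apiserver kube-scheduler kube-controller-manager
# The insecure port speaks plain HTTP; /version should return a small JSON document
curl http://192.168.33.13:8080/version
```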
/etc/kubernetes/scheduler and /etc/kubernetes/controller-manager can be left with their defaults.
/etc/etcd/etcd.conf: note that ETCD_LISTEN_CLIENT_URLS must list two URLs; the 127.0.0.1 entry cannot be omitted. Every other URL setting should be changed to the local IP.
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://192.168.33.13:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.33.13:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.33.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.33.13:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
ETCD_INITIAL_CLUSTER="default=http://192.168.33.13:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
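With etcd.conf in place, both client URLs can be checked (a sketch, using the etcd v2 etcdctl syntax that matches this configuration):

```shell
systemctl restart etcd
# etcdctl defaults to 127.0.0.1:2379, which only works because of the
# second entry in ETCD_LISTEN_CLIENT_URLS
etcdctl cluster-health
# The advertised URL must answer too, since flanneld and the apiserver use it
curl http://192.168.33.13:2379/version
```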
Node configuration files:
/etc/sysconfig/docker: note that SELinux support must be disabled:
OPTIONS="--selinux-enabled=false --log-driver=journald --signature-verification=false --registry-mirror=https://olzwzeg2.mirror.aliyuncs.com --insecure-registry gcr.io"
/etc/etcd/etcd.conf: copy the master's file as-is; no changes are needed, because the node uses the etcd service running on the master and does not start its own etcd.
/etc/kubernetes/kubelet: note the KUBELET_POD_INFRA_CONTAINER and KUBELET_ARGS settings:
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.33.14"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.33.14"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.33.13:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
#KUBELET_ARGS=""
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause"
/etc/kubernetes/proxy needs no changes.
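With the kubelet and proxy files in place, the node services can be started and the registration checked from the master (a sketch; the -s flag points kubectl at the insecure apiserver port):

```shell
# On the node
systemctl restart kubelet kube-proxy

# On the master: the node should show up within a few seconds
kubectl -s http://192.168.33.13:8080 get nodes
```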
flanneld documentation: https://github.com/coreos/flannel/blob/master/Documentation/running.md
The flanneld configuration is identical on the master and the node.
master:
Configure flanneld as a systemd service by creating /usr/lib/systemd/system/flanneld.service.
Note: in a Vagrant environment you must tell flanneld which network interface to use via --iface, as explained in the flannel documentation.
[Unit]
Description=flanneld overlay address etcd agent
After=network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
ExecStart=/usr/bin/flanneld --iface enp0s8 --etcd-endpoints=${FLANNEL_ETCD} $FLANNEL_OPTIONS
[Install]
RequiredBy=docker.service
WantedBy=multi-user.target
Create the flanneld configuration file /etc/sysconfig/flanneld:
FLANNEL_ETCD="http://192.168.33.13:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"
Configure the docker service to pick up the CIDR that flanneld provides, in /usr/lib/systemd/system/docker.service.
Note the added EnvironmentFile=/run/flannel/subnet.env line and the --bip option.
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target rhel-push-plugin.socket registries.service
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer
[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
EnvironmentFile=/run/flannel/subnet.env
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
--default-runtime=docker-runc \
--exec-opt native.cgroupdriver=systemd \
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
--init-path=/usr/libexec/docker/docker-init-current \
--seccomp-profile=/etc/docker/seccomp.json \
--bip=${FLANNEL_SUBNET} \
$OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY \
$REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process
[Install]
WantedBy=multi-user.target
Run the following against etcd:
etcdctl set /coreos.com/network/config '{ "Network":"10.1.0.0/16" }'
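The key written here must match FLANNEL_ETCD_KEY (/coreos.com/network). A Backend section may optionally be included, and reading the key back confirms flanneld will find it at startup (a sketch; udp is flannel's default backend, so specifying it is redundant but explicit):

```shell
# Optional: spell out the backend explicitly
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16", "Backend": { "Type": "udp" } }'
# Verify the value flanneld will read
etcdctl get /coreos.com/network/config
```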
Because flannel takes over the docker0 bridge, stop docker first if it is already running.
Start flanneld:
systemctl start flanneld
./mk-docker-opts.sh -i    # this script is included in the flannel release download
source /run/flannel/subnet.env
ifconfig docker0 ${FLANNEL_SUBNET}
systemctl restart docker
If all went well, docker0 now carries an IP inside the flannel0 subnet (check with ip addr).
Configure flanneld on the node in exactly the same way; once it is up, pinging another node's docker0 IP should succeed.
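To illustrate what `source /run/flannel/subnet.env` provides to docker, here is a sketch with a fabricated subnet.env; the real file and its values are written by flanneld after it obtains a subnet lease:

```shell
# Simulated /run/flannel/subnet.env (illustrative values only)
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.42.1/24
FLANNEL_MTU=1472
EOF

# The docker unit's EnvironmentFile= line, and the manual "source" step,
# both expose these variables; --bip then consumes FLANNEL_SUBNET:
source /tmp/subnet.env
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
```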
Troubleshooting notes:
1. View a service's startup log: journalctl -xeu kube-apiserver
2. If kube-apiserver fails to start, check whether etcd is running; start it if not.
3. flanneld uses etcd as its datastore, so etcd must be started before flanneld.
4. flanneld overrides docker's network configuration: stop docker, start flanneld, then start docker, so that docker picks up flanneld's settings.
5. etcdctl cluster-health (or any etcdctl command) fails with: Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:2379: getsockopt: connection refused
6. image pull failed: authentication failure.
The registry.access.redhat.com pod-infrastructure image requires the Red Hat subscription (rhsm) certificates.
yum search rhsm
finds the relevant packages, but on CentOS 7, even after installing them, /etc/rhsm/ca/ is still an empty directory with no certificate files. The workaround is to pull an image that needs no certificates:
docker pull docker.io/kubernetes/pause
and configure KUBELET_ARGS in the kubelet file as shown above.
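For the dial error in item 5: nothing is answering on 127.0.0.1:2379, which is the endpoint etcdctl uses by default. Either etcd is not running, or ETCD_LISTEN_CLIENT_URLS is missing its 127.0.0.1 entry. A sketch of both checks:

```shell
systemctl status etcd   # is etcd actually running?
systemctl start etcd    # if not, start it
# Or point etcdctl at the advertised client URL instead of the default
etcdctl --endpoints http://192.168.33.13:2379 cluster-health
```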