Deploying a k8s Cluster on Alibaba Cloud

    There are plenty of articles online about how to deploy a k8s cluster, especially on the Kubernetes Chinese community site, which documents the deployment steps for every platform in detail. But following that guide turned out to be one pitfall after another: the very first package source returned 404 Not Found, to say nothing of the stilted translation and the version-mismatch problems.

     So I might as well write my own.

 

  1. Create the VMs

      The OS is CentOS 7.2. I won't go into detail here.

 

  2. Set up /etc/hosts

      In short, add hostname entries for both the master and minion hosts to the hosts file. An example is given below.
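For example, assuming the hostnames k8s-master and k8s-slave used later in this article (the IPs are placeholders and must be replaced with each host's eth0 IP), the entries on every host might look like this:

# /etc/hosts on both master and minion
x.x.x.x    k8s-master
y.y.y.y    k8s-slave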

 

  3. Install kubernetes and etcd on all cluster hosts

       Just use yum install. Installing kubernetes pulls in docker and the other dependencies as well. At the time of writing, the kubernetes version in the Alibaba Cloud repository is 1.5.2.
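As a rough sketch (assuming the default Alibaba Cloud CentOS repositories), the install on every host would be:

# docker comes in as a dependency of kubernetes; flannel is listed explicitly
# here in case it is not pulled in automatically
yum install -y kubernetes etcd flannel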

      Also note: the flannel network configuration must be written into etcd with etcdctl, otherwise flannel will not start properly:

      

[root@k8s-master home]# etcdctl set /flannel/network/config '{ "Network": "172.16.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'
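To confirm the key was written, it can be read back; the value returned is simply what was set above:

[root@k8s-master home]# etcdctl get /flannel/network/config
{ "Network": "172.16.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }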

 

 

  4. Configure the apiserver

      Configure it as the Chinese community guide describes (this configuration is ultimately not used, as you will see below):

      Edit /etc/kubernetes/apiserver

      Edit /etc/kubernetes/config

     Since this path leads nowhere, I will not paste the specific changes.

 

  5. Configure the master startup script

#!/bin/bash
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
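Saved as a file (the name start-k8s-master.sh is just an example), it can be made executable and run:

chmod +x start-k8s-master.sh
./start-k8s-master.sh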

   At this point, according to the Chinese community guide, everything should just start up. In fact it does not!!!

   The following errors appear:

   

Sep 29 17:06:15 debug010000002015 kube-apiserver: W0929 17:06:15.881473   21259 handlers.go:50] Authentication is disabled
Sep 29 17:06:15 debug010000002015 kube-apiserver: [restful] 2018/09/29 17:06:15 log.go:30: [restful/swagger] listing is available at https://172.16.7.93:6443/swaggerapi/
Sep 29 17:06:15 debug010000002015 kube-apiserver: [restful] 2018/09/29 17:06:15 log.go:30: [restful/swagger] https://172.16.7.93:6443/swaggerui/ is mapped to folder /swagger-ui/
Sep 29 17:06:15 debug010000002015 kube-apiserver: E0929 17:06:15.984071   21259 reflector.go:199] k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get http://127.0.0.1:18080/api/v1/resourcequotas?resourceVersion=0: dial tcp 127.0.0.1:18080: getsockopt: connection refused
Sep 29 17:06:15 debug010000002015 kube-apiserver: E0929 17:06:15.984217   21259 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.Namespace: Get http://127.0.0.1:18080/api/v1/namespaces?resourceVersion=0: dial tcp 127.0.0.1:18080: getsockopt: connection refused
Sep 29 17:06:15 debug010000002015 kube-apiserver: E0929 17:06:15.987986   21259 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.LimitRange: Get http://127.0.0.1:18080/api/v1/limitranges?resourceVersion=0: dial tcp 127.0.0.1:18080: getsockopt: connection refused
Sep 29 17:06:16 debug010000002015 kube-apiserver: F0929 17:06:16.058072   21259 genericapiserver.go:189] unable to load server certificate: open /var/run/kubernetes/apiserver.key: permission denied
Sep 29 17:06:16 debug010000002015 systemd: kube-apiserver.service: main process exited, code=exited, status=255/n/a
Sep 29 17:06:16 debug010000002015 systemd: Failed to start Kubernetes API Server.
Sep 29 17:06:16 debug010000002015 systemd: Unit kube-apiserver.service entered failed state.
Sep 29 17:06:16 debug010000002015 systemd: kube-apiserver.service failed.
Sep 29 17:06:16 debug010000002015 systemd: kube-apiserver.service holdoff time over, scheduling restart.

 Searching Google and Baidu turned up nothing.

 However, testing showed that starting kube-apiserver directly from the command line does work, so the workaround is to edit the systemd service files directly.
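A minimal sketch of such a manual test, using only a subset of the flags that appear in the unit file below (x.x.x.x is the local eth0 IP):

/usr/bin/kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 \
    --etcd_servers=http://x.x.x.x:2379 --service-cluster-ip-range=10.0.0.0/16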

 Edit the kube-apiserver.service unit file; the path is /lib/systemd/system/kube-apiserver.service

 The content is as follows:

[root@k8s-master home]# vi /lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
#ExecStart=/usr/bin/kube-apiserver \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBE_ETCD_SERVERS \
#           $KUBE_API_ADDRESS \
#           $KUBE_API_PORT \
#           $KUBELET_PORT \
#           $KUBE_ALLOW_PRIV \
#           $KUBE_SERVICE_ADDRESSES \
#           $KUBE_ADMISSION_CONTROL \
#           $KUBE_API_ARGS

ExecStart=/usr/bin/kube-apiserver --allow_privileged=true \
            --logtostderr=false \
            --v=6 \
            --log-dir=/var/log/k8s/kube-apiserver \
            --insecure-bind-address=0.0.0.0 \
            --insecure-port=8080 \
            --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota,ServiceAccount,AlwaysPullImages,SecurityContextDeny \
            --etcd_servers=http://x.x.x.x:2379 \
            --master-service-namespace=master \
            --secure-port=6443 \
            --bind-address=0.0.0.0 \
            --service-cluster-ip-range=10.0.0.0/16 \
            --max-requests-inflight=1000 \
            --storage-backend=etcd3 \
            --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
            --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
            --client-ca-file=/etc/kubernetes/pki/ca.pem \
            --service-account-key-file=/etc/kubernetes/pki/ca-key.pem
KillMode=control-group
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

 

Here, x.x.x.x in --etcd_servers=http://x.x.x.x:2379 is this host's eth0 IP and must be replaced.

             The TLS files referenced above have to be generated yourself with openssl, or you can fall back to insecure mode.
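A rough openssl sketch for self-signing those files (the CN values are assumptions; a real deployment would also need subjectAltName entries for the apiserver IPs and the service IP, e.g. 10.0.0.1):

mkdir -p /etc/kubernetes/pki && cd /etc/kubernetes/pki
# self-signed CA
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 3650 -out ca.pem -subj "/CN=kubernetes-ca"
# apiserver certificate signed by that CA
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver"
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -days 3650 -out apiserver.pem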

In the etcd configuration file, /etc/etcd/etcd.conf, change the following entries from listening on the local loopback to listening on 0.0.0.0:

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
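After changing these, restart etcd and check that it is reachable on the eth0 address (x.x.x.x is a placeholder):

systemctl restart etcd
etcdctl --endpoints=http://x.x.x.x:2379 cluster-health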

Edit the flanneld.service file

[root@k8s-master home]# vi /lib/systemd/system/flanneld.service 

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld -etcd-endpoints=http://x.x.x.x:2379 -etcd-prefix=/flannel/network -iface=eth0
#ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
#ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service

Here, x.x.x.x in -etcd-endpoints=http://x.x.x.x:2379 is this host's eth0 IP and must be replaced.

 

Edit the kube-controller-manager.service file

[root@k8s-master home]# vi /lib/systemd/system/kube-controller-manager.service 

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
#ExecStart=/usr/bin/kube-controller-manager \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBE_MASTER \
#           $KUBE_CONTROLLER_MANAGER_ARGS

ExecStart=/usr/bin/kube-controller-manager --logtostderr=false \
            --v=6 \
            --log-dir=/var/log/k8s/kube-controller-manager \
            --namespace-sync-period=5m0s \
            --node-monitor-grace-period=40s \
            --node-monitor-period=5s \
            --node-startup-grace-period=1m0s \
            --node-sync-period=10s \
            --pod-eviction-timeout=5m0s \
            --pvclaimbinder-sync-period=10s \
            --register-retry-count=20 \
            --kubeconfig=/etc/kubernetes/controller-manager.conf \
            --cluster-name=kubernetes \
            --service-cluster-ip-range=10.0.0.0/16 \
            --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
            --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
            --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
            --root-ca-file=/etc/kubernetes/pki/ca.pem

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

 

Edit the kube-scheduler.service file

[root@k8s-master home]# vi /lib/systemd/system/kube-scheduler.service 

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
#ExecStart=/usr/bin/kube-scheduler \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBE_MASTER \
#           $KUBE_SCHEDULER_ARGS

ExecStart=/usr/bin/kube-scheduler --logtostderr=false --v=6 --log-dir=/var/log/k8s/kube-scheduler --algorithm-provider=DefaultProvider --kubeconfig=/etc/kubernetes/scheduler.conf

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

 

Edit the kube-proxy.service file

[root@k8s-master home]# vi /lib/systemd/system/kube-proxy.service 

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
#ExecStart=/usr/bin/kube-proxy \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBE_MASTER \
#           $KUBE_PROXY_ARGS

ExecStart=/usr/bin/kube-proxy --master=http://x.x.x.x:8080 --hostname-override=k8s-master --proxy-mode=iptables -v=6 --logtostderr=false --log-dir=/var/log/k8s/kube-proxy
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Here, x.x.x.x in --master=http://x.x.x.x:8080 is this host's eth0 IP and must be replaced.

Edit the kubelet.service file

[root@k8s-master home]# vi /lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
#ExecStart=/usr/bin/kubelet \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBELET_API_SERVER \
#           $KUBELET_ADDRESS \
#           $KUBELET_PORT \
#           $KUBELET_HOSTNAME \
#           $KUBE_ALLOW_PRIV \
#           $KUBELET_POD_INFRA_CONTAINER \
#           $KUBELET_ARGS

ExecStart=/usr/bin/kubelet --allow-privileged=true \
        --logtostderr=false \
        --v=6 \
        --log-dir=/var/log/k8s/kubelet \
        --address=x.x.x.x \
        --cluster-dns=10.0.1.10 \
        --hostname-override=k8s-master \
        --cluster-domain=cluster.local \
        --kubeconfig=/etc/kubernetes/kubelet.conf \
        --pod-manifest-path=/etc/kubernetes/manifest \
        --authorization-mode=AlwaysAllow \
        --fail-swap-on=false \
        --cgroup-driver=systemd \
        --pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0

Restart=on-failure

[Install]
WantedBy=multi-user.target

Here, x.x.x.x in --address is this host's eth0 IP and must be replaced.

The pause image registry.aliyuncs.com/archon/pause-amd64:3.0 comes from https://segmentfault.com/q/1010000008763165/a-1020000008824481
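Before re-running the startup script, it may help to pre-pull that pause image and to reload systemd so that the edited unit files take effect:

docker pull registry.aliyuncs.com/archon/pause-amd64:3.0
systemctl daemon-reload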

 

  Once this is done, run the startup script again; this time every component starts normally.


  6. Minion configuration files

      The paths are /etc/kubernetes/kubelet and /etc/kubernetes/config.

   The config file contents are as follows:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://k8s-master:4001"


 The kubelet file is as follows:


###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-slave"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Verify the component status:

[root@k8s-slave home]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}  

 

  7. Configure the minion startup script

Edit the kube-proxy.service file

[root@k8s-slave home]# vi /lib/systemd/system/kube-proxy.service 

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
#ExecStart=/usr/bin/kube-proxy \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBE_MASTER \
#           $KUBE_PROXY_ARGS

ExecStart=/usr/bin/kube-proxy --master=http://x.x.x.x:8080 --hostname-override=k8s-slave --proxy-mode=iptables -v=6 --logtostderr=false --log-dir=/var/log/k8s/kube-proxy

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Here, x.x.x.x in --master=http://x.x.x.x:8080 is the master's eth0 IP and must be replaced.

Edit the kubelet.service file

[root@k8s-slave home]# vi /lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
#ExecStart=/usr/bin/kubelet \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBELET_API_SERVER \
#           $KUBELET_ADDRESS \
#           $KUBELET_PORT \
#           $KUBELET_HOSTNAME \
#           $KUBE_ALLOW_PRIV \
#           $KUBELET_POD_INFRA_CONTAINER \
#           $KUBELET_ARGS

ExecStart=/usr/bin/kubelet --allow-privileged=true \
        --logtostderr=false \
        --v=6 \
        --log-dir=/var/log/k8s/kubelet \
        --address=0.0.0.0 \
        --cluster-dns=10.0.1.10 \
        --hostname-override=k8s-slave \
        --cluster-domain=cluster.local \
        --kubeconfig=/etc/kubernetes/kubelet.conf \
        --pod-manifest-path=/etc/kubernetes/manifest \
        --authorization-mode=AlwaysAllow \
        --fail-swap-on=false \
        --cgroup-driver=systemd \
        --pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0

Restart=on-failure

[Install]
WantedBy=multi-user.target

Append the following to the end of /etc/profile:

export KUBERNETES_MASTER=http://x.x.x.x:8080

Here, x.x.x.x is the master's eth0 IP and must be replaced.
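To make the variable take effect in the current shell without logging in again:

source /etc/profile
echo $KUBERNETES_MASTER    # should print http://x.x.x.x:8080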

 

The minion startup script is the counterpart of the master's:

#!/bin/bash
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done

   Run the startup script; all services now start normally.

   Verify the node status:

[root@k8s-slave home]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     <none>    3h        v1.9.0
k8s-slave    Ready     <none>    2h        v1.9.0

 

   At this point the cluster deployment is complete.
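As a quick smoke test (the nginx image and replica count here are arbitrary), a deployment can be created and its pods inspected:

kubectl run nginx --image=nginx --replicas=2
kubectl get pods -o wide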
