Manually Building a Kubernetes 1.8 High-Availability Cluster (5): Node

一、Preparation

1、An etcd cluster is already running (set up in an earlier part of this series)

2、The node components are deployed on Node2 and Node3; all of the operations below are performed on Node3. Node2 only needs its kubelet configuration adjusted accordingly (see the sketch after kubelet.env in section 三).

3、Create the directories and distribute the certificates (a sketch follows the list below)

/etc/kubernetes/manifests    owner kube, group kube-cert, mode 0700
/etc/kubernetes/ssl
/etc/nginx
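
A minimal sketch of this step, assuming the kube user and kube-cert group already exist from the earlier parts of the series, and that the certificates referenced by the configs in this part were generated on the CA host:

mkdir -p /etc/kubernetes/manifests /etc/kubernetes/ssl /etc/nginx
chown kube:kube-cert /etc/kubernetes/manifests
chmod 0700 /etc/kubernetes/manifests
# copy the node and kube-proxy certificates onto node3
# (file names as referenced by the configs below)
scp ca.pem node-node3.pem node-node3-key.pem \
    kube-proxy-node3.pem kube-proxy-node3-key.pem node3:/etc/kubernetes/ssl/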

二、Install kubelet

1、Copy the kubelet binary out of the hyperkube image

docker run --rm -v /usr/local/bin:/systembindir quay.io/coreos/hyperkube:v1.8.3_coreos.0 /bin/cp /hyperkube /systembindir/kubelet
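
As a quick check that the copy worked, the binary should report the version of the image used above (something like Kubernetes v1.8.3+coreos.0):

/usr/local/bin/kubelet --version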

三、Prepare the kubelet configuration files

1、/etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Wants=docker.socket

[Service]
EnvironmentFile=-/etc/kubernetes/kubelet.env
ExecStart=/usr/local/bin/kubelet \
                $KUBE_LOGTOSTDERR \
                $KUBE_LOG_LEVEL \
                $KUBELET_API_SERVER \
                $KUBELET_ADDRESS \
                $KUBELET_PORT \
                $KUBELET_HOSTNAME \
                $KUBE_ALLOW_PRIV \
                $KUBELET_ARGS \
                $DOCKER_SOCKET \
                $KUBELET_NETWORK_PLUGIN \
                $KUBELET_CLOUDPROVIDER
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
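
After writing (or changing) this unit file, systemd has to re-read its configuration before the service can be started in step 4:

systemctl daemon-reload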

2、/etc/kubernetes/kubelet.env

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=2"
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.1.126 --node-ip=192.168.1.126"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node3"

KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests \
--cadvisor-port=0 \
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \
--node-status-update-frequency=10s \
--docker-disable-shared-pid=True \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--tls-cert-file=/etc/kubernetes/ssl/node-node3.pem \
--tls-private-key-file=/etc/kubernetes/ssl/node-node3-key.pem \
--anonymous-auth=false \
--cgroup-driver=cgroupfs \
--cgroups-per-qos=True \
--fail-swap-on=False \
--enforce-node-allocatable="" \
--cluster-dns=10.233.0.3 \
--cluster-domain=cluster.local \
--resolv-conf=/etc/resolv.conf \
--kubeconfig=/etc/kubernetes/node-kubeconfig.yaml \
--require-kubeconfig \
--kube-reserved cpu=100m,memory=256M \
--node-labels=node-role.kubernetes.io/node=true \
--feature-gates=Initializers=true,PersistentLocalVolumes=False"
KUBELET_NETWORK_PLUGIN="--network-plugin=cni --network-plugin-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBELET_CLOUDPROVIDER=""

PATH=/usr/local/bin:/usr/local/sbin:/usr/sbin:/usr/bin:/sbin:/bin
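
As noted in the preparation section, Node2 only needs a different kubelet configuration. The lines below are the host-specific values in kubelet.env that have to change; the certificate file names are an assumption based on the node3 naming pattern, so substitute whatever your CA step actually produced:

KUBELET_ADDRESS="--address=<node2-ip> --node-ip=<node2-ip>"
KUBELET_HOSTNAME="--hostname-override=node2"
# ...and inside KUBELET_ARGS:
#   --tls-cert-file=/etc/kubernetes/ssl/node-node2.pem
#   --tls-private-key-file=/etc/kubernetes/ssl/node-node2-key.pem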

3、/etc/kubernetes/node-kubeconfig.yaml

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://localhost:6443
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/node-node3.pem
    client-key: /etc/kubernetes/ssl/node-node3-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-cluster.local
current-context: kubelet-cluster.local
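
If you prefer not to write this file by hand, an equivalent kubeconfig can be generated with kubectl (a sketch, assuming kubectl is available on the node):

kubectl config set-cluster local \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --server=https://localhost:6443 \
  --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml
kubectl config set-credentials kubelet \
  --client-certificate=/etc/kubernetes/ssl/node-node3.pem \
  --client-key=/etc/kubernetes/ssl/node-node3-key.pem \
  --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml
kubectl config set-context kubelet-cluster.local \
  --cluster=local --user=kubelet \
  --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml
kubectl config use-context kubelet-cluster.local \
  --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml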

4、Start kubelet

systemctl start kubelet && systemctl enable kubelet
[root@node1 ~]# ss -tnl
State       Recv-Q Send-Q                                                   Local Address:Port                                                                  Peer Address:Port              
LISTEN      0      128                                                      192.168.1.123:10250                                                                            *:*                  
LISTEN      0      128                                                      192.168.1.123:2379                                                                             *:*                  
LISTEN      0      128                                                          127.0.0.1:2379                                                                             *:*                  
LISTEN      0      128                                                      192.168.1.123:2380                                                                             *:*                  
LISTEN      0      128                                                      192.168.1.123:10255                                                                            *:*                  
LISTEN      0      128                                                                  *:22                                                                               *:*                  
LISTEN      0      100                                                          127.0.0.1:25                                                                               *:*                  
LISTEN      0      128                                                          127.0.0.1:10248                                                                            *:*                  
LISTEN      0      128                                                                 :::22                                                                              :::*                  
LISTEN      0      100                                                                ::1:25                                                                              :::*
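
If ports 10250/10255 do not show up, the kubelet logs are the first place to look:

journalctl -u kubelet --no-pager -n 50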

四、Configure kube-proxy and the local nginx proxy in front of the apiservers

1、/etc/kubernetes/kube-proxy-kubeconfig.yaml

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://localhost:6443
users:
- name: kube-proxy
  user:
    client-certificate: /etc/kubernetes/ssl/kube-proxy-node3.pem
    client-key: /etc/kubernetes/ssl/kube-proxy-node3-key.pem
contexts:
- context:
    cluster: local
    user: kube-proxy
  name: kube-proxy-cluster.local
current-context: kube-proxy-cluster.local

2、/etc/kubernetes/manifests/kube-proxy.manifest

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    k8s-app: kube-proxy
  annotations:
    kubespray.kube-proxy-cert/serial: "DBA85609D00B0FAE"
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirst
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.8.3_coreos.0
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 500m
        memory: 2000M
      requests:
        cpu: 150m
        memory: 64M
    command:
    - /hyperkube
    - proxy
    - --v=2
    - --kubeconfig=/etc/kubernetes/kube-proxy-kubeconfig.yaml
    - --bind-address=192.168.1.126
    - --cluster-cidr=10.233.64.0/18
    - --proxy-mode=iptables
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: "/etc/kubernetes/ssl"
      name: etc-kube-ssl
      readOnly: true
    - mountPath: "/etc/kubernetes/kube-proxy-kubeconfig.yaml"
      name: kubeconfig
      readOnly: true
    - mountPath: /var/run/dbus
      name: var-run-dbus
      readOnly: false
  volumes:
  - name: ssl-certs-host
    hostPath:
      path: /etc/pki/tls
  - name: etc-kube-ssl
    hostPath:
      path: "/etc/kubernetes/ssl"
  - name: kubeconfig
    hostPath:
      path: "/etc/kubernetes/kube-proxy-kubeconfig.yaml"
  - name: var-run-dbus
    hostPath:
      path: /var/run/dbus
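
Once kube-proxy is running and able to reach the apiserver (through the nginx proxy configured in the next two steps, since its kubeconfig points at localhost:6443), it programs rules in the iptables nat table; a quick sanity check:

iptables -t nat -nL KUBE-SERVICES | head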

3、/etc/nginx/nginx.conf 

error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        least_conn;
        server 192.168.1.121:6443;
        server 192.168.1.122:6443;
    }

    server {
        listen        127.0.0.1:6443;
        proxy_pass    kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}
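
Before the kubelet picks up the static pod below, the configuration can be syntax-checked with the same image (an optional sanity check; it assumes the nginx:1.11.4-alpine image includes the stream module, which the stream block above requires):

docker run --rm -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.11.4-alpine nginx -t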

4、/etc/kubernetes/manifests/nginx-proxy.yml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
  labels:
    k8s-app: kube-nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx-proxy
    image: nginx:1.11.4-alpine
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 300m
        memory: 512M
      requests:
        cpu: 25m
        memory: 32M
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
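
After the kubelet launches both static pods, the local proxy path to the apiservers can be checked from the node. Depending on the apiserver's anonymous-auth setting this returns either ok or an authentication error; either way it confirms that nginx is forwarding to an apiserver:

curl -k https://127.0.0.1:6443/healthz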

五、Verification

1、docker ps

[root@node3 ~]# docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
6597dd5a1ce1        00bc1e841a8f                               "nginx -g 'daemon ..."   9 seconds ago       Up 9 seconds                            k8s_nginx-proxy_nginx-proxy-node3_kube-system_768ecc5f8a5c2500c7b1d97c4351756d_0
fc03ac3c0887        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 10 seconds ago      Up 9 seconds                            k8s_POD_nginx-proxy-node3_kube-system_768ecc5f8a5c2500c7b1d97c4351756d_0
6d7ae2b0e831        bd322856b660                               "/hyperkube proxy ..."   5 minutes ago       Up 5 minutes                            k8s_kube-proxy_kube-proxy-node3_kube-system_6e62ad1c50c542344a458bc75eef02f7_0
2a8382b3d714        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 5 minutes ago       Up 5 minutes                            k8s_POD_kube-proxy-node3_kube-system_6e62ad1c50c542344a458bc75eef02f7_0
c3befa316f36        quay.io/coreos/etcd:v3.2.4                 "/usr/local/bin/etcd"    About an hour ago   Up About an hour                        etcd3

2、ss -tnl

[root@node3 ~]# ss -tnl
State       Recv-Q Send-Q                                                   Local Address:Port                                                                  Peer Address:Port              
LISTEN      0      128                                                      192.168.1.125:10250                                                                            *:*                  
LISTEN      0      128                                                          127.0.0.1:6443                                                                             *:*                  
LISTEN      0      128                                                      192.168.1.125:2379                                                                             *:*                  
LISTEN      0      128                                                          127.0.0.1:2379                                                                             *:*                  
LISTEN      0      128                                                      192.168.1.125:2380                                                                             *:*                  
LISTEN      0      128                                                      192.168.1.125:10255                                                                            *:*                  
LISTEN      0      128                                                                  *:22                                                                               *:*                  
LISTEN      0      100                                                          127.0.0.1:25                                                                               *:*                  
LISTEN      0      128                                                          127.0.0.1:10248                                                                            *:*                  
LISTEN      0      128                                                          127.0.0.1:10249                                                                            *:*                  
LISTEN      0      128                                                                 :::10256                                                                           :::*                  
LISTEN      0      128                                                                 :::22                                                                              :::*                  
LISTEN      0      100                                                                ::1:25

3、kubectl get node   # run this on the Master unless you have configured apiserver access from elsewhere; NotReady is expected at this stage, since no CNI network plugin has been deployed yet

[root@node1 ~]# kubectl get node
NAME      STATUS     ROLES         AGE       VERSION
node1     NotReady   master        1h        v1.8.3+coreos.0
node2     NotReady   master,node   1h        v1.8.3+coreos.0
node3     NotReady   node          1m        v1.8.3+coreos.0

