Kubernetes Learning Notes (18): Dashboard Authentication and Tiered Authorization

I have run into a lot of problems while learning k8s, so I set up a QQ group (153144292) for discussing devops, k8s, docker, and related topics.

Kubernetes dashboard authentication and tiered authorization

Authentication and authorization

API server request flow:

subject --> action --> object

Authentication

Token, TLS client certificates, user/password

Accounts: UserAccount, ServiceAccount

Authorization

RBAC

Role, RoleBinding

ClusterRole, ClusterRoleBinding

 

RoleBinding / ClusterRoleBinding:

    subject:

        user

        group

        serviceaccount

    roleRef:

        role / clusterrole

 

Role / ClusterRole:

object:

    resource group

    resource

    non-resource URL

 

action (verbs): get, list, watch, patch, delete, deletecollection, ...
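As a concrete illustration of how these pieces fit together (this is not part of the dashboard deployment; the names pod-reader, node-reader, alice and bob are made up for the example), a Role plus RoleBinding, and their cluster-wide counterparts, can be created like this:

kubectl create role pod-reader --verb=get,list,watch --resource=pods -n default
kubectl create rolebinding alice-pod-reader --role=pod-reader --user=alice -n default
# cluster-wide equivalents use ClusterRole / ClusterRoleBinding:
kubectl create clusterrole node-reader --verb=get,list,watch --resource=nodes
kubectl create clusterrolebinding bob-node-reader --clusterrole=node-reader --user=bob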

 

Applying roles in practice: the dashboard

Project: https://github.com/kubernetes/dashboard

The dashboard acts as an authentication proxy: every account and every authorization decision is a Kubernetes account and Kubernetes authorization; the dashboard has no users of its own.

I. Deployment

1. Deploy directly from the upstream manifest URL

[root@master manifests]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

secret/kubernetes-dashboard-certs created

serviceaccount/kubernetes-dashboard created

role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

deployment.apps/kubernetes-dashboard created

service/kubernetes-dashboard created

There is a pitfall here: k8s.gcr.io is blocked (from mainland China), so the image pull never succeeds.

 

2. Install from a downloaded YAML manifest and mirrored images

Since the method above is unreliable, pull the image from a public mirror and retag it as the name the manifest expects:

docker search kubernetes-dashboard-amd64

docker pull siriuszg/kubernetes-dashboard-amd64

docker tag siriuszg/kubernetes-dashboard-amd64:latest k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
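Note that with imagePullPolicy: IfNotPresent (added below) the image must already be present on every node that can schedule the dashboard pod, so pull and retag it on each node. A minimal sketch, assuming passwordless SSH and hypothetical node names node01 and node02:

for n in node01 node02; do
  ssh "$n" "docker pull siriuszg/kubernetes-dashboard-amd64:latest && docker tag siriuszg/kubernetes-dashboard-amd64:latest k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
done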

Then fetch the YAML manifest:

[root@master manifests]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

and tweak it:

[root@master manifests]# cat kubernetes-dashboard.yaml

# Copyright 2017 The Kubernetes Authors.

#

# Licensed under the Apache License, Version 2.0 (the "License");

# you may not use this file except in compliance with the License.

# You may obtain a copy of the License at

#

#     http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

 

# ------------------- Dashboard Secret ------------------- #

 

apiVersion: v1

kind: Secret

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard-certs

  namespace: kube-system

type: Opaque

 

---

# ------------------- Dashboard Service Account ------------------- #

 

apiVersion: v1

kind: ServiceAccount

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

 

---

# ------------------- Dashboard Role & Role Binding ------------------- #

 

kind: Role

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: kubernetes-dashboard-minimal

  namespace: kube-system

rules:

  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.

- apiGroups: [""]

  resources: ["secrets"]

  verbs: ["create"]

  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.

- apiGroups: [""]

  resources: ["configmaps"]

  verbs: ["create"]

  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.

- apiGroups: [""]

  resources: ["secrets"]

  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]

  verbs: ["get", "update", "delete"]

  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.

- apiGroups: [""]

  resources: ["configmaps"]

  resourceNames: ["kubernetes-dashboard-settings"]

  verbs: ["get", "update"]

  # Allow Dashboard to get metrics from heapster.

- apiGroups: [""]

  resources: ["services"]

  resourceNames: ["heapster"]

  verbs: ["proxy"]

- apiGroups: [""]

  resources: ["services/proxy"]

  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]

  verbs: ["get"]

 

---

apiVersion: rbac.authorization.k8s.io/v1

kind: RoleBinding

metadata:

  name: kubernetes-dashboard-minimal

  namespace: kube-system

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: Role

  name: kubernetes-dashboard-minimal

subjects:

- kind: ServiceAccount

  name: kubernetes-dashboard

  namespace: kube-system

 

---

# ------------------- Dashboard Deployment ------------------- #

 

kind: Deployment

apiVersion: apps/v1

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

spec:

  replicas: 1

  revisionHistoryLimit: 10

  selector:

    matchLabels:

      k8s-app: kubernetes-dashboard

  template:

    metadata:

      labels:

        k8s-app: kubernetes-dashboard

    spec:

      containers:

      - name: kubernetes-dashboard

        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

        imagePullPolicy: IfNotPresent       # added by me: use the locally tagged image instead of pulling

        ports:

        - containerPort: 8443

          protocol: TCP

        args:

          - --auto-generate-certificates

          # Uncomment the following line to manually specify Kubernetes API server Host

          # If not specified, Dashboard will attempt to auto discover the API server and connect

          # to it. Uncomment only if the default does not work.

          # - --apiserver-host=http://my-address:port

        volumeMounts:

        - name: kubernetes-dashboard-certs

          mountPath: /certs

          # Create on-disk volume to store exec logs

        - mountPath: /tmp

          name: tmp-volume

        livenessProbe:

          httpGet:

            scheme: HTTPS

            path: /

            port: 8443

          initialDelaySeconds: 30

          timeoutSeconds: 30

      volumes:

      - name: kubernetes-dashboard-certs

        secret:

          secretName: kubernetes-dashboard-certs

      - name: tmp-volume

        emptyDir: {}

      serviceAccountName: kubernetes-dashboard

      # Comment the following tolerations if Dashboard must not be deployed on master

      tolerations:

      - key: node-role.kubernetes.io/master

        effect: NoSchedule

 

---

# ------------------- Dashboard Service ------------------- #

 

kind: Service

apiVersion: v1

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

spec:

  ports:

    - port: 443

      targetPort: 8443

  selector:

    k8s-app: kubernetes-dashboard

[root@master manifests]# kubectl apply -f kubernetes-dashboard.yaml

secret/kubernetes-dashboard-certs created

serviceaccount/kubernetes-dashboard created

role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

deployment.apps/kubernetes-dashboard created

service/kubernetes-dashboard created

[root@master manifests]# kubectl get pods -n kube-system

NAME                                   READY   STATUS    RESTARTS   AGE

coredns-86c58d9df4-fltl9               1/1     Running   2          24d

coredns-86c58d9df4-kdm2h               1/1     Running   2          24d

etcd-master                            1/1     Running   3          24d

kube-apiserver-master                  1/1     Running   2          24d

kube-controller-manager-master         1/1     Running   2          24d

kube-flannel-ds-amd64-bf7s9            1/1     Running   0          37m

kube-flannel-ds-amd64-vm9vw            1/1     Running   1          37m

kube-flannel-ds-amd64-xqcg9            1/1     Running   0          36m

kube-proxy-6fp6m                       1/1     Running   1          24d

kube-proxy-wv6gg                       1/1     Running   0          24d

kube-proxy-zndjb                       1/1     Running   2          24d

kube-scheduler-master                  1/1     Running   2          24d

kubernetes-dashboard-57df4db6b-cwr44   1/1     Running   0          10s   <-- the dashboard pod is up

 

[root@master manifests]# kubectl get svc -n kube-system

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE

kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   24d

kubernetes-dashboard   ClusterIP   10.102.254.30   <none>        443/TCP         5m40s

 

Change the Service type to NodePort so the dashboard can be reached from outside the cluster:

[root@master manifests]# kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system

service/kubernetes-dashboard patched

Or, with the same effect, edit the Service in place:

[root@master manifests]# kubectl -n kube-system edit service kubernetes-dashboard

 

[root@master manifests]# kubectl get svc -n kube-system

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE

kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   24d

kubernetes-dashboard   NodePort    10.102.254.30   <none>        443:30376/TCP   12m

Checking the dashboard container's logs shows errors: the metric client cannot find a heapster Service.

2019/03/25 13:34:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

2019/03/25 13:34:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

2019/03/25 13:34:56 http: TLS handshake error from 10.244.2.1:53815: tls: first record does not look like a TLS handshake

2019/03/25 13:35:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

2019/03/25 13:35:06 http: TLS handshake error from 10.244.2.1:53816: EOF

2019/03/25 13:35:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

2019/03/25 13:36:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

2019/03/25 13:36:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

 

The dashboard pulls its metrics through a heapster Service, so deploy heapster together with InfluxDB and Grafana:

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml

 

[root@master manifests]# cat  heapster.yaml | grep image

        image: k8s.gcr.io/heapster-amd64:v1.5.4

        imagePullPolicy: IfNotPresent

[root@master manifests]# cat  grafana.yaml | grep image

        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4

[root@master manifests]# cat influxdb.yaml | grep image

        image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2

 

docker pull mirrorgooglecontainers/heapster-grafana-amd64:v5.0.4

docker pull mirrorgooglecontainers/heapster-amd64:v1.5.4

docker pull mirrorgooglecontainers/heapster-influxdb-amd64:v1.5.2

 

docker tag mirrorgooglecontainers/heapster-grafana-amd64:v5.0.4  k8s.gcr.io/heapster-grafana-amd64:v5.0.4

docker tag mirrorgooglecontainers/heapster-amd64:v1.5.4 k8s.gcr.io/heapster-amd64:v1.5.4

docker tag mirrorgooglecontainers/heapster-influxdb-amd64:v1.5.2 k8s.gcr.io/heapster-influxdb-amd64:v1.5.2

[root@master kubernetes-dashboard]# ls

grafana.yaml  heapster-rbac.yaml  heapster.yaml  influxdb.yaml

[root@master kubernetes-dashboard]# ll

total 16

-rw-r--r--. 1 root root 2276 Mar 25 09:38 grafana.yaml

-rw-r--r--. 1 root root  264 Mar 25 09:44 heapster-rbac.yaml

-rw-r--r--. 1 root root 1100 Mar 25 09:44 heapster.yaml

-rw-r--r--. 1 root root  960 Mar 25 09:44 influxdb.yaml

[root@master kubernetes-dashboard]# kubectl create -f .

deployment.extensions/monitoring-grafana created

service/monitoring-grafana created

clusterrolebinding.rbac.authorization.k8s.io/heapster created

serviceaccount/heapster created

deployment.extensions/heapster created

service/heapster created

deployment.extensions/monitoring-influxdb created

service/monitoring-influxdb created

[root@master kubernetes-dashboard]# kubectl get pods -n kube-system

NAME                                   READY   STATUS    RESTARTS   AGE

coredns-86c58d9df4-fltl9               1/1     Running   2          25d

coredns-86c58d9df4-kdm2h               1/1     Running   2          25d

etcd-master                            1/1     Running   3          25d

heapster-f64999bc-pcxgw                1/1     Running   0          6s

kube-apiserver-master                  1/1     Running   2          25d

kube-controller-manager-master         1/1     Running   2          25d

kube-flannel-ds-amd64-bf7s9            1/1     Running   0          23h

kube-flannel-ds-amd64-vm9vw            1/1     Running   1          23h

kube-flannel-ds-amd64-xqcg9            1/1     Running   0          23h

kube-proxy-6fp6m                       1/1     Running   1          25d

kube-proxy-wv6gg                       1/1     Running   0          25d

kube-proxy-zndjb                       1/1     Running   2          25d

kube-scheduler-master                  1/1     Running   2          25d

kubernetes-dashboard-57df4db6b-cr5qw   1/1     Running   0          20m

monitoring-grafana-564f579fd4-c22mj    1/1     Running   0          7s

monitoring-influxdb-8b7d57f5c-nbwhk    1/1     Running   0          7s

There are still problems, though.

 

Modify the YAML files.

# heapster.yaml

#### change the following part ####

Because the kubelet only serves over HTTPS, the heapster source URL needs the kubelet HTTPS port added:

        - --source=kubernetes:https://kubernetes.default

change it to

        - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
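If you prefer not to edit the file by hand, the same change can be applied with sed (a convenience only, assuming the stock --source line is still unchanged in heapster.yaml):

sed -i 's#--source=kubernetes:https://kubernetes.default$#--source=kubernetes:https://kubernetes.default?kubeletHttps=true\&kubeletPort=10250\&insecure=true#' heapster.yaml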

 

# heapster-rbac.yaml

#### change it to the following ####

Bind the ServiceAccount kube-system:heapster to the ClusterRole system:kubelet-api-admin, granting it permission to call the kubelet API:

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1beta1

metadata:

  name: heapster

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: system:heapster

subjects:

- kind: ServiceAccount

  name: heapster

  namespace: kube-system

---

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1beta1

metadata:

  name: heapster-kubelet-api

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: system:kubelet-api-admin

subjects:

- kind: ServiceAccount

  name: heapster

  namespace: kube-system

 

[root@master-47-35 heapster]# kubectl create -f .

deployment.extensions/monitoring-grafana created

service/monitoring-grafana created

clusterrolebinding.rbac.authorization.k8s.io/heapster created

clusterrolebinding.rbac.authorization.k8s.io/heapster-kubelet-api created

serviceaccount/heapster created

deployment.extensions/heapster created

service/heapster created

deployment.extensions/monitoring-influxdb created

service/monitoring-influxdb created

[root@master-47-35 heapster]# kubectl logs -f heapster-7797bb6dd4-jvs6n -n kube-system

I0827 02:25:20.229353       1 heapster.go:78] /heapster --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086

I0827 02:25:20.229399       1 heapster.go:79] Heapster version v1.5.4

I0827 02:25:20.230543       1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default" and version v1

I0827 02:25:20.230569       1 configs.go:62] Using kubelet port 10250

 

[root@node02 ~]# docker logs ee39c9d6530a

2019/03/25 14:01:01 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.

2019/03/25 14:01:31 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.

2019/03/25 14:02:01 Metric client health check failed: an error on the server ("[-]healthz failed: could not get the latest data batch\nhealthz check failed") has prevented the request from succeeding (get services heapster). Retrying in 30 seconds.

2019/03/25 14:02:31 Successful request to heapster

 

In theory the dashboard is now reachable in a browser at https://<any node IP>:<NodePort> (30376 in the kubectl get svc output above).
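A quick way to confirm the port is serving TLS before opening a browser (the node IP below is hypothetical; substitute one of your own nodes and the NodePort shown by kubectl get svc):

NODE_IP=192.168.1.101    # hypothetical: replace with one of your node IPs
curl -k -I https://$NODE_IP:30376/    # -k because the dashboard uses a self-signed certificate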

 

The dashboard login page offers two authentication methods: kubeconfig and token (both submitted over HTTPS).

3. Token authentication

(1) Create a dedicated certificate for the dashboard

[root@master kubernetes-dashboard]# cd /etc/kubernetes/pki

[root@master pki]# ll

total 68

-rw-r--r--. 1 root root 1216 Feb 28 06:23 apiserver.crt

-rw-r--r--. 1 root root 1090 Feb 28 06:23 apiserver-etcd-client.crt

-rw-------. 1 root root 1679 Feb 28 06:23 apiserver-etcd-client.key

-rw-------. 1 root root 1675 Feb 28 06:23 apiserver.key

-rw-r--r--. 1 root root 1099 Feb 28 06:23 apiserver-kubelet-client.crt

-rw-------. 1 root root 1679 Feb 28 06:23 apiserver-kubelet-client.key

-rw-r--r--. 1 root root 1025 Feb 28 06:23 ca.crt

-rw-------. 1 root root 1675 Feb 28 06:23 ca.key

drwxr-xr-x. 2 root root  162 Feb 28 06:23 etcd

-rw-r--r--. 1 root root 1038 Feb 28 06:23 front-proxy-ca.crt

-rw-------. 1 root root 1675 Feb 28 06:23 front-proxy-ca.key

-rw-r--r--. 1 root root 1058 Feb 28 06:23 front-proxy-client.crt

-rw-------. 1 root root 1679 Feb 28 06:23 front-proxy-client.key

-rw-------. 1 root root 1675 Feb 28 06:23 sa.key

-rw-------. 1 root root  451 Feb 28 06:23 sa.pub

-rw-r--r--. 1 root root  973 Mar 19 12:07 wolf.crt

-rw-r--r--. 1 root root  883 Mar 19 12:06 wolf.csr

-rw-------. 1 root root 1679 Mar 19 12:04 wolf.key

[root@master pki]# (umask 077;openssl genrsa -out dashboard.key 2048)

Generating RSA private key, 2048 bit long modulus

.............................................................................................................................................+++

...................................+++

e is 65537 (0x10001)

[root@master pki]# openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=magedu/CN=dashboard"

[root@master pki]# openssl x509 -req -in dashboard.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out dashboard.crt -days 365

Signature ok

subject=/O=magedu/CN=dashboard

Getting CA Private Key
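Optionally, sanity-check the freshly signed certificate before packaging it into a Secret (purely a verification step, not required by the walkthrough):

openssl x509 -in dashboard.crt -noout -subject -dates
openssl verify -CAfile ca.crt dashboard.crt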

[root@master pki]# kubectl create secret generic dashboard-cert -n kube-system --from-file=./dashboard.crt --from-file=dashboard.key=./dashboard.key

secret/dashboard-cert created

[root@master pki]# kubectl get secret -n kube-system |grep dashboard

dashboard-cert                                   Opaque                                2      8s

kubernetes-dashboard-certs                       Opaque                                0      100m

kubernetes-dashboard-key-holder                  Opaque                                2      24h

kubernetes-dashboard-token-jdw29                 kubernetes.io/service-account-token   3      100m

[root@master pki]# kubectl create serviceaccount def-ns-admin -n default

serviceaccount/def-ns-admin created

[root@master pki]# kubectl create rolebinding def-ns-admin --clusterrole=admin --serviceaccount=default:def-ns-admin

rolebinding.rbac.authorization.k8s.io/def-ns-admin created
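To confirm the binding grants what we expect, kubectl auth can-i can impersonate the ServiceAccount (a quick check; system:serviceaccount:default:def-ns-admin is the standard subject name for this ServiceAccount):

kubectl auth can-i list pods -n default --as=system:serviceaccount:default:def-ns-admin        # expected: yes
kubectl auth can-i list pods -n kube-system --as=system:serviceaccount:default:def-ns-admin   # expected: no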

[root@master pki]# kubectl get secret

NAME                       TYPE                                  DATA   AGE

admin-token-68nnz          kubernetes.io/service-account-token   3      5d23h

def-ns-admin-token-m5hhd   kubernetes.io/service-account-token   3      3m41s

default-token-6q28w        kubernetes.io/service-account-token   3      25d

mysa-token-hb6lq           kubernetes.io/service-account-token   3      5d23h

mysecret                   Opaque                                2      8d

mysecret-1                 Opaque                                2      8d

mysecret2                  Opaque                                2      8d

tomcat-ingress-secret      kubernetes.io/tls                     2      11d

[root@master pki]# kubectl describe secret def-ns-admin-token-m5hhd

Name:         def-ns-admin-token-m5hhd

Namespace:    default

Labels:       <none>

Annotations:  kubernetes.io/service-account.name: def-ns-admin

              kubernetes.io/service-account.uid: 5712af5c-4f10-11e9-bca0-a0369f95b76e

 

Type:  kubernetes.io/service-account-token

 

Data

====

ca.crt:     1025 bytes

namespace:  7 bytes

token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi1tNWhoZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1NzEyYWY1Yy00ZjEwLTExZTktYmNhMC1hMDM2OWY5NWI3NmUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.Pek2gNquzGfO07MZeZcmZqGlTf3osL0R1x1qNRSPwL25mrTkyU9TLmBQ0nPKkNFmHfQFqz9thJ9BgG5ANRrZ2USYXJy4so9wjk0pPLtk3RaDh-Y6xLta4wjVeGBRJIMf2NLCgAK3JoM6CIc3OawBzjlJR8Y0R2ea_HtN_X48uxciAmzfuxpUG-QdU5Bxt3AEccSTrSTyVKxhx2fJfYFwo-zyJfkznxDfuClvF6aG-h0mztMlvVZArGzxFSjb9O-ZHoyPzymYHmjC9LP6K0KdcIhnxkMFJl-VvJt4khVXoy_6a8MCrOAyG2hk4YQtCuMgkPpG806CuW8a-56Fj8LHUg

Copy this token and paste it into the dashboard login page. Note that with this token the dashboard can only view resources in the default namespace.


4. kubeconfig authentication

(1) Configure the cluster entry for def-ns-admin

[root@master pki]# kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://10.249.6.100:6443" --embed-certs=true --kubeconfig=/root/def-ns-admin.conf

Cluster "kubernetes" set.

(2) Write token-based credentials into the kubeconfig

kubectl config set-credentials -h  # credentials can be configured from a client cert/key pair or from a token; here we use the token

 

Usage:

  kubectl config set-credentials NAME [--client-certificate=path/to/certfile] [--client-key=path/to/keyfile]

[--token=bearer_token] [--username=basic_user] [--password=basic_password] [--auth-provider=provider_name]

[--auth-provider-arg=key=value] [options]

[root@master pki]# kubectl describe secret def-ns-admin-token-m5hhd

Name:         def-ns-admin-token-m5hhd

Namespace:    default

Labels:       <none>

Annotations:  kubernetes.io/service-account.name: def-ns-admin

              kubernetes.io/service-account.uid: 5712af5c-4f10-11e9-bca0-a0369f95b76e

 

Type:  kubernetes.io/service-account-token

 

Data

====

token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi1tNWhoZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1NzEyYWY1Yy00ZjEwLTExZTktYmNhMC1hMDM2OWY5NWI3NmUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.Pek2gNquzGfO07MZeZcmZqGlTf3osL0R1x1qNRSPwL25mrTkyU9TLmBQ0nPKkNFmHfQFqz9thJ9BgG5ANRrZ2USYXJy4so9wjk0pPLtk3RaDh-Y6xLta4wjVeGBRJIMf2NLCgAK3JoM6CIc3OawBzjlJR8Y0R2ea_HtN_X48uxciAmzfuxpUG-QdU5Bxt3AEccSTrSTyVKxhx2fJfYFwo-zyJfkznxDfuClvF6aG-h0mztMlvVZArGzxFSjb9O-ZHoyPzymYHmjC9LP6K0KdcIhnxkMFJl-VvJt4khVXoy_6a8MCrOAyG2hk4YQtCuMgkPpG806CuW8a-56Fj8LHUg

ca.crt:     1025 bytes

namespace:  7 bytes

 

In the Secret's data field the token is stored base64-encoded, so decode it when extracting it with jsonpath:

[root@master pki]# kubectl get secret def-ns-admin-token-m5hhd -o jsonpath={.data.token} |base64 -d

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi1tNWhoZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1NzEyYWY1Yy00ZjEwLTExZTktYmNhMC1hMDM2OWY5NWI3NmUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.Pek2gNquzGfO07MZeZcmZqGlTf3osL0R1x1qNRSPwL25mrTkyU9TLmBQ0nPKkNFmHfQFqz9thJ9BgG5ANRrZ2USYXJy4so9wjk0pPLtk3RaDh-Y6xLta4wjVeGBRJIMf2NLCgAK3JoM6CIc3OawBzjlJR8Y0R2ea_HtN_X48uxciAmzfuxpUG-QdU5Bxt3AEccSTrSTyVKxhx2fJfYFwo-zyJfkznxDfuClvF6aG-h0mztMlvVZArGzxFSjb9O-ZHoyPzymYHmjC9LP6K0KdcIhnxkMFJl-VvJt4khVXoy_6a8MCrOAyG2hk4YQtCuMgkPpG806CuW8a-56Fj8LHUg[root@master pki]#

Configure the credentials with this token:

[root@master pki]# kubectl config set-credentials def-ns-admin --token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi1tNWhoZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1NzEyYWY1Yy00ZjEwLTExZTktYmNhMC1hMDM2OWY5NWI3NmUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.Pek2gNquzGfO07MZeZcmZqGlTf3osL0R1x1qNRSPwL25mrTkyU9TLmBQ0nPKkNFmHfQFqz9thJ9BgG5ANRrZ2USYXJy4so9wjk0pPLtk3RaDh-Y6xLta4wjVeGBRJIMf2NLCgAK3JoM6CIc3OawBzjlJR8Y0R2ea_HtN_X48uxciAmzfuxpUG-QdU5Bxt3AEccSTrSTyVKxhx2fJfYFwo-zyJfkznxDfuClvF6aG-h0mztMlvVZArGzxFSjb9O-ZHoyPzymYHmjC9LP6K0KdcIhnxkMFJl-VvJt4khVXoy_6a8MCrOAyG2hk4YQtCuMgkPpG806CuW8a-56Fj8LHU --kubeconfig=/root/def-ns-admin.conf

User "def-ns-admin" set.

[root@master pki]#
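Equivalently, to avoid copy-pasting the long token by hand, it could have been captured into a shell variable first (same secret name and kubeconfig path as above; DEF_NS_ADMIN_TOKEN is just an arbitrary variable name):

DEF_NS_ADMIN_TOKEN=$(kubectl get secret def-ns-admin-token-m5hhd -o jsonpath={.data.token} | base64 -d)
kubectl config set-credentials def-ns-admin --token=$DEF_NS_ADMIN_TOKEN --kubeconfig=/root/def-ns-admin.conf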

(3) Configure the context and switch to it

[root@master pki]# kubectl config set-context def-ns-admin@kubernetes --cluster=kubernetes --user=def-ns-admin --kubeconfig=/root/def-ns-admin.conf

Context "def-ns-admin@kubernetes" created.

[root@master pki]# kubectl config use-context def-ns-admin@kubernetes --kubeconfig=/root/def-ns-admin.conf

Switched to context "def-ns-admin@kubernetes".

[root@master pki]# kubectl config view --kubeconfig=/root/def-ns-admin.conf

apiVersion: v1

clusters:

- cluster:

    certificate-authority-data: DATA+OMITTED

    server: https://10.249.6.100:6443

  name: kubernetes

contexts:

- context:

    cluster: kubernetes

    user: def-ns-admin

  name: def-ns-admin@kubernetes

current-context: def-ns-admin@kubernetes

kind: Config

preferences: {}

users:

- name: def-ns-admin

  user:

token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi1tNWhoZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1NzEyYWY1Yy00ZjEwLTExZTktYmNhMC1hMDM2OWY5NWI3NmUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.Pek2gNquzGfO07MZeZcmZqGlTf3osL0R1x1qNRSPwL25mrTkyU9TLmBQ0nPKkNFmHfQFqz9thJ9BgG5ANRrZ2USYXJy4so9wjk0pPLtk3RaDh-Y6xLta4wjVeGBRJIMf2NLCgAK3JoM6CIc3OawBzjlJR8Y0R2ea_HtN_X48uxciAmzfuxpUG-QdU5Bxt3AEccSTrSTyVKxhx2fJfYFwo-zyJfkznxDfuClvF6aG-h0mztMlvVZArGzxFSjb9O-ZHoyPzymYHmjC9LP6K0KdcIhnxkMFJl-VvJt4khVXoy_6a8MCrOAyG2hk4YQtCuMgkPpG806CuW8a-56Fj8LHU
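Before using this file in the browser, it can be checked from the command line and then copied to whatever machine the browser runs on, so it can be selected on the dashboard login page. A quick sanity check (paths are the ones used above; the scp target host is only an example):

kubectl get pods --kubeconfig=/root/def-ns-admin.conf                   # should succeed (default namespace)
kubectl get pods -n kube-system --kubeconfig=/root/def-ns-admin.conf    # should be forbidden
scp /root/def-ns-admin.conf user@workstation:~/                         # hypothetical host: the machine running the browser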

 

II. Summary

1. Deploy the dashboard:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

2. Change the Service to NodePort so it can be accessed externally:

kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system

3. Login authentication:

The account used to log in must be a ServiceAccount: the dashboard pod presents it to Kubernetes, and Kubernetes performs the actual authentication and authorization. There are two login methods.
token (see the sketch after this list):

  • (1) Create a ServiceAccount and, according to what it should be allowed to manage, bind it with a RoleBinding or ClusterRoleBinding to a suitable Role or ClusterRole;
  • (2) Get this ServiceAccount's secret and view the secret's details; the token is in there;
  • (3) Copy the token into the login page to log in.
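As a sketch of the tiered-authorization idea (not part of the walkthrough above; the names dashboard-admin and dashboard-cluster-admin are arbitrary), a ServiceAccount that should see the entire cluster would instead be bound to the built-in cluster-admin ClusterRole, and its token used to log in:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^dashboard-admin-token/{print $1}')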

kubeconfig: wrap the ServiceAccount's token into a kubeconfig file (a consolidated sketch follows the command outline below):

  • (1) Create a ServiceAccount and, according to what it should be allowed to manage, bind it with a RoleBinding or ClusterRoleBinding to a suitable Role or ClusterRole;
  • (2) Find its secret: kubectl get secret | awk '/^SERVICEACCOUNT_NAME/{print $1}'
  •     then extract the token: KUBE_TOKEN=$(kubectl get secret SERVICEACCOUNT_SECRET_NAME -o jsonpath={.data.token} | base64 -d)
  • (3) Generate the kubeconfig file:

kubectl config set-cluster

kubectl config set-credentials NAME --token=$KUBE_TOKEN

kubectl config set-context

kubectl config use-context
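Putting the four steps together for the def-ns-admin example used in this article (the API server address 10.249.6.100:6443 and the output path are the ones from the walkthrough; adjust them for your own cluster):

cd /etc/kubernetes/pki
SECRET=$(kubectl get secret | awk '/^def-ns-admin-token/{print $1}')
KUBE_TOKEN=$(kubectl get secret "$SECRET" -o jsonpath={.data.token} | base64 -d)
kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --embed-certs=true --server="https://10.249.6.100:6443" --kubeconfig=/root/def-ns-admin.conf
kubectl config set-credentials def-ns-admin --token=$KUBE_TOKEN --kubeconfig=/root/def-ns-admin.conf
kubectl config set-context def-ns-admin@kubernetes --cluster=kubernetes --user=def-ns-admin --kubeconfig=/root/def-ns-admin.conf
kubectl config use-context def-ns-admin@kubernetes --kubeconfig=/root/def-ns-admin.conf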

 

 

 
