k8s Security Mechanisms

Table of Contents

1. Kubernetes security mechanisms

2. Authentication
2.1 User types
2.2 The kubeconfig file
2.3 Authorization
2.4 Authorization modules
2.5 RBAC
2.5.1 Roles
2.5.2 Role bindings
2.5.3 Subjects
2.6 Admission Control
2.7 LimitRanger example
2.8 ResourceQuota example

3. User access example
3.1 Create a k8s user (User)
3.1.1 Preparation
3.1.2 Create the certificate
3.1.3 Create the user's kubeconfig file
3.2 Create a system user
3.3 Verify the user's access to k8s resources
3.4 Create a Role
3.5 Create a RoleBinding
3.6 Verify the user's access again
3.7 Create a ClusterRole
3.8 Create a ClusterRoleBinding
3.9 Verify the user's access again

4. ServiceAccount access example
4.1 Create a namespace
4.2 Check the namespace's ServiceAccount and Secret
4.3 Create a pod and check its ServiceAccount and Secret
4.4 Create a ServiceAccount
4.5 Create a pod with a custom ServiceAccount
4.6 Verify the ServiceAccount used by the pod

1. Kubernetes security mechanisms

The API Server is the entry point to a k8s cluster. By default it listens on two ports:

  • Local port 8080: accepts plain HTTP requests, is not exposed externally, and serves unauthenticated/unauthorized HTTP requests to the API Server

  • Secure port 6443: accepts authenticated and authorized HTTPS requests and is exposed externally (see the curl sketch below).
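As a quick illustration (a sketch only; admin.crt and admin.key are placeholder file names), an anonymous request to the secure port is either rejected or mapped to a user with very limited permissions, while a client certificate signed by the cluster CA passes authentication:

# anonymous HTTPS request to the secure port
curl -k https://192.168.122.11:6443/api

# request authenticated with a client certificate signed by the cluster CA
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert admin.crt --key admin.key \
     https://192.168.122.11:6443/api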

A user accessing the k8s API Server through the secure port must pass three gates: authentication, authorization, and admission control.

  • Authentication: identifies the user; methods include SSL certificates, tokens, username + password, etc.
  • Authorization: determines whether the user has the required permissions on the resource
  • Admission Control: determines whether the operation complies with the cluster's policies


Whether the request comes from the kubectl client, the API, or a UI, creating or accessing a resource must specify an apiVersion that the API Server supports; only when the apiVersion and kind match correctly can the operation proceed, otherwise it is rejected.

2. Authentication

Authentication identifies the user. Supported methods include:

  • SSL certificate
  • token
  • username + password

2.1 User types

There are two kinds of users:

  • User account (User): used when accessing the apiserver from outside the cluster; for example, the kubectl command runs as the cluster's admin user.
  • Service account (ServiceAccount): designed so that processes inside Pods can conveniently call the Kubernetes API or other external services.
  • User accounts are intended for humans, while service accounts are intended for processes in Pods that call the Kubernetes API;
  • User accounts are cluster-wide (not confined to a namespace), while a service account is limited to the namespace it belongs to;
  • Every namespace automatically gets a default service account

Reference: https://www.kubernetes.org.cn/service-account

2.2 The kubeconfig file

  • A kubeconfig file organizes information about clusters, users, namespaces, and authentication mechanisms, and is used to access a k8s cluster.
  • kubectl uses ~/.kube/config as its kubeconfig file by default; a different file can be specified with --kubeconfig

A kubeconfig file typically consists of the following sections:

  • users: the list of user accounts and their credentials
  • clusters: the list of target clusters
  • contexts: the user/cluster combinations, i.e. which user connects to which cluster
  • current-context: the context currently in use

Example: view the admin user's kubeconfig file

[root@master1 ~]# cat /root/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1RENDQXFDZ0F3SUJBZ0lVRXJCMXhTc2NDMUV3V2lkckc3UEJXT29HSkFFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qQXhNVEEwTVRNek5qQXdXaGdQTWpFeU1ERXdNVEV4TXpNMk1EQmFNR0V4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEREQUtCZ05WQkFvVApBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXpZNWhMTDhTYXU2MGVpNDR6RTlhNVdDdHVCc00KN0tmMXJkd2RnVVo2VFpRK3VFNkdhdmN5SWhCT3l2bFprVEt0WmhEcXdmSml3TjZBVk5tdTEvTktrVkZ2UEUrOQppOVF4Z0FkM0JsUk1FVi9nMUVrejZVRXpSM2JJSXdWd2QrUHRpZGZ3a1RtZ3E5bkR4MklpR3VMT1A4UnJoNmdFCmlkR1dLbERjZ0wrYnB6RUVieW4yTWpDWWY0dlZCUDlqS044WFdrM3pKdGlyMUhxOWx2akRXUEZrZStHMUZHdXEKKy9acy9jQk1Nb3JpSWhld1ZDb2t1ZGVVOVVPbUZSN0JObDFGOStBeWhWZHlmbE0yTFo1R2dQZmg2MFdBZkI2WApSa3o2dHMxcTJLR0FZdG5TeGhsblREamxpUWxWT0M2UURUeTBMNUgzV3lmdVdrOHlzWmVUNkxHQ3R3SURBUUFCCm8yWXdaREFPQmdOVkhROEJBZjhFQkFNQ0FRWXdFZ1lEVlIwVEFRSC9CQWd3QmdFQi93SUJBakFkQmdOVkhRNEUKRmdRVVRMNHErYkhUV2pMdmlVRk5BVmp4c1dTTDZuMHdId1lEVlIwakJCZ3dGb0FVVEw0cStiSFRXakx2aVVGTgpBVmp4c1dTTDZuMHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ2pZTXVLN2dDWlJmUWVPTW1SYVg5dmNjTldECjdOOVltb25XTHNZb0RiaGE2Und1VFlFVDRBWXJuUWE1d3pEV3pBK0Erb0liSjR4WjZnREZGK2s2KzM0V1ZreXcKLyt2NkFVL2hSNytjL2pIc3RGZGFRVXZGVHFhNUtmelcyOEZHQWptQ3ZOYyt1QjJ4ZjQvcy9EMnRwTWxpT3l6cwpKWEhNV3I3UWs5RHR3THJlZ2lZTlc3a202M25RbllWSTNRQzlIK0xNaVNsMHNIUlBER1VseU15N2hqWUNQR2duCkNEQlBWYWJhR0ltdW5YSnBGd2RKTUhmbFRuZllSUnQ3RG0zOVlXMjhWSGw4N0xlZXY5Y3dlVTI5KzNtYlJiUmwKME1tNlpoQzlsK0NjSXN6dEtUVWJBQ01LT1dYbUZXRk1uQVFZdnhxN2hIR25IaXpWb3k1VHNxVkZQRUE9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.122.11:6443
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: admin
  name: context-cluster1-admin
current-context: context-cluster1-admin
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVYUZlakwzVXErUTJqWnJIbzB4cmFaUVAyY3pZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qQXhNVEEwTVRNek9UQXdXaGdQTWpBM01ERXdNak14TXpNNU1EQmFNR2N4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEZ6QVZCZ05WQkFvVApEbk41YzNSbGJUcHRZWE4wWlhKek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweERqQU1CZ05WQkFNVEJXRmtiV2x1Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBNHdyc0dnSnE0T00zcUV5elE2Q3AKZGRjaWlUTnNuRG1odzBjcUwyWG1CTnA4cmxSUkFiUWY3QmNpTVRPVTF5TUdTYWo4YW9TNGhjUVFqcWpVRXRBKwphUVpOTXRPTXppZVZMMHpEZzMrV09Qb3o3MGdyeEU2QzFGVE5XcGFWRjljUUFWVFJNUS9ZZm9YYmNZRDdPT1dLCktCYldJUVJ2SzF2T0RiMmhEcUZBYkl5Q3N4dzI5ZFlyL3QwNDV2QVcrUTFWMjVIYjVrQVE2a1Z5WVFYcDZJSFQKdVZmaTRnVHlrclJDY1BaVUo5Q0ZCWFcvZFhrZkViZXpGYU9iU2VwVE5NU2FGeDBCNDE1c09CTFVYRTVuRG8wRwpNZ2duV015WTVUcW1UQU5CVGw5MTNySEdTeXdSZWRtQ2Z2Q1BiZSsxVnJFU2hiK3Z6aHZkMHZ5empNbk1NcDM3CkhRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGTlI5MWlJY2c5UVhiemlxOXRlQgpBZEdidnpnYk1COEdBMVVkSXdRWU1CYUFGRXkrS3ZteDAxb3k3NGxCVFFGWThiRmtpK3A5TUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQVhrNktieG1oUUdoeHd0TmZaU2txUG1tR21rK2U0OTJLRU1vSnZVcm5ZTTFaUkxmVk8KenRDbVByTWxORitIUWZySG5qczNtc1NzWmwwc1hxN3RwMnRKWEVURXFwdVdkMTBxQVZZVVBSelhrL1FLNDNXTAozYU13cWdZNjhkdGJOcytlNFpyT2J3YWJOZmp2T1RjUjVwV2RpTEFmT2hUSE9UZWtEZkUza1l4MGlQdGNGWmFTCmhVb3A5ZGV6RUpFbWNLM2xxQm1TYkN3cEhoSGZxUHlkcy8rVEhHMHdrV2E4a0pwSytxSUx6dkZDa0trRlZrMDcKcWtIMUthUHFzMUFKSmJzTWxMQjkzVEE0ajM1QWowTzF5eElpRjlDUmFoQ2pNeDlhaUl2dVNTejZpeGpMbDhWVgpnNzU0K2RzZThlMElaU0xHV2t6QzVDNnk0WWNid1ROa3dSU28KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBNHdyc0dnSnE0T00zcUV5elE2Q3BkZGNpaVROc25EbWh3MGNxTDJYbUJOcDhybFJSCkFiUWY3QmNpTVRPVTF5TUdTYWo4YW9TNGhjUVFqcWpVRXRBK2FRWk5NdE9NemllVkwwekRnMytXT1BvejcwZ3IKeEU2QzFGVE5XcGFWRjljUUFWVFJNUS9ZZm9YYmNZRDdPT1dLS0JiV0lRUnZLMXZPRGIyaERxRkFiSXlDc3h3Mgo5ZFlyL3QwNDV2QVcrUTFWMjVIYjVrQVE2a1Z5WVFYcDZJSFR1VmZpNGdUeWtyUkNjUFpVSjlDRkJYVy9kWGtmCkViZXpGYU9iU2VwVE5NU2FGeDBCNDE1c09CTFVYRTVuRG8wR01nZ25XTXlZNVRxbVRBTkJUbDkxM3JIR1N5d1IKZWRtQ2Z2Q1BiZSsxVnJFU2hiK3Z6aHZkMHZ5empNbk1NcDM3SFFJREFRQUJBb0lCQVFDUCtZSWxNRGxFYVBJSAplTFFKQTUzU2kxMXNNYjhEQm96cy8zamgyN1ZVaUpTZUcvdk9uOEhLZmhVQnJGc0xYanVEaEhTa3U0c2owR1JMCmd3WVE5QnRQa0Y1dUI3YjNtQ1JEZldOU1JSc25BSXJraHhpeUFMTjRJRnFrd203RFVtN2syVHRhNkZoNlN6b1EKTFNHaVAvVXZYT1BteGlBVkY3OS8rU3RqMHArKzlOMk5VT0NYSlV0MldUaFdyRDRBdHlBSG9hN3IvVnlxbGhKaQpLa3JaOURpSWh0cGp0bXorSlV1TXR0NHg4dWxvc2ZLZDBmeEtFTDdhamFsVXYxK1ZKZEc0YytvNUxjTEt6dWQzCk9QWW1aR1JPUmZHV3lSNUFNaTZmL09GU0F5dVU0MFB4Y2NrNEhRdlBkMUVPdndJVHhMc0lESy9TWUZtZDIrOTgKK1BwRkhoWUJBb0dCQU9qVVpHTGZOa2oxaFpZb0JDT2pieHM4VDZ5S0Mwc2lCNXo0VHlhdTNIRGhxNFRsbU13aQpLaVdVRWMrdmowd2NGblU4NWNsQjkwN2E4VWVnYzBsTHdHOFhuUUlYZHZvanB4Uk9oZ2pZZS90Vmk0SE5iMjJlCnoxNldsaGEyRWdESDBpQ1VsRERJVVIvVkN3N3VabzBwRnVkTjYxdXBybjQxU1M4cjBOVFVwQVdCQW9HQkFQbWoKR1ltanRMS04wY2hiNFFNejRTSW9DcEJPbGJhY1VxdzlxSGl6YzZrR0lwdDFZVFpDeTArcDVHSHJmbXpQczUrWgpVRFc4alFTdjU0R09sMUlnT2FNSTRLekk3YzhhZEExVjIzU09wSkZ4K1dlQTNQTThndUUrVzdnMzMwaWFJVXorClpxNDJyWC9YREVOWktrMHRESXdpSjRaN0VsTGsvaUxlTFJjS2NodWRBb0dBSnRraGN0blpuZXpSZ0U5bStXZFEKcDVPMW00MWcrd1ZQamVKa21vaThvVjBXR0ZPUzNpU3BNYUgySXRnaEtjclorRy9UeTNQbFJ0UURSRGRDWlMxdAppMXUrUjlZUWRRWXozNUNHanIxSVdwUllkYTBJZFpyMDhNeGhlS0tpZUZ6ZTg2NjIxempsU0ZGc3llMy9VSGhWCnFQeDRVOUFKSEdmWnNUUTNZUHVsN29FQ2dZQXdpdzdFUFJrK1AyV3oyR29BL1cyaC9yZFBLSGprY2JHQjNOdU4KM1JkNG5zRE1DaDlGTmVmakxkNURURS9oU09KYy9xc2lSL0FFSVZDYldPY0VDVkFyNitZRXMyM0xBQkhoSmZqdQp5VzRtK2tyTlhibVpjd0xnV2dmcjRyR0FwTGdsQ0xOZTNUTVAwUG1KZnlWQ3JLZnIrQmhIaGxjaDEzV3p3aEJ5CmtvMXQ0UUtCZ0I1Z08xQnNNYmpUNVZtSWJLRXVhMU03Q1BKcDVpZERNVGc2K0R2bVJKdDYwbmZrZGwyNGYzSlIKdHFwZFlXL2JyMVZsZzl1U2VFcnBXNEtTNXN0UkZPK3Q0Z212Rkx6TUJKdStoSSswN2pHUnBGanp1TDFPMVorOQpXZ0lxS2tKOW5DN3VSWEtxUmxBTXZTVVVjcUtLTUZSTWtBZEpmOUZHNEdVaHFSZUdhUlp2Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
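A few kubectl config subcommands are useful for inspecting and switching between the entries above (a sketch; they act on ~/.kube/config unless --kubeconfig is given, and the alternate path below is a placeholder):

kubectl config view                                   # show the merged kubeconfig (certificate data redacted)
kubectl config get-contexts                           # list available contexts
kubectl config use-context context-cluster1-admin     # change current-context
kubectl get pods --kubeconfig=/path/to/other/config   # use another kubeconfig for a single command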

2.3 Authorization

After authentication comes authorization, which determines whether the user has the required permissions on the resource.

A request carries the username, the requested action (verb), and the target object; if any authorization policy grants permission for the request, authorization succeeds.

2.4 Authorization modules

  • Node: authorizes kubelets based on which Pods have been scheduled to their nodes.
  • ABAC (Attribute-based access control): an access-control model that grants access to users through policies combining attributes; these policies can use any type of attribute (user attributes, resource attributes, object attributes, environment attributes, and so on).
  • RBAC (Role-based access control): manages access to computer or network resources based on the roles of individual users within an organization; it is driven by the "rbac.authorization.k8s.io" API group and supports dynamic policy configuration.
  • Webhook: a WebHook is simply an HTTP callback, an HTTP POST triggered when something happens; a web application that implements WebHooks sends a message to a URL whenever certain events occur.
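Which modules are active is controlled by the kube-apiserver's --authorization-mode flag; a minimal sketch (where the flag lives depends on how the cluster was deployed, e.g. a systemd unit file or a static Pod manifest):

kube-apiserver \
  --authorization-mode=Node,RBAC \
  ...                                # other flags omitted; the modes are evaluated in the order listed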

2.5 RBAC


2.5.1 Roles

  • Role: grants access within a specific namespace

  • ClusterRole: grants access across all namespaces

2.5.2 Role bindings

  • RoleBinding: binds a Role to a subject

  • ClusterRoleBinding: binds a ClusterRole to a subject

2.5.3 Subjects

  • User: an individual user
  • Group: a user group
  • ServiceAccount: a service account
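Besides the User and ServiceAccount subjects demonstrated later in this article, a role can also be bound to a Group. A minimal sketch, assuming we want to give pod read access to every user whose certificate carries O=k8s (the group name), reusing the clusterrole-pods-reader created in section 3.7:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: clusterrolebinding-group-pods-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: clusterrole-pods-reader        # the ClusterRole created in section 3.7
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: k8s                            # the group, i.e. the O field of the client certificate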

2.6 Admission Control

After passing authentication and authorization, the client still does not get the API Server's final response; the request must also pass the checks of the chain of admission controller plugins configured in Admission Control.

Admission controllers

Admission Control is configured with a list of "admission controller" plugins. Every request sent to the API Server must pass the check of each admission controller in the list; if any check fails, the API Server rejects the request.

In addition, admission controllers can modify request parameters to carry out automation tasks; the ServiceAccount controller is one example.
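The list of enabled plugins is configured on the kube-apiserver; a minimal sketch (newer releases use --enable-admission-plugins, older ones use --admission-control; adjust to your deployment):

kube-apiserver \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  ...                                # other flags omitted; ResourceQuota is placed last, as recommended below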

The currently configurable admission controllers include:

  • AlwaysAdmit: admits all requests;
  • AlwaysPullImages: always pulls the image before starting a container, equivalent to setting imagePullPolicy=Always on every container
  • AlwaysDeny: denies all requests; generally used for testing;
  • DenyExecOnPrivileged: intercepts all requests that try to execute commands in privileged containers. If your cluster supports privileged containers and you want to keep users from executing commands in them, this plugin is strongly recommended; its functionality has been merged into DenyEscalatingExec.
  • ImagePolicyWebhook: delegates admission decisions to a backend webhook. It requires a configuration file (set via the kube-apiserver flag --admission-control-config-file) that defines the backend webhook's parameters. The plugin is currently Alpha.
  • ServiceAccount: automates the handling of ServiceAccounts; it is enabled by default and strongly recommended if you use ServiceAccount objects.
  • SecurityContextDeny: invalidates all SecurityContext definitions in Pods. SecurityContext defines operating-system-level security settings (uid, gid, capabilities, SELinux, etc.) for containers. In clusters where PodSecurityPolicy is not enabled, enabling this plugin is recommended to block insecure settings on containers.
  • ResourceQuota: used for quota management at the namespace level; it observes all requests and ensures the namespace's quota is not exceeded. It is recommended to place this plugin last in the Admission Control list, so that quota is not allocated prematurely to Pods that might be rejected by other plugins.
  • LimitRanger: used for resource-limit management at the namespace level, ensuring resource limits on Pods. It also applies defaults to Pods that set no limits, for example giving every Pod in the "default" namespace a CPU request of 0.1.
  • InitialResources: an experimental feature that sets initial resource requests and limits for Pods that do not specify them, based on the historical resource usage of their images.
  • NamespaceLifecycle: rejects requests that try to create resources in a namespace that does not exist. When a namespace is deleted, the system deletes all objects in that namespace, including Pods, Services, and so on.
  • DefaultStorageClass: to support dynamic provisioning of shared storage, assigns the default StorageClass to PVCs that specify neither a StorageClass nor a PV, minimizing the backend storage details a user needs to know when creating a PVC.
  • DefaultTolerationSeconds: for Pods that set no forgiveness tolerations, adds default tolerations for the notready:NoExecute and unreachable:NoExecute taints, with a toleration time of 5 minutes.
  • PodSecurityPolicy: when a Pod is created or modified, decides whether to admit it based on the Pod's security context and the available PodSecurityPolicies.

2.7 LimitRanger example

[root@master1 ~]# kubectl create ns limitrange
namespace/limitrange created
[root@master1 ~]# kubectl describe ns limitrange
Name:         limitrange
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.			# no LimitRange exists yet
[root@master1 ~]# vim limitrange.yml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: limitrange
spec:
  limits:
  - default:
      cpu: 1000m
    defaultRequest:
      cpu: 1000m
    min:
      cpu: 500m
    max:
      cpu: 2000m
    maxLimitRequestRatio:
      cpu: 4
    type: Container
[root@master1 ~]# kubectl apply -f limitrange.yml
limitrange/cpu-limit-range created
[root@master1 ~]# kubectl describe ns limitrange
Name:         limitrange
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

Resource Limits
 Type       Resource  Min   Max  Default Request  Default Limit  Max Limit/Request Ratio
 ----       --------  ---   ---  ---------------  -------------  -----------------------
 Container  cpu       500m  2    1                1              4

[root@master1 ~]# vim limitrange-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: limitrange
spec:
  containers:
  - name: c1
    image: nginx:1.15-alpine
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 200m						# CPU request of 200m, below the namespace's minimum CPU limit (500m)
[root@master1 ~]# kubectl apply -f limitrange-pod.yml
Error from server (Forbidden): error when creating "limitrange-pod.yml": pods "nginx" is forbidden: [minimum cpu usage per Container is 500m, but request is 200m, cpu max limit to request ratio per Container is 4, but provided ratio is 5.000000]

Note: the namespace's minimum CPU request is 500m, but this Pod requests 200m, so admission control rejected it. Change the request to a value within the allowed range and the Pod can be created.

[root@master1 ~]# vim limitrange-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: limitrange
spec:
  containers:
  - name: c1
    image: nginx:1.15-alpine
    imagePullPolicy: IfNotPresent
    resources:
      requests:	
        cpu: 600m							 # changed to a value greater than 500m
[root@master1 ~]# kubectl apply -f limitrange-pod.yml
pod/nginx created

[root@master1 ~]# kubectl get pods -n limitrange
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          11s		 # the pod was created successfully

2.8 ResourceQuota example

[root@master1 ~]# vim resourcequota.yml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
  namespace: limitrange
spec:
  hard:						  # hard limits
    pods: "1"				  # at most 1 pod is allowed in this namespace
[root@master1 ~]# kubectl apply -f resourcequota.yml
resourcequota/quota created
[root@master1 ~]# kubectl describe ns limitrange
Name:         limitrange
Labels:       <none>
Annotations:  <none>
Status:       Active

Resource Quotas
 Name:     quota
 Resource  Used  Hard
 --------  ---   ---
 pods      1     1						# the hard limit is 1 pod, and 1 pod already exists

Resource Limits
 Type       Resource  Min   Max  Default Request  Default Limit  Max Limit/Request Ratio
 ----       --------  ---   ---  ---------------  -------------  -----------------------
 Container  cpu       500m  2    1                1              4
[root@master1 ~]# vim resourcequota-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx2								  # a different pod name
  namespace: limitrange						  # the same namespace
spec:
  containers:
  - name: c1
    image: nginx:1.15-alpine
    imagePullPolicy: IfNotPresent
[root@master1 ~]# kubectl apply -f resourcequota-pod.yml
Error from server (Forbidden): error when creating "resourcequota-pod.yml": pods "nginx2" is forbidden: exceeded quota: quota, requested: pods=1, used: pods=1, limited: pods=1
The request is rejected: the pod count would exceed the quota.

3. User access example

Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/

3.1 Create a k8s user (User)

3.1.1 Preparation

[root@master1 ~]# mkdir rbac
[root@master1 ~]# cd rbac/

[root@master1 rbac]# cp /etc/kubernetes/ssl/ca-key.pem .
[root@master1 rbac]# cp /etc/kubernetes/ssl/ca.pem .

3.1.2 Create the certificate

# Generate the user's private key
[root@master1 rbac]# openssl genrsa -out daniel.key 2048					

# Create the certificate signing request (in -subj, CN is the username and O is the group)
[root@master1 rbac]# openssl req -new -key daniel.key -out daniel.csr -subj "/O=k8s/CN=daniel"

# Sign the certificate with the cluster CA
[root@master1 rbac]# openssl  x509 -req -in daniel.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out daniel.crt -days 365
Signature ok
subject=/O=k8s/CN=daniel
Getting CA Private Key

[root@master1 rbac]# ls
ca-key.pem  ca.pem  ca.srl  daniel.crt  daniel.csr  daniel.key
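Optionally, sanity-check the signed certificate before using it (the expected results are noted in the comments):

openssl x509 -noout -subject -in daniel.crt    # should show O=k8s, CN=daniel
openssl verify -CAfile ca.pem daniel.crt       # should report: daniel.crt: OK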

3.1.3 Create the user's kubeconfig file

The process involves several steps, so put them in a script:

[root@master1 rbac]# vim create-rbac-user.sh

# Define the cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.122.11:6443 \
  --kubeconfig=daniel-kubeconfig

# Set the user's credentials (client certificate and key)
kubectl config set-credentials daniel \
  --client-key=daniel.key \
  --client-certificate=daniel.crt \
  --embed-certs=true \
  --kubeconfig=daniel-kubeconfig

# Create the context
kubectl config set-context daniel@kubernetes \
  --cluster=kubernetes \
  --user=daniel \
  --kubeconfig=daniel-kubeconfig

# Switch to the new context
kubectl config use-context daniel@kubernetes --kubeconfig=daniel-kubeconfig
# View the resulting kubeconfig
kubectl config view --kubeconfig=daniel-kubeconfig

Run the script:

[root@master1 rbac]# sh create-rbac-user.sh
Cluster "kubernetes" set.
User "daniel" set.
Context "daniel@kubernetes" created.
Switched to context "daniel@kubernetes".
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.122.11:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: daniel
  name: daniel@kubernetes
- context:
    cluster: kubernetes
    user: daniel
  name: default
current-context: daniel@kubernetes
kind: Config
preferences: {}
users:
- name: daniel
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

View the generated kubeconfig file:

[root@master1 ~]# cat /root/rbac/daniel-kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1RENDQXFDZ0F3SUJBZ0lVRXJCMXhTc2NDMUV3V2lkckc3UEJXT29HSkFFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qQXhNVEEwTVRNek5qQXdXaGdQTWpFeU1ERXdNVEV4TXpNMk1EQmFNR0V4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEREQUtCZ05WQkFvVApBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXpZNWhMTDhTYXU2MGVpNDR6RTlhNVdDdHVCc00KN0tmMXJkd2RnVVo2VFpRK3VFNkdhdmN5SWhCT3l2bFprVEt0WmhEcXdmSml3TjZBVk5tdTEvTktrVkZ2UEUrOQppOVF4Z0FkM0JsUk1FVi9nMUVrejZVRXpSM2JJSXdWd2QrUHRpZGZ3a1RtZ3E5bkR4MklpR3VMT1A4UnJoNmdFCmlkR1dLbERjZ0wrYnB6RUVieW4yTWpDWWY0dlZCUDlqS044WFdrM3pKdGlyMUhxOWx2akRXUEZrZStHMUZHdXEKKy9acy9jQk1Nb3JpSWhld1ZDb2t1ZGVVOVVPbUZSN0JObDFGOStBeWhWZHlmbE0yTFo1R2dQZmg2MFdBZkI2WApSa3o2dHMxcTJLR0FZdG5TeGhsblREamxpUWxWT0M2UURUeTBMNUgzV3lmdVdrOHlzWmVUNkxHQ3R3SURBUUFCCm8yWXdaREFPQmdOVkhROEJBZjhFQkFNQ0FRWXdFZ1lEVlIwVEFRSC9CQWd3QmdFQi93SUJBakFkQmdOVkhRNEUKRmdRVVRMNHErYkhUV2pMdmlVRk5BVmp4c1dTTDZuMHdId1lEVlIwakJCZ3dGb0FVVEw0cStiSFRXakx2aVVGTgpBVmp4c1dTTDZuMHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ2pZTXVLN2dDWlJmUWVPTW1SYVg5dmNjTldECjdOOVltb25XTHNZb0RiaGE2Und1VFlFVDRBWXJuUWE1d3pEV3pBK0Erb0liSjR4WjZnREZGK2s2KzM0V1ZreXcKLyt2NkFVL2hSNytjL2pIc3RGZGFRVXZGVHFhNUtmelcyOEZHQWptQ3ZOYyt1QjJ4ZjQvcy9EMnRwTWxpT3l6cwpKWEhNV3I3UWs5RHR3THJlZ2lZTlc3a202M25RbllWSTNRQzlIK0xNaVNsMHNIUlBER1VseU15N2hqWUNQR2duCkNEQlBWYWJhR0ltdW5YSnBGd2RKTUhmbFRuZllSUnQ3RG0zOVlXMjhWSGw4N0xlZXY5Y3dlVTI5KzNtYlJiUmwKME1tNlpoQzlsK0NjSXN6dEtUVWJBQ01LT1dYbUZXRk1uQVFZdnhxN2hIR25IaXpWb3k1VHNxVkZQRUE9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.122.11:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: daniel
  name: daniel@kubernetes
- context:
    cluster: kubernetes
    user: daniel
  name: default
current-context: daniel@kubernetes
kind: Config
preferences: {}
users:
- name: daniel
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvRENDQWVRQ0NRQzU2Z3poZFcrb1pUQU5CZ2txaGtpRzl3MEJBUXNGQURCaE1Rc3dDUVlEVlFRR0V3SkQKVGpFUk1BOEdBMVVFQ0JNSVNHRnVaMXBvYjNVeEN6QUpCZ05WQkFjVEFsaFRNUXd3Q2dZRFZRUUtFd05yT0hNeApEekFOQmdOVkJBc1RCbE41YzNSbGJURVRNQkVHQTFVRUF4TUthM1ZpWlhKdVpYUmxjekFlRncweU1ERXhNRGt4Ck16UTFNemRhRncweU1URXhNRGt4TXpRMU16ZGFNQjh4RERBS0JnTlZCQW9NQTJzNGN6RVBNQTBHQTFVRUF3d0cKWkdGdWFXVnNNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJ5SFEvekhveUtNMQo1VzVwM3dLTVBlcjNrbXZET0pTMFhmQVJRTUVYYTVkVmtvVForNmN1M1A2ay9aZTNwNlIydzZIczAvSzlScEE5ClprMGIzb1NGbzlsODlwWTZ5ODQyN1FVN0lHSzJIdUMrMkZ4Vys1c1ZsMFhqczRNUWNxWDgwVTh0R3dadTZBUmoKc1VkcmVwSDNxMmpTbFpEbDNpME1ockFaSmR6dk1XU1I5cnVzMjFhbzYzaUtXL291dFJqaEt0cDhoV1QvbFBiSgpnQkYyV3E1b2RsZ1ZoY2YxT0xBNGQ1L2JTUXUwMkQwdHFmM21RamVNN0FSYWJGdG8wcUhpS2lQNjBINkI4OEg2Ck1KYXdFWnp2bUt2UXc4YjhMVWxmNkpybkM1WGc4TDA2WHNEQmJ6MFRxQzFkeHFjY2pVaXRiUEQwN3ZKemFUZDEKK1p4U2JnYW1nUUlEQVFBQk1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ212YmgvUnMreVJ0S2JUSTVQQUE1UApKSER6MEJCNnJ4ZUxTTTYxcndJaFY0MXZjNE1iYkVKSW1yMm4vQU84Q3BjOUFlaExXeWExQkozaUswcW54bWYxCnhPT1JjYlgrb3JJQ1lUWGpPV3ZzbGlxSmlwcGRsd0syOEE0YkVGaUZIemVwcENiYm1JUHFnSzYva3pGTWl5ZEsKK0Y5ZVJnVjhTYk91aUV2MXRvWUxGOC9pMmpkeW1BWkszZUMrNnRuMFQ3UzRLdmxjSFYycFpqa051ZTRTVzNCYwpJRHdaKzVyMlkybTMrZmY2TVc1QUhFcnRubG5ETlFKeXRWRmFGTWViaDNlZlZENk1Xa0psVG1kbGlpaUxKVzYrCklmcXhxTEo5QWNBTTZ2Mk1Belh2RHRBWjF4RFNSSUhtamk3d1A1RmYxdGdFeVB3a3N5NWs1UCtYKytadUVPTlcKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMnlIUS96SG95S00xNVc1cDN3S01QZXIza212RE9KUzBYZkFSUU1FWGE1ZFZrb1RaCis2Y3UzUDZrL1plM3A2UjJ3NkhzMC9LOVJwQTlaazBiM29TRm85bDg5cFk2eTg0MjdRVTdJR0sySHVDKzJGeFcKKzVzVmwwWGpzNE1RY3FYODBVOHRHd1p1NkFSanNVZHJlcEgzcTJqU2xaRGwzaTBNaHJBWkpkenZNV1NSOXJ1cwoyMWFvNjNpS1cvb3V0UmpoS3RwOGhXVC9sUGJKZ0JGMldxNW9kbGdWaGNmMU9MQTRkNS9iU1F1MDJEMHRxZjNtClFqZU03QVJhYkZ0bzBxSGlLaVA2MEg2Qjg4SDZNSmF3RVp6dm1LdlF3OGI4TFVsZjZKcm5DNVhnOEwwNlhzREIKYnowVHFDMWR4cWNjalVpdGJQRDA3dkp6YVRkMStaeFNiZ2FtZ1FJREFRQUJBb0lCQUM0Rm1DTkFsQ2N2UCs2RApsZkNrVkZmY3RVb25IOFVPVmlNMGZDc2hZbzNTSnNLeTdDVVNJNkYrU2FDSThLVmd3U0N0TFZRR013SlM0TkpQCmQzN3dUakR2U0hOeGNiL0QrMzBrRUJycElGNnp6T0ViNDVqTUpXaXo0UEtlb0NHMEFyamk3QkkwODYzKzNnRXAKZ3NVblN1RFJEd2tpZVdTRUNaRndBcGdWSXBjNHFHbG1hV1l0U2d4VWZCMDJnZDNSSkMxMlhUK3hjeG9TU24xWQpPaFl0WGhOaEtXaWF2b2FUQ1JzV0hrTU9DaFVVdFI1eUtoSUZxTGFYb2llMXBhbDhDb3o1SkJkMXppbVJqVUIyCjRSSGR2K01Xemt0SlMyU1RSYWpKYzJ3TWozT3QwR3FsckJJcDh0TksrYjVFQzNnRitXNytRa2wzdE9IRUVWNk8KWWtGdUpCRUNnWUVBOTVXbGV1cnhxQ0JuQ1VZeWxIRm1KcGlPeHgrVENKQnEzems3b1dzMmdVRUFXUFNBUG9ETgpaS0JKSXJOekxnTUJmY2RjMC9MN0VRQnhaVmsvQnpCMjloZks5bC9CMVJxSkNqSGN6bzVJbEJwVUlOK1ZoZHduCjlZTXhnNDllYTNCTnJDd2crYm5QRkRlWFVnU0EzQzB3N3VCSGF0ZTNyMURjVThkM3dUYnV5ZmNDZ1lFQTRwU1gKWHQwbWJsbVEzUlVpd1lvK3RQRjZBZW41TWN6Zko2azJRc1pwVVErMlI4NUlaa1ZKSVJta0VaMmdCUWJOMWhFUQo5SGtOYWc5dFRHM044UU01V3NGbzJCYXViMmlFTEwxbk1vUjNxZEZrMUdHaVdOSW80MTFScHU5MU43OXRMRWhpCk5BaWtjQ0RNajBUbjh1MUZlS3cyV1YybGUySEVwUUxueFJMSHRVY0NnWUVBN3JuRWpmanNlb2lEMFZWMmY5dTMKKzQ4dVhLSDlURXkwUC9rWG5sV2FnSmNFbHNGcXh0bTdNa083UVZKMnhPN3hDUzZxTzFRVzhoWFRRVDRFb0JJRApJdGRlUktHZ2JMTGhZSXNqdGFkTUpFSWV6SFBKNFFDZkp3UVJRaUdEa1dULy8vdXM4QjVScGRUVWU2a0w5MEU3CjVaa3NROEVzeEViYnE0eXgxQVA3eTVFQ2dZQis0dEdtS1ZpbFFYa3pwZXR4MFJadXhEQWNORWYrRUlvRHJ1b1oKR1AzUGpHbjY2bFYzR0dQc3FteE1CU2FneWYrcFdxTlJGUjFqQU1Lbm8zSkoveEUyZHN0Yy84NnR5eTBCMVF1ZQpxYnBObWxySjZPYytzR3AvY1RvUnIxWElZMllpLzJJblc1T2l5MnRpN0Nhc3p4MHhoOWRqMG1LaXBzeTF6OXUxCmJGLzBid0tCZ0FmZm96OXNMeXpIK0pvT3NsUnNLaEhWTHhmSXoxUXZkT3FtY0hQN0paS0hqMFpyeGJsajlzYVoKMjROK0x1ZFZpRjl0V3VxOUYxVk1LbXY4K3pITW91d3JwcHduN1JEQm93dnk0M3lnYkZDNXZrd2dkMXdIcmcwOApycEMza0NLWjJlbERxSVZoZlYxaUlHSDhvQmlqSVJiUEFlTGtkRksveWFwS09Rd2pZNkM4Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

3.2 Create a system user

[root@master1 rbac]# useradd daniel
[root@master1 rbac]# mkdir -p /home/daniel/.kube
[root@master1 rbac]# cp /root/rbac/daniel-kubeconfig /home/daniel/.kube/config
[root@master1 rbac]# chown daniel.daniel -R /home/daniel/
[root@master1 rbac]# su - daniel

Note: after switching to this login user, kubectl uses the /home/daniel/.kube/config file.

3.3 Verify the user's access to k8s resources

As shown below, the daniel user has no permission on pods, svc, ns, or any other resources:

[daniel@master1 ~]$ kubectl get pod
Error from server (Forbidden): pods is forbidden: User "daniel" cannot list resource "pods" in API group "" in the namespace "default"

[daniel@master1 ~]$ kubectl get svc
Error from server (Forbidden): services is forbidden: User "daniel" cannot list resource "services" in API group "" in the namespace "default"

[daniel@master1 ~]$ kubectl get ns
Error from server (Forbidden): namespaces is forbidden: User "daniel" cannot list resource "namespaces" in API group "" at the cluster scope

3.4 Create a Role

[root@master1 rbac]# vim role-pods-read.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-pods-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch

Note: this Role grants read-related permissions (get, list, watch) on the pods resource.

[root@master1 rbac]# kubectl apply -f role-pods-read.yaml
role.rbac.authorization.k8s.io/role-pods-reader created
[root@master1 rbac]# kubectl get role |grep role-pods-reader
role-pods-reader                        2020-11-09T14:36:36Z

3.5 Create a RoleBinding

[root@master1 rbac]# vim rolebinding-pods-read.yml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rolebinding-pods-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: role-pods-reader				   # matches the Role created in the previous step
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: daniel								# matches the k8s user created earlier
[root@master1 rbac]# kubectl apply -f rolebinding-pods-read.yml
rolebinding.rbac.authorization.k8s.io/rolebinding-pods-reader created
[root@master1 rbac]# kubectl get rolebinding |grep rolebinding-pods-reader
rolebinding-pods-reader                 Role/role-pods-reader                        98s

Note: through this RoleBinding, the k8s user daniel (which the system user daniel authenticates as via the kubeconfig) is now associated with the permissions defined in the Role.
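For reference, the same Role and RoleBinding can also be created imperatively (a sketch; append --dry-run=client -o yaml to generate the manifests instead of creating the objects directly):

kubectl create role role-pods-reader --verb=get,list,watch --resource=pods
kubectl create rolebinding rolebinding-pods-reader --role=role-pods-reader --user=daniel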

3.6 Verify the user's access again

[root@master1 rbac]# su - daniel

[daniel@master1 ~]$ kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
deploy-nginx-nfs-6f9bc4546c-dw64l         1/1     Running   1          26h
deploy-nginx-nfs-6f9bc4546c-fv4wl         1/1     Running   1          26h
nfs-client-provisioner-5b5ddcd6c8-d7sjw   1/1     Running   1          26h
pod-cm1                                   1/1     Running   7          2d2h
pod-cm2                                   1/1     Running   7          2d2h
pod-mysql-secret1                         1/1     Running   1          27h
pod-mysql-secret2                         1/1     Running   1          27h
web-0                                     1/1     Running   1          26h
web-1                                     1/1     Running   1          26h
web-2                                     1/1     Running   1          26h

[daniel@master1 ~]$ kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "daniel" cannot list resource "pods" in API group "" in the namespace "kube-system"


[daniel@master1 ~]$ kubectl get svc
Error from server (Forbidden): services is forbidden: User "daniel" cannot list resource "services" in API group "" in the namespace "default"

[daniel@master1 ~]$ kubectl get ns
Error from server (Forbidden): namespaces is forbidden: User "daniel" cannot list resource "namespaces" in API group "" at the cluster scope

Note: as shown above, the daniel user can now access pods in the default namespace, but still has no permission on other namespaces or other resources.
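A quick way to check such permissions without switching users is kubectl auth can-i with impersonation, run from the admin account (a sketch; the expected answers match the results above):

kubectl auth can-i list pods --as=daniel                  # expected: yes
kubectl auth can-i list pods --as=daniel -n kube-system   # expected: no
kubectl auth can-i list services --as=daniel              # expected: no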

3.7 Create a ClusterRole

[root@master1 rbac]# vim clusterrole-pods-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterrole-pods-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@master1 rbac]# kubectl apply -f clusterrole-pods-reader.yaml
clusterrole.rbac.authorization.k8s.io/clusterrole-pods-reader created
[root@master1 rbac]# kubectl get clusterrole |grep pods-reader
clusterrole-pods-reader                                                2022-08-06T14:55:30Z

3.8 Create a ClusterRoleBinding

[root@master1 rbac]# vim clusterrolebinding-pods-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: clusterrolebinding-pods-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: clusterrole-pods-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: daniel
[root@master1 rbac]# kubectl apply -f clusterrolebinding-pods-reader.yaml
clusterrolebinding.rbac.authorization.k8s.io/clusterrolebinding-pods-reader created
[root@master1 rbac]# kubectl get clusterrolebinding |grep pods-reader
clusterrolebinding-pods-reader           ClusterRole/clusterrole-pods-reader               27s

3.9 Verify the user's access again

[root@master1 rbac]# su - daniel

[daniel@master1 ~]$ kubectl get pod --all-namespaces

Now pods in all namespaces can be listed.

Question: what about access to other resources such as svc?

For reference, look at the rules of the built-in admin ClusterRole:

[root@master1 rbac]# kubectl get clusterrole admin -o yaml
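One way to do it (a sketch): add the extra resources to the rules of clusterrole-pods-reader and re-apply it; RBAC changes take effect immediately, so the daniel user would then be able to read services in all namespaces as well.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterrole-pods-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services          # added: read access to services
  verbs:
  - get
  - list
  - watch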

4. ServiceAccount access example

4.1 Create a namespace

[root@master1 ~]# kubectl create namespace sa
namespace/sa created

4.2 Check the namespace's ServiceAccount and Secret

Creating a namespace automatically creates a default serviceaccount and a secret:

[root@master1 ~]# kubectl get sa -n sa
NAME      SECRETS   AGE
default   1         37s

[root@master1 ~]# kubectl get secret -n sa
NAME                  TYPE                                  DATA   AGE
default-token-dtbj5   kubernetes.io/service-account-token   3      49s
[root@master1 ~]# kubectl describe sa default -n sa
Name:                default
Namespace:           sa
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   default-token-dtbj5
Tokens:              default-token-dtbj5
Events:              <none>

[root@master1 ~]# kubectl describe secret default-token-dtbj5 -n sa
Name:         default-token-dtbj5
Namespace:    sa
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: default
              kubernetes.io/service-account.uid: 8372360a-2b0e-41cd-9023-90514699573a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  2 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InA0ZmVpcVJFUEJVdzIxNF9CY0FEZGljQUFiLUl3S21VTGI5QkpiLXZocDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLWR0Ymo1Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4MzcyMzYwYS0yYjBlLTQxY2QtOTAyMy05MDUxNDY5OTU3M2EiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c2E6ZGVmYXVsdCJ9.FEeJ74ZDXlEMqnWaU0UlenWffN2T35RvT011p8ZX7r12k5yffn-19tZQvYDJnErB_N7XryON9FfxtcA91dkbvZA9RWNZR4eBwrePZk6jaqjdMFK12LGcffR13kTh733L2394QF47qshQAsyKAABs8SyR-d6_BiLfxNUbnefuGxWGG3nEiMGuayoyerB32Xt4bBIMo8k6JZD4JNqCb1Bzq3wdCmkSIRrzfxVSEo-eW72rsw-LkcKGdwZcmRqV-FZuTA8TP4pCnJRbuDPx9Td9M3zz_hWZ5B0FW1XkJo1wiuTrIrMj-p5Dt4aEoj0y3eNDJNdV094EgBIMqy89emOb2A

4.3 Create a pod and check its ServiceAccount and Secret

[root@master1 ~]# kubectl run nginx --image=nginx:1.15-alpine -n sa
pod/nginx created
[root@master1 ~]# kubectl describe pod nginx -n sa
......
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dtbj5 (ro)
......
Volumes:
  default-token-dtbj5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-dtbj5

Note: a Pod that does not specify a serviceaccount uses default.
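Inside the Pod, the mounted token and CA certificate can be used to call the API server directly. A minimal sketch run from a shell in the container, assuming curl is available in the image (on an alpine-based image it can be installed with apk add curl); kubernetes.default.svc is the in-cluster address of the API server. With the default ServiceAccount this typically returns 403 Forbidden, which shows the request is authenticated but not authorized:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc/api/v1/namespaces/$NS/pods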

4.4 Create a ServiceAccount

[root@master1 ~]# kubectl create sa daniel -n sa
serviceaccount/daniel created
[root@master1 ~]# kubectl get sa -n sa
NAME      SECRETS   AGE
daniel    1         12s
default   1         13m
[root@master1 ~]# kubectl get secret -n sa
NAME                  TYPE                                  DATA   AGE
daniel-token-pjdmc    kubernetes.io/service-account-token   3      18s
default-token-dtbj5   kubernetes.io/service-account-token   3      13m

4.5 Create a pod with a custom ServiceAccount

[root@master1 ~]# vim sa-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
  namespace: sa
spec:
  containers:
  - name: c1
    image: nginx:1.15-alpine
    ports:
    - name: httpd
      containerPort: 80
  serviceAccountName: daniel				# use the daniel ServiceAccount

4.6 Verify the ServiceAccount used by the pod

[root@master1 ~]# kubectl apply -f sa-pod.yml
pod/nginx2 created
[root@master1 ~]# kubectl describe pod nginx2 -n sa
......
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from daniel-token-pjdmc (ro)
......
Volumes:
  daniel-token-pjdmc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  daniel-token-pjdmc
......

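To actually grant this ServiceAccount some permissions, bind it to a Role or ClusterRole just as was done for the User in section 3. A minimal sketch that reuses clusterrole-pods-reader from section 3.7, scoped to the sa namespace by using a RoleBinding; processes in Pods running as this ServiceAccount could then read pods in the sa namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rolebinding-sa-pods-reader
  namespace: sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: clusterrole-pods-reader
subjects:
- kind: ServiceAccount              # ServiceAccount subjects omit apiGroup
  name: daniel
  namespace: sa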