Introduction to K8s
1. Background
Cloud computing has been developing rapidly:
- IaaS
- PaaS
- SaaS
Docker technology has advanced by leaps and bounds:
- Build once, run anywhere
- Containers are fast and lightweight
- A complete ecosystem
2. What is Kubernetes
First of all, it is a new, leading distributed-architecture solution built on container technology. Kubernetes (k8s) is Google's open-source container cluster management system (derived from Google's internal system Borg). On top of Docker, it provides containerized applications with a complete set of capabilities such as deployment, resource scheduling, service discovery and dynamic scaling, greatly simplifying the management of large-scale container clusters.
Kubernetes is a complete distributed-system platform with comprehensive cluster management capabilities: multi-layered security and admission control, multi-tenancy support, transparent service registration and discovery, a built-in intelligent load balancer, powerful fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduler, and fine-grained resource quota management. Kubernetes also provides a full set of management tools covering development, deployment, testing, and operations monitoring.
In Kubernetes, the Service is the core of the distributed cluster architecture. A Service object has the following key characteristics:
• It has a unique, assigned name
• It has a virtual IP (Cluster IP, Service IP, or VIP) and a port number
• It provides some kind of remote service capability
• It is mapped to a group of container applications that provide this capability
Today a Service's processes serve requests over sockets, for example Redis, Memcached, MySQL, a web server, or a specific TCP server implementing some business logic. Although a Service is usually backed by multiple related service processes, each with its own Endpoint (IP + port), Kubernetes lets us reach them all through the Service itself. With Kubernetes' built-in transparent load balancing and failure recovery, no matter how many backend processes there are, or whether a process is redeployed to another machine after a failure, our calls to the Service are unaffected. More importantly, the Service itself does not change once created, which means that inside a Kubernetes cluster we no longer have to worry about the service's IP address changing.
Containers provide strong isolation, so it makes sense to place the group of processes backing a Service into containers. For this, Kubernetes designed the Pod object: each service process is wrapped into a corresponding Pod and runs as a container inside it. To associate Services with Pods, Kubernetes attaches a Label to each Pod (for example, a Pod running MySQL gets name=mysql, a Pod running PHP gets name=php) and then defines a Label Selector on the corresponding Service. This neatly solves the Service-to-Pod association problem.
For cluster management, Kubernetes divides the machines in a cluster into one Master node and a group of worker nodes (Nodes). The Master runs the cluster-management processes kube-apiserver, kube-controller-manager and kube-scheduler, which implement resource management, Pod scheduling, elastic scaling, security control, monitoring and error correction for the whole cluster, all fully automatically. Nodes are the worker machines that run the actual applications; the smallest unit of execution Kubernetes manages on a Node is the Pod. Each Node runs the kubelet and kube-proxy service processes, which handle creating, starting, monitoring, restarting and destroying Pods, and implement a software-mode load balancer.
Kubernetes solves the two classic problems of traditional IT systems: service scaling and service upgrades. You only need to create a Replication Controller (RC) for the Pods behind the Service you want to scale, and scaling and subsequent upgrades are handled for you. An RC definition file contains the following 3 key pieces of information:
• The definition of the target Pod
• The number of replicas the target Pod should run (Replicas)
• The Label of the target Pod to monitor
After an RC is created, Kubernetes uses the Label defined in the RC to select the matching Pod instances and continuously monitors their state and count. If the number of instances falls below the defined replica count, a new Pod is created from the Pod template in the RC and scheduled onto a suitable Node, until the number of Pod instances reaches the target. The whole process is fully automated.
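To make the three pieces of information above concrete, the fragment below is a minimal ReplicationController sketch; the name, label and image are illustrative assumptions, not taken from these notes:
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-rc              #name of the RC
spec:
  replicas: 2                 #number of replicas the target Pod should run
  selector:
    name: mysql               #Label of the target Pods to monitor
  template:                   #definition (template) of the target Pod
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306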
Advantages of Kubernetes:
- Container orchestration
- Lightweight
- Open source
- Elastic scaling
- Load balancing
• Core concepts of Kubernetes
1. Master
The Master is the management node of a k8s cluster. It manages the cluster and provides the entry point for access to cluster resource data. It holds the etcd storage service (optional), runs the API Server, Controller Manager and Scheduler processes, and is associated with the worker Nodes. The Kubernetes API Server is the key process exposing the HTTP REST interface; it is the single entry point for all create, delete, update and query operations on Kubernetes resources, and also the entry-point process for cluster control. The Kubernetes Controller Manager is the automation control center for all Kubernetes resource objects. The Kubernetes Scheduler is the process responsible for resource scheduling (Pod scheduling).
2. Node
A Node is a service node in the Kubernetes cluster architecture that runs Pods (also called an agent or minion). A Node is the unit of operation of a Kubernetes cluster: it hosts the Pods assigned to it and is the host machine on which Pods run. It is associated with the Master management node and has a name, an IP address and system resource information. It runs the Docker Engine service, the kubelet daemon and the kube-proxy load balancer.
• Every Node runs the following key processes:
• kubelet: creates and starts/stops the containers belonging to Pods
• kube-proxy: the key component implementing Service communication and load balancing in Kubernetes
• Docker Engine (Docker): the Docker engine, responsible for creating and managing containers on the local machine
Nodes can be added to a Kubernetes cluster dynamically at runtime. By default the kubelet registers itself with the Master, which is the Node management approach Kubernetes recommends. The kubelet periodically reports its own status to the Master, such as operating system, Docker version, CPU and memory, and which Pods are running, so the Master knows the resource usage of every Node and can implement an efficient, balanced scheduling strategy.
3. Pod
A Pod runs on a Node and is a group of related containers. The containers inside a Pod run on the same host and share the same network namespace, IP address and port space, so they can communicate with each other over localhost. The Pod is the smallest unit that Kubernetes creates, schedules and manages; it provides a higher level of abstraction than a container, making deployment and management more flexible. A Pod can contain one container or several related containers.
There are actually two kinds of Pods: ordinary Pods and static Pods. Static Pods are special: they are not stored in Kubernetes' etcd but in a specific file on a specific Node, and they only start on that Node. An ordinary Pod, once created, is stored in etcd and then scheduled by the Kubernetes Master and bound to a specific Node; the kubelet process on that Node instantiates it as a group of related Docker containers and starts them. By default, when a container inside a Pod stops, Kubernetes detects the problem and restarts the Pod (restarting all containers in the Pod); if the Node hosting the Pod goes down, all Pods on that Node are rescheduled onto other nodes.
4. Replication Controller
A Replication Controller manages Pod replicas and guarantees that a specified number of Pod replicas exist in the cluster. If the number of replicas in the cluster is greater than the specified number, the extra containers are stopped; if it is lower, additional containers are started until the count matches. The Replication Controller is the core mechanism behind elastic scaling, dynamic expansion and rolling upgrades.
5. Service
A Service defines a logical set of Pods and the policy for accessing that set; it is an abstraction over the real services. A Service provides a unified access entry point together with service proxying and discovery, and it associates multiple Pods carrying the same Label, so users do not need to know how the backend Pods run.
How external systems access a Service
First you need to understand the three kinds of IP in Kubernetes:
Node IP: the IP address of a Node
Pod IP: the IP address of a Pod
Cluster IP: the IP address of a Service
First, the Node IP is the IP address of the physical network interface of a node in the Kubernetes cluster. All servers on this network can communicate with each other directly over it. This also means that a machine outside the Kubernetes cluster must go through a Node IP when it accesses a node inside the cluster or a TCP/IP service on it.
Second, the Pod IP is the IP address of each Pod. It is allocated by the Docker Engine from the address range of the docker0 bridge and is usually a virtual layer-2 network.
Finally, the Cluster IP is a virtual IP, more like a "fake" IP network, for the following reasons:
• The Cluster IP applies only to the Kubernetes Service object, and Kubernetes manages and allocates the IP address
• The Cluster IP cannot be pinged; there is no "physical network object" to respond
• A Cluster IP only forms a concrete communication endpoint together with a Service Port; the Cluster IP alone cannot communicate, and Cluster IPs belong to the closed space of the Kubernetes cluster
Inside a Kubernetes cluster, communication between the Node IP network, the Pod IP network and the Cluster IP network uses a special, programmatic routing scheme designed by Kubernetes itself.
6. Label
Any API object in Kubernetes can be identified with Labels. A Label is essentially a set of key/value pairs, where both key and value are chosen by the user. Labels can be attached to all kinds of resource objects, such as Node, Pod, Service and RC; a resource object can define any number of Labels, and the same Label can be attached to any number of resource objects. Labels are the basis on which Replication Controllers and Services work: both use Labels to associate themselves with the Pods running on Nodes.
By binding one or more different Labels to a given resource object we can implement multi-dimensional resource grouping, which makes resource allocation, scheduling and configuration flexible and convenient.
Some common Labels:
• Release labels: "release":"stable", "release":"canary" ......
• Environment labels: "environment":"dev", "environment":"qa", "environment":"production"
• Tier labels: "tier":"frontend", "tier":"backend", "tier":"middleware"
• Partition labels: "partition":"customerA", "partition":"customerB"
• Quality-control labels: "track":"daily", "track":"weekly"
A Label works like the tags we are familiar with: defining a Label on a resource object is like sticking a tag on it, and afterwards a Label Selector can be used to query and filter the resource objects carrying certain Labels. In this way Kubernetes implements a simple and general object query mechanism, similar in spirit to SQL.
Important use cases of Label Selectors in Kubernetes are listed below (a short selector usage example follows the list):
o The kube-controller-manager process uses the Label Selector defined on an RC to select the Pod replicas to monitor, implementing the fully automatic control loop that keeps the replica count at the desired value
o The kube-proxy process uses a Service's Label Selector to select the matching Pods and automatically builds the request-forwarding table from each Service to its Pods, implementing the Service's intelligent load balancing
o By defining specific Labels on certain Nodes and using the nodeSelector scheduling policy in the Pod definition, the kube-scheduler process can implement "directed scheduling" of Pods
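As a small illustration of Labels and Label Selectors, the commands below attach a Label and then filter by it; the Pod name and label values are illustrative assumptions:
kubectl label pod mypod release=canary               #attach a Label to a Pod
kubectl get pods -l release=canary                    #equality-based selector
kubectl get pods -l "release in (canary,stable)"      #set-based selector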
• Kubernetes architecture and components
• Kubernetes components:
The Kubernetes Master is the control plane that schedules and manages the whole system (cluster). It contains the following components:
1. Kubernetes API Server
The entry point of the Kubernetes system. It wraps the create/delete/update/query operations on the core objects and exposes them as a RESTful API to external clients and internal components. The REST objects it maintains are persisted in etcd.
2. Kubernetes Scheduler
Selects a node for newly created Pods (i.e. assigns a machine) and is responsible for cluster resource scheduling. It is a separate component and can easily be replaced with another scheduler.
3. Kubernetes Controller
Runs the various controllers; many controllers are already provided to keep Kubernetes working correctly.
4. Replication Controller
Manages and maintains Replication Controllers, associates Replication Controllers with Pods, and keeps the actual number of running Pods equal to the replica count defined by the Replication Controller.
5. Node Controller
Manages and maintains Nodes, periodically checks Node health, and marks Nodes as failed or healthy.
6. Namespace Controller
Manages and maintains Namespaces and periodically cleans up invalid Namespaces, including the API objects under them, such as Pods and Services.
7. Service Controller
Manages and maintains Services and provides load balancing and service proxying.
8. Endpoints Controller
Manages and maintains Endpoints, associates Services with Pods, creates Endpoints as the backend of a Service, and updates the Endpoints in real time when Pods change.
9. Service Account Controller
Manages and maintains Service Accounts, creates a default Service Account for each Namespace, and creates a Service Account Secret for each Service Account.
10. Persistent Volume Controller
Manages and maintains Persistent Volumes and Persistent Volume Claims, binds a Persistent Volume to each new Persistent Volume Claim, and performs cleanup and reclamation for released Persistent Volumes.
11. Daemon Set Controller
Manages and maintains Daemon Sets, creates Daemon Pods, and ensures the Daemon Pods run normally on the specified Nodes.
12. Deployment Controller
Manages and maintains Deployments, associates Deployments with Replication Controllers, and ensures the specified number of Pods is running. When a Deployment is updated, it drives the update of the Replication Controller and the Pods.
13. Job Controller
Manages and maintains Jobs, creates one-off task Pods for Jobs, and ensures the number of completions specified by the Job is reached.
14. Pod Autoscaler Controller
Implements automatic Pod scaling: it periodically fetches monitoring data, matches it against the policy, and performs the scaling action when the conditions are met.
• A Kubernetes Node is a worker node that runs and manages the business containers. It contains the following components:
1. Kubelet
Manages the containers: the kubelet receives Pod creation requests from the Kubernetes API Server, starts and stops containers, monitors container state and reports it back to the Kubernetes API Server.
2. Kubernetes Proxy
Creates proxy services for Pods: Kubernetes Proxy fetches all Service information from the Kubernetes API Server and creates proxies accordingly, routing and forwarding requests from Services to Pods and thereby implementing the Kubernetes-level virtual forwarding network.
3. Docker
The Node must run the container runtime service (Docker).
Deploying k8s
Environment:
Operating system    IP address        Hostname   Package list
CentOS7.3-x86_64    192.168.200.200   Master     Docker kubeadm
CentOS7.3-x86_64    192.168.200.201   Minion-1   Docker
CentOS7.3-x86_64    192.168.200.202   Minion-2   Docker
Deploying the base environment
1.1 Install Docker-CE
1. Check the master system information:
[root@master ~]# hostname
master
[root@master ~]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)
[root@master ~]# uname -r
3.10.0-514.el7.x86_64
2. Check the minion system information:
[root@master ~]# hostname
master
[root@master ~]# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)
[root@master ~]# uname -r
3.10.0-862.el7.x86_64
3. Install dependency packages:
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
4. Configure the Aliyun mirror repository:
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
5. Install Docker-CE:
[root@master ~]# yum install docker-ce -y
6. Enable and start Docker-CE:
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl start docker
1.2 Install Kubeadm
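The original notes do not include the install commands for this step. A minimal sketch, assuming the Aliyun Kubernetes yum repository (the repository choice and unpinned versions are assumptions, not taken from these notes), would be:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum install -y kubelet kubeadm kubectl    #you may need to pin versions matching the v1.11.x used by kubeadm init below
systemctl enable kubelet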
1.5 Disable SELinux
[root@master ~]# setenforce 0
1.6 Configure kernel forwarding parameters
[root@master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf   #the heredoc syntax and destination file were truncated in the original; /etc/sysctl.d/k8s.conf is an assumption
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
[root@master ~]# sysctl --system
#The steps above also need to be performed on the minion nodes
Installing Kubernetes on the hosts
2.1 Initialize the cluster and pull the required images
To initialize, run the following command:
[root@master ~]# kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors='SystemVerification'
#The command above prints the join command below; run it on the minion nodes to join them to the master:
kubeadm join 192.168.200.200:6443 --token uyicwj.akb6hgdryfo1dtij --discovery-token-ca-cert-hash sha256:f26b1a713f1b10adb1e22aa129b23ea266bde550a2570e2b460070a080b42e08
2.2 Configure kubectl credentials
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
2.3 Install the Flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After it finishes, run the following command to check the current node information:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 9m v1.11.3
2.4 Node configuration
1. Run the join command generated by the master initialization above:
[root@minion ~]# kubeadm join 192.168.200.200:6443 --token uyicwj.akb6hgdryfo1dtij --discovery-token-ca-cert-hash sha256:f26b1a713f1b10adb1e22aa129b23ea266bde550a2570e2b460070a080b42e08
#If no error is reported, the join succeeded
2. After joining, check again on the master:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 38m v1.11.3
minion Ready
At this point the master and minions are configured. However, to create and manage Pods you still need to create users and grant them the appropriate permissions.
2.5 Create an nginx Pod for testing
1. Create the nginx Pod (dry run first):
[root@master ~]# kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1 --dry-run=true
deployment.apps/nginx-deploy created (dry run)
#nginx-deploy: the name of the deployment/pod
#--image=nginx:1.14-alpine: which image to use
#--port=80: the port to expose (it is exposed by default as well)
#--replicas=1: how many Pods to create
#--dry-run=true: dry-run mode, similar to a test; nothing is actually created
2. The command below actually creates the Pod; just drop --dry-run=true:
[root@master ~]# kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1
deployment.apps/nginx-deploy created
3. Check the deployment and the Pod:
[root@master ~]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deploy 1 1 1 1 4m
#DESIRED: the desired number of replicas
#CURRENT: the number already created
#UP-TO-DATE: the number that are up to date
#AVAILABLE: the number currently running
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deploy-5b595999-5p496 1/1 Running 0 6m
4. Show detailed information about the running Pod:
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deploy-5b595999-5p496 1/1 Running 0 7m 10.244.2.2 minion-2
5. Verify:
[root@minion-1 ~]# curl 10.244.2.2
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to
nginx.org.
Commercial support is available at
nginx.com.
Thank you for using nginx.
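The command that creates the Service itself appears to have been lost from these notes; the two comments below (--port, --protocol) refer to it. A plausible reconstruction, assuming the Service named nginx is simply exposed from the nginx-deploy deployment, would be:
kubectl expose deployment nginx-deploy --name=nginx --port=80 --target-port=80 --protocol=TCP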
#--port=80: the Service port
#--protocol=TCP: the protocol to use; TCP is the default
2. Check the Service information:
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1
nginx ClusterIP 10.101.101.195
3. Test:
[root@master ~]# curl 10.101.101.195
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to
nginx.org.
Commercial support is available at
nginx.com.
Thank you for using nginx.
#--replicas: how many Pods to create
#--restart: the restart policy
2. Test that the Pod service can be reached via the Service name; the DNS that ships with k8s automatically resolves the Service name to the cluster IP, so access to the service is unaffected even if the Pod is recreated. The client Pod used for the test is sketched below.
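The interactive client Pod itself is not shown in the original notes; a minimal sketch, assuming a busybox client, would be started like this (the / # prompt in the lines below is the shell inside that client container):
kubectl run client --image=busybox --restart=Never -it -- /bin/sh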
/ # wget -O - -q http://nginx:80/
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to
nginx.org.
Commercial support is available at
nginx.com.
Thank you for using nginx.
6. Check the Service status:
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1
myapp ClusterIP 10.105.254.230
nginx ClusterIP 10.101.101.195
7. Edit the myapp Service configuration:
[root@master ~]# kubectl edit svc myapp
#Please edit the object below. Lines beginning with a '#' will be ignored,
#and an empty file will abort the edit. If an error occurs while saving this file will be
#reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2018-09-26T10:19:31Z
labels:
run: myapp
name: myapp
namespace: default
resourceVersion: "19523"
selfLink: /api/v1/namespaces/default/services/myapp
uid: a8a1b74a-c175-11e8-b2c9-000c2929855b
spec:
clusterIP: 10.105.254.230
ports:
3.4 Write yaml files and operate through them
1. Output the information of a given Pod in yaml format:
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 2h
myapp-848b5b879b-gd4ll 1/1 Running 0 2h
myapp-848b5b879b-jn5xt 1/1 Running 0 2h
myapp-848b5b879b-lhp74 1/1 Running 0 2h
myapp-ser-759b978dcf-d7fvg 1/1 Running 0 2h
myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 2h
nginx-deploy-5b595999-5wxpj 1/1 Running 1 19h
[root@master ~]# kubectl get pod myapp-848b5b879b-gd4ll -o yaml
apiVersion: v1
kind: Pod #the object type
metadata: #metadata
creationTimestamp: 2018-09-27T03:24:31Z
generateName: myapp-848b5b879b-
labels:
pod-template-hash: "4046164356"
run: myapp
name: myapp-848b5b879b-gd4ll
namespace: default
ownerReferences:
spec: defines the desired state specified by the user
status: the current state; this field is maintained by Kubernetes itself and does not need to be modified
3. The definition and meaning of each field can be obtained with the following command:
[root@master ~]# kubectl explain pods.apiVersion #get help; the keyword is kubectl explain
KIND: Pod
VERSION: v1
FIELD: apiVersion
DESCRIPTION:
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
4. Write a yaml file (the truncated containers section is completed as a sketch below):
[root@master manifors]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-daemo
namespace: default
labels:
app: myapp
tier: frontend
spec:
containers:
#pod-daemo: the Pod name
#release=canary: the label key and value (key=value)
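The containers section of myapp.yaml above was cut off. A minimal sketch consistent with the 2/2 READY status shown in step 3 (two containers; the names and images are assumptions) would be:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
  - name: busybox
    image: busybox:latest
    command: ["/bin/sh", "-c", "sleep 3600"]   #second container just sleeps so the Pod stays 2/2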
3. Check:
[root@master manifors]# kubectl get pods -l app --show-labels
NAME READY STATUS RESTARTS AGE LABELS
pod-daemo 2/2 Running 1 1h app=myapp,release=canary,tier=frontend
4. Modify the label:
[root@master manifors]# kubectl label pod pod-daemo release=stable --overwrite
[root@master manifors]# kubectl get pods -l app --show-labels
NAME READY STATUS RESTARTS AGE LABELS
pod-daemo 2/2 Running 1 1h app=myapp,release=stable,tier=frontend
5. Label a node so that the Pod is only allowed to run on nodes carrying the specified label:
[root@master manifors]# kubectl label node minion-1 dsiktype=ssd
node/minion-1 labeled
[root@master manifors]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready master 1d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
minion-1 Ready
minion-2 Ready
6. Modify the Pod file (the added nodeSelector is sketched after the manifest below):
[root@master manifors]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-daemo
namespace: default
labels:
app: myapp
tier: frontend
spec:
containers:
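The rest of the modified file was lost. A minimal sketch of what step 6 adds, using the node label from step 5 (note the label key was typed as dsiktype there) and an assumed image, would be:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    dsiktype: ssd        #must match the label added to minion-1 in step 5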
9. Adding Annotations (resource annotations):
[root@master manifors]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-daemo
namespace: default
labels:
app: myapp
tier: frontend
annotations:
minion/created-by: "cluster admin"
spec:
containers:
3.7 Important behaviors in the Pod lifecycle:
1. A brief introduction to probes
Init containers
Container probes:
Liveness: probes whether the container is alive
Readiness: probes whether the main container is ready to serve requests
Probe types:
(1) exec
(2) httpGet
(3) tcpSocket
2. An exec probe example (the truncated containers section is completed as a sketch below):
[root@master manifors]# vim liveness-exec.yaml
apiVersion: v1
kind: Pod
metadata:
name: liveness-exec
namespace: default
spec:
containers:
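The containers section of liveness-exec.yaml was cut off. A minimal sketch of an exec liveness probe (the image and the probed file are assumptions) would be:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   #the probe fails once the file is removed
      initialDelaySeconds: 1
      periodSeconds: 3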
apiVersion: v1
kind: Pod
metadata:
name: liveness-httpget
namespace: default
spec:
containers:
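The containers section of the liveness-httpget Pod was likewise cut off. A minimal httpGet probe sketch (the image and path are assumptions) would be:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html      #probe succeeds while this page returns 2xx/3xx
      initialDelaySeconds: 1
      periodSeconds: 3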
Testing rolling updates:
Scale out by patching:
[root@master manifors]# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'
[root@master manifors]# kubectl get pods
NAME READY STATUS RESTARTS AGE
client 0/1 Error 0 2d
myapp-848b5b879b-gd4ll 1/1 Running 2 2d
myapp-848b5b879b-jn5xt 1/1 Running 2 2d
myapp-848b5b879b-lhp74 1/1 Running 2 2d
myapp-deploy-67f6f6b4dc-5wbvm 1/1 Running 0 20s
myapp-deploy-67f6f6b4dc-d2frs 1/1 Running 0 20s
myapp-deploy-67f6f6b4dc-gsndw 1/1 Running 0 30m
myapp-deploy-67f6f6b4dc-tvxvw 1/1 Running 0 30m
myapp-deploy-67f6f6b4dc-z7hlj 1/1 Running 0 30m
myapp-ser-759b978dcf-d7fvg 1/1 Running 2 2d
myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 2d
nginx-deploy-5b595999-5wxpj 1/1 Running 3 2d
readliness-httpget 1/1 Running 1 1d
Modify the update strategy by patching:
[root@master manifors]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
[root@master manifors]# kubectl describe deployment myapp-deploy
Name: myapp-deploy
Namespace: default
CreationTimestamp: Sat, 29 Sep 2018 15:53:52 +0800
Labels: app=myapp
release=canary
Annotations: deployment.kubernetes.io/revision=2
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"myapp-deploy","namespace":"default"},"spec":{"replicas":3,"selector":{...
Selector: app=myapp,release=canary
Replicas: 5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 1 max surge
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: default
spec:
clusterIP: 10.97.97.97
type: ClusterIP
selector:
app: redis
role: ds
ports:
apiVersion: v1
kind: Service
metadata:
name: myapp
namespace: default
spec:
selector:
app: myapp
release: canary
ports:
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deploy
namespace: default
spec:
replicas: 3
selector:
matchLabels:
app: myapp
release: canary
template:
metadata:
labels:
app: myapp
release: canary
spec:
containers:
[root@master manifors]# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
ingress-myapp myapp.zhouhao.com 80 1m
[root@master manifors]# kubectl describe ingress
Name: ingress-myapp
Namespace: default
Address:
Default backend: default-http-backend:80 (
Rules:
Host Path Backends
myapp.zhouhao.com
myapp:80 (
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-myapp","namespace":"default"},"spec":{"rules":[{"host":"myapp.zhouhao.com","http":{"paths":[{"backend":{"serviceName":"myapp","servicePort":80},"path":null}]}}]}}
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
Normal CREATE 2m nginx-ingress-controller Ingress default/ingress-myapp
Check whether the content has been written into the ingress-nginx configuration file automatically, and verify on the host by resolving the domain name.
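The Ingress manifest itself is not included in these notes, but it can be reconstructed from the last-applied-configuration annotation shown in the describe output above; a sketch equivalent to that annotation is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.zhouhao.com
    http:
      paths:
      - backend:
          serviceName: myapp
          servicePort: 80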
Kubernetes storage volumes:
5.1 Local storage persistence
apiVersion: v1
kind: Pod
metadata:
name: myapp-deploy
namespace: default
labels:
app: myapp
tier: frontend
annotations:
zhouhao.com/created-byz: "cluster admin"
spec:
containers:
apiVersion: v1
kind: Pod
metadata:
name: pod-vol-hostpath
namespace: default
spec:
containers:
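The volume part of pod-vol-hostpath was lost; a minimal hostPath sketch (the image, mount path and host directory are assumptions) would be:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1      #directory on the Node
      type: DirectoryOrCreate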
5.2 Use NFS for persistent storage
apiVersion: v1
kind: Pod
metadata:
name: pod-vol-nfs
namespace: default
spec:
containers:
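The volume part of pod-vol-nfs was likewise lost; a minimal NFS sketch, reusing the export-path style and server seen in the PV definitions below (the exact export path and image are assumptions), would be:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    nfs:
      path: /data/volumes/v1       #NFS export, assumed
      server: master               #NFS server, same host as in the PV examples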
4. Write the yaml file (the file defines pv001 through pv005; only pv005 is shown here):
[root@master volumes]# vim pv-daemon.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv005
labels:
name: pv005
spec:
nfs:
path: /data/volumes/v5
server: master
accessModes: ["ReadWriteOnce","ReadWriteMany"]
capacity:
storage: 10Gi
[root@master volumes]# kubectl apply -f pv-daemon.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 2Gi RWO,ROX,RWX Retain Available 19s
pv002 5Gi RWO,ROX,RWX Retain Available 19s
pv003 20Gi RWO,ROX,RWX Retain Available 19s
pv004 10Gi RWO,ROX,RWX Retain Available 19s
pv005 10Gi RWO,RWX Retain Available 19s
5. Define the PVC (the file also contains a Pod; the missing parts are sketched after it):
[root@master volumes]# vim pvc-daemon.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-vol-pvc
namespace: default
spec:
containers:
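Judging from the apply output below, pvc-daemon.yaml also defines the PVC mypvc and mounts it in the Pod. A sketch of the missing parts (the image, mount path and requested size are assumptions; any request that pv004, 10Gi, can satisfy would bind the same way) would be:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi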
[root@master volumes]# kubectl apply -f pvc-daemon.yaml
persistentvolumeclaim/mypvc unchanged
pod/pod-vol-pvc created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 2Gi RWO,ROX,RWX Retain Available 21m
pv002 5Gi RWO,ROX,RWX Retain Available 21m
pv003 20Gi RWO,ROX,RWX Retain Available 21m
pv004 10Gi RWO,ROX,RWX Retain Bound default/mypvc 21m
pv005 10Gi RWO,RWX Retain Available 21m
[root@master volumes]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc Bound pv004 10Gi RWO,ROX,RWX 6m
5.4 Create a ConfigMap:
Ways to configure containerized applications:
1. Custom command-line arguments:
args:
2. Bake the configuration file directly into the image;
3. Environment variables
(1) Cloud-native applications can usually load their configuration directly from environment variables;
(2) Use an entrypoint script to pre-process environment variables into settings in the configuration file;
4. Storage volumes
Approach one:
1. Create a ConfigMap from the command line:
[root@master volumes]# kubectl create configmap nginx --from-literal=nginx_port=8080 --from-literal=server_name=myapp.zhouhao.com
configmap/nginx created
[root@master volumes]# kubectl get cm
NAME DATA AGE
nginx 2 8s
[root@master volumes]# kubectl describe cm nginx
Name: nginx
Namespace: default
Labels:
Annotations:
myapp.zhouhao.com
Events:
Approach two:
1. Create a configuration file:
[root@master configmap]# vim www.conf
server {
server_name myapp.zhouhao.com;
listen 80;
root /data/web/html;
}
2. Create the ConfigMap from the file:
[root@master configmap]# kubectl create configmap nginx-www --from-file=./www.conf
configmap/nginx-www created
[root@master configmap]# kubectl get cm
NAME DATA AGE
nginx 2 5m
nginx-www 1 8s
[root@master configmap]# kubectl describe cm nginx-www
Name: nginx-www
Namespace: default
Labels:
Annotations:
server {
server_name myapp.zhouhao.com;
listen 80;
root /data/web/html;
}
Approach two is file-based.
1. Use the variables defined in the nginx ConfigMap above in the Pods below (the missing container sections are sketched after the two manifests):
[root@master configmap]# vim pod-deploy.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-cm-1
namespace: default
labels:
app: myapp
tier: frontend
annotations:
zhouhao.com/created-byz: "cluster admin"
spec:
containers:
apiVersion: v1
kind: Pod
metadata:
name: pod-cm-2
namespace: default
labels:
app: myapp
tier: frontend
annotations:
zhouhao.com/created-byz: "cluster admin"
spec:
containers:
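The containers sections of pod-cm-1 and pod-cm-2 were lost. A sketch of the two usual patterns, assuming pod-cm-1 injects the ConfigMap keys as environment variables and pod-cm-2 mounts the nginx ConfigMap as a volume under /etc/nginx/config.d (which matches the paths inspected below), would be:
#pod-cm-1: inject ConfigMap keys as environment variables
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    env:
    - name: NGINX_SERVER_PORT
      valueFrom:
        configMapKeyRef:
          name: nginx
          key: nginx_port
    - name: NGINX_SERVER_NAME
      valueFrom:
        configMapKeyRef:
          name: nginx
          key: server_name
#pod-cm-2: mount the ConfigMap as a volume
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/config.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx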
#Please edit the object below. Lines beginning with a '#' will be ignored,
#and an empty file will abort the edit. If an error occurs while saving this file will be
#reopened with the relevant failures.
#
apiVersion: v1
data:
nginx_port: "8080" #将8080修改成80
server_name: myapp.zhouhao.com
kind: ConfigMap
metadata:
creationTimestamp: 2018-10-10T07:29:15Z
name: nginx
namespace: default
resourceVersion: "125157"
selfLink: /api/v1/namespaces/default/configmaps/nginx
uid: 30c4a9d7-cc5e-11e8-b4a9-000c2929855b
4. Check (inside the container):
/etc/nginx/config.d # cat nginx_port
8080/etc/nginx/config.d #
It appears unchanged; you actually need to leave the directory and re-enter it before checking again:
8080/etc/nginx/config.d # cd ../
/etc/nginx # cd config.d/
/etc/nginx/config.d # cat nginx_port
80/etc/nginx/config.d #
Now it has changed. The change takes a while to show up, because synchronization happens in between.
5. Next, using the nginx-www ConfigMap above as an example, create a Pod that uses its content as configuration (the missing container section is sketched after the manifest):
[root@master configmap]# vim pod-cm-3.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-cm-3
namespace: default
labels:
app: myapp
tier: frontend
annotations:
zhouhao.com/created-byz: "cluster admin"
spec:
containers:
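The containers section of pod-cm-3.yaml was lost; a sketch that mounts the nginx-www ConfigMap into /etc/nginx/conf.d (matching the nginx configuration dump shown below) would be:
  containers:
  - name: myapp
    image: nginx:1.14-alpine
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/conf.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx-www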
user nginx;
worker_processes 1;
。。。。。。。。。。。。。
#configuration file /etc/nginx/conf.d/www.conf:
server {
server_name myapp.zhouhao.com;
listen 80;
root /data/web/html;
}
6. Based on the information in the configuration, create the web root directory and a test page, then access it:
/etc/nginx/conf.d # mkdir /data/web/html -p
/etc/nginx/conf.d # vi /data/web/html/index.html
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv005
labels:
name: pv005
spec:
nfs:
path: /data/volumes/v5
server: master
accessModes: ["ReadWriteOnce","ReadWriteMany"]
capacity:
storage: 10Gi
[root@master volumes]# kubectl apply -f pv-daemon.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,ROX,RWX Retain Available 10s
pv002 5Gi RWO,ROX,RWX Retain Available 10s
pv003 5Gi RWO,ROX,RWX Retain Available 10s
pv004 10Gi RWO,ROX,RWX Retain Available 10s
pv005 10Gi RWO,RWX Retain Available 10s
2. Create the StatefulSet controller (most of the file was lost; a reconstruction is sketched after the fragment below):
[root@master mandor]# vim statefulSet-daemon-yaml
apiVersion: v1
kind: Service
metadata:
name: myapp
labels:
app: myapp
spec:
ports:
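Most of statefulSet-daemon-yaml was cut off. A sketch consistent with the later output (headless Service myapp, StatefulSet myapp with selector app: myapp-pod, 3 replicas, image ikubernetes/myapp:v1, volumeClaimTemplates named myappdata) would be; the port names, mount path and requested size are assumptions:
  ports:
  - port: 80
    name: web
  clusterIP: None              #headless Service, required by the StatefulSet
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi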
Name: myapp-0.myapp.default.svc.cluster.local
Address 1: 10.244.2.33 myapp-0.myapp.default.svc.cluster.local
/ # nslookup myapp-1.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name: myapp-1.myapp.default.svc.cluster.local
Address 1: 10.244.1.28 myapp-1.myapp.default.svc.cluster.local
/ # nslookup myapp-2.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name: myapp-2.myapp.default.svc.cluster.local
Address 1: 10.244.2.34 myapp-2.myapp.default.svc.cluster.local
You will find that all of them resolve to the Pod IPs.
Pod name resolution format:
myapp-1.myapp.default.svc.cluster.local
podName.serviceName.namespace.clusterSuffix
5.7 When Pods are scaled out, PVCs are created automatically and matched to PVs
1. Scale out:
[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
[root@master mandor]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled
[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
2. You can see that replicas 3 and 4 are scaled out:
[root@master mandor]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myappdata-myapp-0 Bound pv005 10Gi RWO,RWX 46m
myappdata-myapp-1 Bound pv001 5Gi RWO,ROX,RWX 46m
myappdata-myapp-2 Bound pv003 5Gi RWO,ROX,RWX 46m
myappdata-myapp-3 Bound pv002 5Gi RWO,ROX,RWX 59s
myappdata-myapp-4 Bound pv004 10Gi RWO,ROX,RWX 57s
[root@master mandor]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-1 6h
pv002 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-3 6h
pv003 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-2 6h
pv004 10Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-4 6h
pv005 10Gi RWO,RWX Retain Bound default/myappdata-myapp-0 6h
The PVCs are also created automatically, and PVs are matched automatically.
5.8 Partitioned Pod updates
A StatefulSet supports partitioned updates. The partition refers to the ordinal at the end of the Pod name; in myapp-1 the ordinal is 1. A partitioned update defines a partition number: Pods whose ordinal is greater than or equal to that number are updated. For example, with partition 4 only Pods with ordinal >= 4 are updated, and with partition 0 all Pods are updated. For example:
1. Check the default update strategy:
[root@master mandor]# kubectl describe sts myapp
Name: myapp
Namespace: default
CreationTimestamp: Thu, 11 Oct 2018 16:58:31 +0800
Selector: app=myapp-pod
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"replicas":3,"selector":{"match...
Replicas: 5 desired | 5 total
Update Strategy: RollingUpdate #rolling update by default; no partition is set
Pods Status: 5 Running / 0 Waiting / 0 Succeeded / 0 Failed
。。。。。。
2. Define the partition:
[root@master mandor]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
statefulset.apps/myapp patched
The above uses a patch; pay attention to the quoting.
[root@master mandor]# kubectl describe sts myapp
Name: myapp
Namespace: default
CreationTimestamp: Thu, 11 Oct 2018 16:58:31 +0800
Selector: app=myapp-pod
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"replicas":3,"selector":{"match...
Replicas: 5 desired | 5 total
Update Strategy: RollingUpdate
Partition: 4 #the partition value is now set; ordinals >= 4 will be updated
Pods Status: 5 Running / 0 Waiting / 0 Succeeded / 0 Failed
3. Start the update test:
[root@master mandor]# kubectl set image sts myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
myapp-4 1/1 Terminating 0 24m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
You can see that it first terminates myapp-4 and then recreates and starts it.
4. Verify the versions:
[root@master mandor]# kubectl describe pod myapp-4
。。。。。。。。。。。
Containers:
myapp:
Container ID: docker://bb8b5d4e73459dd39ad6abce52c72402a80dfbbc938fa7758766f3e377f845af
Image: ikubernetes/myapp:v2
Image ID: docker-pullable://ikubernetes/myapp@sha256:85a2b81a62f09a414ea33b74fb8aa686ed9b168294b26b4c819df0be0712d358
。。。。。。。
[root@master mandor]# kubectl describe pod myapp-2
Name: myapp-2
Namespace: default
Node: minion-1/192.168.200.201
Start Time: Thu, 11 Oct 2018 16:58:35 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-854b598c58
statefulset.kubernetes.io/pod-name=myapp-2
Annotations:
Status: Running
IP: 10.244.2.34
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://4fd66e973b1bb74be30b9d3ff9ceb9515a57197669389784e6e80449e788203d
Image: ikubernetes/myapp:v1
[root@master mandor]# kubectl describe pod myapp-0
Name: myapp-0
Namespace: default
Node: minion-1/192.168.200.201
Start Time: Thu, 11 Oct 2018 16:58:31 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-854b598c58
statefulset.kubernetes.io/pod-name=myapp-0
Annotations:
Status: Running
IP: 10.244.2.33
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://1886176fc8e698327497e15eb3e452e04092805fc3c11b71ea844d26e439ad86
Image: ikubernetes/myapp:v1
[root@master mandor]# kubectl describe pod myapp-3
Name: myapp-3
Namespace: default
Node: minion-2/192.168.200.202
Start Time: Thu, 11 Oct 2018 17:12:38 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-854b598c58
statefulset.kubernetes.io/pod-name=myapp-3
Annotations:
Status: Running
IP: 10.244.1.29
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://426436b9f8ea96c6e55e8af00790a1cec7b9620bb0a3843c0fc8df869106d86f
Image: ikubernetes/myapp:v1
5. From the above, only myapp-4 has been updated to the new version. To update all of them, patch the partition value to 0 in the same way.
As follows:
[root@master mandor]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
statefulset.apps/myapp patched
[root@master mandor]# kubectl set image sts myapp myapp=ikubernetes/myapp:v2
[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
myapp-4 1/1 Terminating 0 24m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
myapp-3 1/1 Terminating 0 41m
myapp-3 0/1 Terminating 0 41m
myapp-3 0/1 Terminating 0 41m
myapp-3 0/1 Terminating 0 41m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-2 1/1 Terminating 0 55m
myapp-2 0/1 Terminating 0 55m
myapp-2 0/1 Terminating 0 55m
myapp-2 0/1 Terminating 0 55m
myapp-2 0/1 Pending 0 0s
myapp-2 0/1 Pending 0 0s
myapp-2 0/1 ContainerCreating 0 0s
myapp-2 1/1 Running 0 3s
myapp-1 1/1 Terminating 0 55m
myapp-1 0/1 Terminating 0 55m
myapp-1 0/1 Terminating 0 55m
myapp-1 0/1 Terminating 0 55m
myapp-1 0/1 Pending 0 0s
myapp-1 0/1 Pending 0 0s
myapp-1 0/1 ContainerCreating 0 0s
myapp-1 1/1 Running 0 1s
myapp-0 1/1 Terminating 0 55m
myapp-0 0/1 Terminating 0 55m
myapp-0 0/1 Terminating 0 55m
myapp-0 0/1 Terminating 0 55m
myapp-0 0/1 Pending 0 0s
myapp-0 0/1 Pending 0 0s
myapp-0 0/1 ContainerCreating 0 0s
myapp-0 1/1 Running 0 2s
The update proceeds starting from myapp-3 and works its way down.
[root@master mandor]# kubectl describe pod myapp-0
Name: myapp-0
Namespace: default
Node: minion-1/192.168.200.201
Start Time: Thu, 11 Oct 2018 17:54:24 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-58656f57bf
statefulset.kubernetes.io/pod-name=myapp-0
Annotations:
Status: Running
IP: 10.244.2.38
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://d59df10c758f1164a21b070cd4aa3783cb3a2c6aa32e90688e0575cacd069c86
Image: ikubernetes/myapp:v2
K8s RBAC access control
6.1 Create a user and test it
1. Create a k8s ServiceAccount:
[root@master mandor]# kubectl create serviceaccount admin
serviceaccount/admin created
[root@master mandor]# kubectl get sa
NAME SECRETS AGE
admin 1 9s
default 1 4d
[root@master mandor]# kubectl describe sa admin
Name: admin
Namespace: default
Labels:
Annotations:
Image pull secrets:
Mountable secrets: admin-token-v8p8k
Tokens: admin-token-v8p8k
Events:
2. Create a private key:
[root@master mandor]# (umask 077;openssl genrsa -out zhouhao.key 2048)
Generating RSA private key, 2048 bit long modulus
.................................................................+++
...................................................................................+++
e is 65537 (0x10001)
[root@master mandor]# openssl req -new -key zhouhao.key -out zhouhao.csr -subj "/CN=zhouhao"
[root@master pki]# openssl x509 -req -in zhouhao.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out zhouhao.crt -days 365
Signature ok
subject=/CN=zhouhao
Getting CA Private Key
[root@master pki]# openssl x509 -in zhouhao.crt -text -noout
Certificate:
Data:
Version: 1 (0x0)
Serial Number: 15289891927309345937 (0xd4309bb2d562e491)
Signature Algorithm: sha1WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Oct 12 10:14:41 2018 GMT
Not After : Oct 12 10:14:41 2019 GMT
。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。
3. Create the user (credentials):
[root@master pki]# kubectl config set-credentials zhouhao --client-certificate=./zhouhao.crt --client-key=./zhouhao.key --embed-certs=true
User "zhouhao" set.
[root@master pki]# kubectl config view
apiVersion: v1
clusters:
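The notes jump from here to section 6.3, so the step that creates the context is missing. Since a context named zhouhao@kubernetes is switched to later, a plausible reconstruction of that step would be:
kubectl config set-context zhouhao@kubernetes --cluster=kubernetes --user=zhouhao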
6.3 Create a role and bind the user to it:
1. Generate a yaml-format file from the command line and then edit it:
[root@master pki]# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: pods-reader
rules:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pods-reader
namespace: default
rules:
pods [] [] [get list watch]
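The rules section was cut off above; a sketch matching the describe output line just shown ("pods [] [] [get list watch]") would be:
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]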
3. Create a rolebinding to bind the user to the role:
[root@master mandor]# kubectl create rolebinding zhouhao-read-pods --role=pods-reader --user=zhouhao
rolebinding.rbac.authorization.k8s.io/zhouhao-read-pods created
[root@master mandor]# kubectl create rolebinding zhouhao-read-pods --role=pods-reader --user=zhouhao --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: zhouhao-read-pods
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pods-reader
subjects:
User zhouhao
4. Switch to the user and verify the permissions:
[root@master ~]# kubectl config use-context zhouhao@kubernetes
Switched to context "zhouhao@kubernetes".
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-deploy-67f6f6b4dc-pz4bd 1/1 Running 0 4h
myapp-deploy-67f6f6b4dc-smw9t 1/1 Running 0 4h
myapp-deploy-67f6f6b4dc-twgh6 1/1 Running 0 4h
5. Only the default namespace was authorized, so querying other namespaces returns an error:
[root@master ~]# kubectl get pods -n kube-system
No resources found.
Error from server (Forbidden): pods is forbidden: User "zhouhao" cannot list pods in the namespace "kube-system"
6.4 Grant permissions through a ClusterRole
1. Create the ClusterRole:
[root@master ~]# kubectl create clusterrole cluster-readers --verb=get,list,watch --resource=pods -o yaml --dry-run >clusterrole-yaml
[root@master ~]# vim clusterrole-yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-readers
rules:
4. Bind the ClusterRole:
[root@master ~]# kubectl create clusterrolebinding zhouhao-read-all-pods --clusterrole=cluster-readers --user=zhouhao --dry-run -o yaml>clusterrolebinding-demo.yaml
[root@master mandor]# vim ~/clusterrolebinding-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: zhouhao-read-all-pods
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-readers
subjects:
[root@master mandor]# vim rolebinding-cluster.yaml
[root@master mandor]# kubectl create rolebinding zhouhao-read-pods --clusterrole=cluster-readers --user=zhouhao --dry-run -o yaml >rolebinding-cluster.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: zhouhao-read-pods
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-readers
subjects:
6. Authentication is required to log in; copy the config file from the system to the workstation:
[root@master dashboard]# ls ~/.kube/
cache config http-cache
[root@master dashboard]# sz ~/.kube/config
Then select it on the dashboard login page.
7.2 Log in to the dashboard with a token
1. Create a certificate and private key for the dashboard:
[root@master dashboard]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077;openssl genrsa -out dashboard.key 2048)
Generating RSA private key, 2048 bit long modulus
...+++
..............+++
e is 65537 (0x10001)
[root@master pki]# openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=zhouhao/CN=dashboard"
[root@master pki]# openssl x509 -req -in dashboard.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out dashboard.crt -days 365
Signature ok
subject=/O=zhouhao/CN=dashboard
Getting CA Private Key
[root@master pki]# kubectl create secret generic dashboard-cert -n kube-system --from-file=dashboard.crt=./dashboard.crt --from-file=dashboard.key=./dashboard.key
secret/dashboard-cert created
2. Use token-based login:
[root@master pki]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@master pki]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created
3. Get the token value:
[root@master pki]# kubectl describe secret dashboard-admin-token-d8mc4 -n kube-system
Name: dashboard-admin-token-d8mc4
Namespace: kube-system
Labels:
Annotations: kubernetes.io/service-account.name=dashboard-admin
kubernetes.io/service-account.uid=ab682221-d058-11e8-8f2d-000c2929855b
Type: kubernetes.io/service-account-token
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZDhtYzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYWI2ODIyMjEtZDA1OC0xMWU4LThmMmQtMDAwYzI5Mjk4NTViIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.jClu0HHKv81G7SSaxxAb_-i0cXhR1_BkAUqjxKgLjH98w_Z4OE_amhvZu93S4uYM4F3nDGfMgXp5Vt2i4vkS3pnLgO2wdcfzMr0--VzAPhywLR2BBGL9N0u9wokSH4znp1KFmmvPy8KdAjlXi_IMp7hcNrSYgGSnF9XBKWLo2JiMsE4YTA_mgLIml8rAIjw-5REyG9o4RPNL0VtBDO1Ny4NA7fpYWj-r_iKlsXHPvnX0Pe7AtzY62MPRXR0Q_VvEwbH32DiYl6ciXMJxQnPi6mxgHQRXk6luY-_EERGvo9pn3dBmJs_moPSsNjSIE7EP0F-W7tsUtcOEMX15L4e8Ow
The long token string above is the value to use.
4. Log in with the token:
5. Choose token login, paste the value, and sign in.
K8s networking and advanced scheduling
8.1 Managing flannel and calico
1. Configure the flannel network plugin:
[root@master ~]# vim kube-flannel.yml
。。。。。
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan",
"Directrouting": true #找到上面内容添加这行改成直接路由模式,默认是false。
。。。。。。。。。。。。。。
Or:
[root@master ~]# vim kube-flannel.yml
。。。。。
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "host-gw" #将vxlan改成host-gw也是一样,区别,这种形式节点不能跨网段,而上述可以跨网段。
。。。。。。。。。。。。。。
2. Network policy:
#do this if RBAC is supported; if there is no RBAC you can skip it and run the next step directly
[root@master ~]# kubectl apply -f \
https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/rbac.yaml
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
3. Install and deploy calico (canal)
Official docs: https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/flannel
[root@master ~]# kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
[root@master ~]# kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/canal/canal.yaml
Define policies so that Pods in different namespaces cannot access each other freely.
4. Create two namespaces:
[root@master networkpolicy]# kubectl create namespace dev
namespace/dev created
[root@master networkpolicy]# kubectl create namespace port
namespace/port created
5. Create Pods:
[root@master networkpolicy]# vim pod_a.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-1
spec:
containers:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-all-ingress
namespace: dev
spec:
podSelector: {}
ingress: #inbound rules; deny-all-ingress defines none, so all inbound traffic to the selected Pods in the dev namespace is denied
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: all-myapp-ingress
spec:
podSelector:
matchLabels:
app: myapp #match Pods labelled app=myapp
ingress: #define inbound rules; for an outbound policy use egress instead of ingress (the rule body is sketched below)
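The rule body of all-myapp-ingress was lost. A sketch that allows only port 80, which matches the test below where port 80 is reachable and 443 is not, would be:
  - ports:
    - protocol: TCP
      port: 80           #only port 80 is allowed in; other ports such as 443 stay blocked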
[root@master namespace]# kubectl apply -f allow-myapp-ingress.yaml
networkpolicy.networking.k8s.io/all-myapp-ingress created
13. Access test:
[root@master namespace]# curl 10.244.1.6
Hello MyApp | Version: v1 | Pod Name
[root@master namespace]# curl 10.244.1.6:443
#port 80 is reachable, port 443 cannot be accessed
[root@minion-2 ~]# curl 10.244.1.6
From minion-2 (192.168.200.202), port 80 cannot be accessed.
8.2 Advanced scheduling
1. Node selectors: nodeSelector, nodeName
2. Node affinity scheduling: nodeAffinity comes in hard and soft affinity. Hard affinity means the condition must be satisfied for the Pod to be scheduled; soft affinity means the condition is preferred, but the Pod can still be scheduled if it is not met.
Example:
8.2.1 Schedule Pods by node label
1. Use nodeSelector
[root@master schedule]# vim pod-demon
apiVersion: v1
kind: Pod
metadata:
name: pod-demon
namespace: default
labels:
app: myapp
tier: frontend
annotations:
zhouhao.com/vreated-by: "cluster admin"
spec:
containers:
apiVersion: v1
kind: Pod
metadata:
name: pod-affinity-demo
namespace: default
labels:
app: myapp
tier: frontend
spec:
containers:
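The affinity part of pod-affinity-demo was lost; a minimal sketch of a hard (required) node affinity rule, with the key, values and image assumed, would be:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   #hard affinity
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values: ["foo", "bar"]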
4. The Pod above just needs to be created normally.
apiVersion: v1
kind: Pod
metadata:
name: pod-second
labels:
app: db
tier: db
spec:
containers:
apiVersion: v1
kind: Pod
metadata:
name: pod-first
labels:
app: myapp
tier: frontend
spec:
containers:
apiVersion: v1
kind: Pod
metadata:
name: pod-second
labels:
app: db
tier: db
spec:
containers:
8.2.4 Taint-based scheduling
The effect of a Taint defines how Pods are repelled:
NoSchedule: only affects scheduling; existing Pods are not affected
NoExecute: affects both scheduling and existing Pods; Pods that do not tolerate the taint are evicted
PreferNoSchedule: a Pod that does not tolerate the taint can still run on this node if it really has nowhere else to be scheduled
1. Run the deployment:
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-69b47bc96d-479xv 1/1 Running 0 9s 10.244.2.19 minion-2
myapp-deploy-69b47bc96d-dqg8h 1/1 Running 0 9s 10.244.2.20 minion-2
myapp-deploy-69b47bc96d-w8ksl 1/1 Running 0 9s 10.244.1.24 minion-1
You can see Pods running on both nodes. Now let's taint the nodes and see the effect.
2. Taint minion-1:
[root@master schedule]# kubectl taint node minion-1 node-type=prod:NoSchedule
node/minion-1 tainted
3. Run the deployment again and observe:
[root@master schedule]# kubectl apply -f pod-deployment.yaml
deployment.apps/myapp-deploy created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-69b47bc96d-72d6p 1/1 Running 0 12s 10.244.2.21 minion-2
myapp-deploy-69b47bc96d-fmbj7 1/1 Running 0 12s 10.244.2.22 minion-2
myapp-deploy-69b47bc96d-v8h99 1/1 Running 0 12s 10.244.2.23 minion-2
All Pods are now scheduled onto minion-2. 4. Next, taint minion-2; with this effect, Pods that cannot tolerate the taint are evicted:
[root@master schedule]# kubectl taint node minion-2 node-type=dev:NoExecute
node/minion-2 tainted
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-69b47bc96d-2jz87 0/1 Pending 0 10s
myapp-deploy-69b47bc96d-8w9l4 0/1 Pending 0 10s
myapp-deploy-69b47bc96d-x4ccd 0/1 Pending 0 10s
You can see that all the Pods have been evicted; since both nodes now carry taints, the Pods are stuck in Pending.
5. Add taint tolerations to the Pods (the missing containers/tolerations section is sketched after the manifest):
[root@master schedule]# vim pod-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deploy
namespace: default
spec:
replicas: 3
selector:
matchLabels:
app: myapp
release: canary
template:
metadata:
labels:
app: myapp
release: canary
spec:
containers:
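The containers and tolerations parts were lost; a sketch of a toleration matching the taint placed on minion-1 (node-type=prod:NoSchedule), with an assumed image, would be:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Equal"
        value: "prod"
        effect: "NoSchedule"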
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deploy
namespace: default
spec:
replicas: 3
selector:
matchLabels:
app: myapp
release: canary
template:
metadata:
labels:
app: myapp
release: canary
spec:
containers:
apiVersion: v1
kind: Pod
metadata:
name: pod-demo
labels:
app: myapp
tier: frontend
spec:
containers:
apiVersion: apps/v1
kind: Deployment
metadata:
name: heapster
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
task: monitoring
k8s-app: heapster
。。。。。。。。。。。。。。。。。。。。
spec:
ports:
9.2 Deploy grafana for graphical display
1. Download the grafana yaml file locally:
[root@master resources]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
2. Edit the configuration file:
[root@master resources]# vim grafana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: monitoring-grafana
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
task: monitoring
k8s-app: grafana
。。。。。。。。。。。。。。。。。
ports:
9.3 Deploy metrics-server
1. Download all the files locally from https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server:
[root@master resources]# git clone https://github.com/kubernetes-incubator/metrics-server.git
[root@master 1.8+]# cd /root/resources/metrics-server/deploy/1.8+
2. Manually pull the images referenced in the yaml files onto the nodes, and tag them with the names used in the configuration files:
[root@minion-2 ~]# docker pull rancher/metrics-server-amd64:v0.3.1
v0.3.1: Pulling from rancher/metrics-server-amd64
8c5a7da1afbc: Pull complete
e2b7e44cc2bf: Pull complete
Digest: sha256:78938f933822856f443e6827fe5b37d6cc2f74ae888ac8b33d06fdbe5f8c658b
Status: Downloaded newer image for rancher/metrics-server-amd64:v0.3.1
[root@minion-2 ~]# docker tag rancher/metrics-server-amd64:v0.3.1 k8s.gcr.io/metrics-server-amd64:v0.3.1
[root@master 1.8+]# kubectl apply -f .
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader configured
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator configured
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master 1.8+]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
canal-8pxjq 3/3 Running 27 5d
canal-bfl74 3/3 Running 23 5d
canal-rtw55 3/3 Running 24 5d
coredns-78fcdf6894-kqjqt 1/1 Running 13 5d
coredns-78fcdf6894-w2c7j 1/1 Running 7 5d
etcd-master 1/1 Running 7 5d
kube-apiserver-master 1/1 Running 13 5d
kube-controller-manager-master 1/1 Running 12 5d
kube-flannel-ds-amd64-5wwdm 1/1 Running 10 5d
kube-flannel-ds-amd64-rhhx4 1/1 Running 13 5d
kube-flannel-ds-amd64-s9jlj 1/1 Running 1 23h
kube-proxy-j8lkl 1/1 Running 7 5d
kube-proxy-wf2ss 1/1 Running 7 5d
kube-proxy-xxdr4 1/1 Running 6 5d
kube-scheduler-master 1/1 Running 11 5d
metrics-server-5d78f796fd-wn79b 1/1 Running 0 23s
[root@master 1.8+]# kubectl top nodes
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
You will find it still cannot be used.
Solution:
[root@master 1.8+]# vim metrics-server-deployment.yaml
#add the required options to the container (the exact highlighted lines were lost from these notes)
containers:
[root@master prometheus]# kubectl get all -n prom
NAME READY STATUS RESTARTS AGE
pod/prometheus-node-exporter-5llld 1/1 Running 0 13m
pod/prometheus-node-exporter-lw7xv 1/1 Running 0 13m
pod/prometheus-node-exporter-qsbrs 1/1 Running 0 13m
pod/prometheus-server-7c8554cf-gkrs9 1/1 Running 0 3m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus NodePort 10.98.60.233
service/prometheus-node-exporter ClusterIP None
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-node-exporter 3 3 3 3 3
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-server 1 1 1 1 3m
NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-server-7c8554cf 1 1 1 3m
2. Test by visiting port 30090:
[root@master prometheus]# cd ../
[root@master k8s-prom]# cd kube-state-metrics/
[root@master kube-state-metrics]# ls
kube-state-metrics-deploy.yaml kube-state-metrics-svc.yaml
kube-state-metrics-rbac.yaml
4. Pull the image on the nodes:
[root@minion-1 ~]# ./pull-google.sh gcr.io/google_containers/kube-state-metrics-amd64:v1.3.1
[root@master kube-state-metrics]# kubectl apply -f .
[root@master kube-state-metrics]# kubectl get all -n prom
NAME READY STATUS RESTARTS AGE
pod/kube-state-metrics-58dffdf67d-lvhtl 1/1 Running 0 1m
pod/prometheus-node-exporter-5llld 1/1 Running 0 45m
pod/prometheus-node-exporter-lw7xv 1/1 Running 0 45m
pod/prometheus-node-exporter-qsbrs 1/1 Running 0 45m
pod/prometheus-server-7c8554cf-gkrs9 1/1 Running 0 35m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-state-metrics ClusterIP 10.105.251.81
service/prometheus NodePort 10.98.60.233
service/prometheus-node-exporter ClusterIP None
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-node-exporter 3 3 3 3 3
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-state-metrics 1 1 1 1 1m
deployment.apps/prometheus-server 1 1 1 1 35m
NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-state-metrics-58dffdf67d 1 1 1 1m
replicaset.apps/prometheus-server-7c8554cf 1 1 1 35m
[root@master kube-state-metrics]# cd ../k8s-prometheus-adapter/
[root@master k8s-prometheus-adapter]# ls
custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml
custom-metrics-apiserver-auth-reader-role-binding.yaml
custom-metrics-apiserver-deployment.yaml
custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml
custom-metrics-apiserver-service-account.yaml
custom-metrics-apiserver-service.yaml
custom-metrics-apiservice.yaml
custom-metrics-cluster-role.yaml
custom-metrics-resource-reader-cluster-role.yaml
hpa-custom-metrics-cluster-role-binding.yaml
5. Certificate authentication is required:
[root@master ~]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077;openssl genrsa -out serving.key 2048)
Generating RSA private key, 2048 bit long modulus
......................................................................+++
...................+++
e is 65537 (0x10001)
[root@master pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"
[root@master pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key
[root@master pki]# ls
apiserver.crt ca.crt front-proxy-client.key
apiserver-etcd-client.crt ca.key sa.key
apiserver-etcd-client.key etcd sa.pub
apiserver.key front-proxy-ca.crt serving.crt
apiserver-kubelet-client.crt front-proxy-ca.key serving.csr
apiserver-kubelet-client.key front-proxy-client.crt serving.key
[root@master pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n prom
secret/cm-adapter-serving-certs created
[root@master pki]# kubectl get secrets -n prom
NAME TYPE DATA AGE
cm-adapter-serving-certs Opaque 2 26s
default-token-svkpd kubernetes.io/service-account-token 3 1h
kube-state-metrics-token-47zdn kubernetes.io/service-account-token 3 25m
prometheus-token-brldq kubernetes.io/service-account-token 3 58m
[root@master k8s-prometheus-adapter]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created
[root@master k8s-prometheus-adapter]# mv custom-metrics-apiserver-deployment.yaml{,.bak}
6. Download the newer version of the configuration file:
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-apiserver-deployment.yaml
7. Edit the configuration file:
[root@master k8s-prometheus-adapter]# vim custom-metrics-apiserver-deployment.yaml
#change the namespace in the highlighted part to the one you defined (prom)
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: custom-metrics-apiserver
name: custom-metrics-apiserver
namespace: prom
spec:
8. Download the configmap file:
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-config-map.yaml
9. Change the namespace inside it:
[root@master k8s-prometheus-adapter]# vim custom-metrics-config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: adapter-config
namespace: prom
[root@master k8s-prometheus-adapter]# kubectl apply -f custom-metrics-config-map.yaml
configmap/adapter-config created
[root@master k8s-prometheus-adapter]# kubectl apply -f custom-metrics-apiserver-deployment.yaml
deployment.apps/custom-metrics-apiserver created
[root@master k8s-prometheus-adapter]# kubectl get pod -n prom
NAME READY STATUS RESTARTS AGE
custom-metrics-apiserver-65f545496-srtdr 1/1 Running 0 16s
kube-state-metrics-58dffdf67d-lvhtl 1/1 Running 0 1h
prometheus-node-exporter-5llld 1/1 Running 0 1h
prometheus-node-exporter-lw7xv 1/1 Running 0 1h
prometheus-node-exporter-qsbrs 1/1 Running 0 1h
prometheus-server-7c8554cf-gkrs9 1/1 Running 0 1h
[root@master k8s-prometheus-adapter]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
crd.projectcalico.org/v1
custom.metrics.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
10. Configure grafana: change the namespace in its configuration file to prom:
[root@master resources]# vim grafana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: monitoring-grafana
namespace: prom
spec:
replicas: 1
selector:
matchLabels:
task: monitoring
k8s-app: grafana
template:
metadata:
labels:
task: monitoring
k8s-app: grafana
spec:
containers:
value: /
volumes:
apiVersion: v1
kind: Service
metadata:
labels:
kubernetes.io/cluster-service: 'true'
kubernetes.io/name: monitoring-grafana
name: monitoring-grafana
namespace: prom
spec:
#In a production setup, we recommend accessing Grafana through an external Loadbalancer
#or through a public IP.
#type: LoadBalancer
#You could also use NodePort to expose the service at a randomly-generated port
#type: NodePort
ports:
9.4 K8s automatic scaling
1. Create the workload to scale (with resource requests/limits and an exposed Service):
[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80
service/myapp created
deployment.apps/myapp created
2. Configure autoscaling from the command line:
[root@master ~]# kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=50
kubectl autoscale: the command keyword
deployment: the resource type; here it is a deployment
myapp: the name
--min: the minimum number of replicas
--max: the maximum number of replicas
--cpu-percent: the CPU threshold percentage; 50 means 50%
[root@master ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp
3. Load test:
[root@master ~]# kubectl patch svc myapp -p '{"spec":{"type":"NodePort"}}'
service/myapp patched
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1
myapp NodePort 10.97.180.218
[root@minion-1 ~]# yum install -y httpd-tools
4. Run a load test from minion-1 with ab:
[root@minion-1 ~]# ab -c 1000 -n 5000000 http://192.168.200.201:30417/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.200.201 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 6202 requests completed
5. Watch how the highlighted hpa metrics change:
[root@master ~]# kubectl describe hpa
Name: myapp
Namespace: default
Labels:
Annotations:
CreationTimestamp: Wed, 24 Oct 2018 16:33:48 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 102% (51m) / 50%
Min replicas: 1
Max replicas: 8
Deployment pods: 1 current / 3 desired
Conditions:
6. Two more Pods have been scaled out (the scale-out is calculated from the CPU load):
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 1h
myapp-6985749785-rf8vb 1/1 Running 0 24s
myapp-6985749785-zx2fv 1/1 Running 0 24s
7. After the peak passes, it scales back down automatically (the scale-down delay can be configured; there is a delay by default):
[root@master ~]# kubectl describe hpa
Name: myapp
Namespace: default
Labels:
Annotations:
CreationTimestamp: Wed, 24 Oct 2018 16:33:48 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (0) / 50%
Min replicas: 1
Max replicas: 8
Deployment pods: 3 current / 3 desired
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 1h
myapp-6985749785-rf8vb 1/1 Running 0 4m
myapp-6985749785-zx2fv 1/1 Running 0 4m
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 1h
By default, the HPA created above uses the autoscaling/v1 controller.
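This can be confirmed by dumping the object that kubectl autoscale created; its apiVersion is autoscaling/v1:
kubectl get hpa myapp -o yaml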
8. Create an autoscaling/v2beta1 HPA:
[root@master ~]# vim hpa-v2-demo.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # targets match the HPA status shown below: cpu 50% of request, memory 50Mi
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 50Mi
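The steps between this manifest and the next load test are not shown in the original; presumably the v1 HPA is removed and this one applied, roughly:
kubectl delete hpa myapp
kubectl apply -f hpa-v2-demo.yaml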
Benchmarking 192.168.200.201 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 781 requests completed
[root@minion-1 ~]# ab -c 1000 -n 500000 http://192.168.200.201:30417/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.200.201 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 10512 requests completed
[root@master ~]# kubectl describe hpa
Name: myapp-hpa-v2
Namespace: default
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp: Wed, 24 Oct 2018 18:20:53 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource memory on pods: 3395584 / 50Mi
resource cpu on pods (as a percentage of request): 37% (18m) / 50%
Min replicas: 1
Max replicas: 10
Deployment pods: 2 current / 2 desired
Conditions:
Type Status Reason Message
AbleToScale False BackoffBoth the time since the previous scale is still within both the downscale and upscale forbidden windows
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
Normal SuccessfulRescale 2m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 2h
myapp-6985749785-qdfcv 1/1 Running 0 2m
10. Getting started with Helm
10.1 Deploying Tiller
Download the Helm package: https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
GitHub releases page: https://github.com/helm/helm/releases/tag/v2.11.0
1. After downloading, upload the archive to the server and extract it:
[root@master ~]# tar xf helm-v2.11.0-linux-amd64.tar.gz
[root@master ~]# cd linux-amd64/
[root@master linux-amd64]# ls
helm LICENSE README.md tiller
[root@master linux-amd64]# mv helm /usr/bin/
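A quick check that the client binary is on the PATH and working (a sketch; output not shown in the original):
helm version --client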
2. Deploy Tiller:
[root@master linux-amd64]# cd ../
[root@master ~]# mkdir helm
[root@master ~]# cd helm
[root@master helm]# vim tiller-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
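The follow-up commands are not shown in the original; typically the tiller ServiceAccount is created (if the file does not define it), the binding applied, and Tiller initialized with that account, roughly:
kubectl -n kube-system create serviceaccount tiller
kubectl apply -f tiller-rbac.yaml
helm init --service-account tiller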
2. Create a Helm chart named myapp:
[root@master helm]# helm create myapp
Creating myapp
3. The template files are generated automatically:
[root@master helm]# tree myapp/
myapp/
├── charts
├── Chart.yaml
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── ingress.yaml
│ ├── NOTES.txt
│ └── service.yaml
└── values.yaml
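values.yaml holds the image settings consumed by templates/deployment.yaml. In a chart scaffolded by helm create they look roughly like the sketch below (exact defaults vary by Helm version) and would be edited to point at the image you actually want to run:
image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent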
4. Package the myapp chart:
[root@master helm]# helm package myapp/
Successfully packaged chart and saved it to: /root/helm/myapp-0.0.1.tgz
[root@master helm]# ls
myapp myapp-0.0.1.tgz tiller-rbac.yaml
5. Start the local Helm chart repository:
[root@master helm]# helm serve
Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
[root@master ~]# helm search myapp    # results here mean the local repo is serving; you can also check port 8879
NAME CHART VERSION APP VERSION DESCRIPTION
local/myapp 0.0.1 1.0 A Helm chart for Kubernetes
6. Install myapp:
[root@master helm]# helm install --name myapp-1 local/myapp
NAME: myapp-1
LAST DEPLOYED: Mon Oct 29 15:42:30 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1beta2/Deployment
NAME AGE
myapp-1 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
myapp-1-847d9b9676-6lzzl 0/1 Pending 0 0s
==> v1/Service
NAME AGE
myapp-1 0s
NOTES:
[root@master helm]# kubectl get pods    # something in the chart's configuration is probably wrong
NAME READY STATUS RESTARTS AGE
myapp-1-847d9b9676-6lzzl 0/1 InvalidImageName 0 39s
myapp-6985749785-pz8vg 1/1 Running 3 4d
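InvalidImageName usually means the image reference rendered from values.yaml is malformed. Here the release is simply deleted in the next step, but a typical way to diagnose and fix it would be (a sketch; the --set keys assume the default image.repository/image.tag layout of a scaffolded chart):
kubectl describe pod myapp-1-847d9b9676-6lzzl
helm upgrade myapp-1 local/myapp --set image.repository=ikubernetes/myapp --set image.tag=v1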
7. Delete the release:
[root@master helm]# helm delete --purge myapp-1
release "myapp-1" deleted
8. Add the stable repository (charts in stable are stable releases):
[root@master helm]# helm repo add stable https://kubernetes-charts.storage.googleapis.com
[root@master helm]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com/
9. Add the incubator repository (charts here are not stable releases, but are fine for testing):
[root@master helm]# helm repo add incubator http://kubernetes-charts-incubator.storage.googleapis.com
"incubator" has been added to your repositories
[root@master helm]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com/
local http://127.0.0.1:8879/charts
repo_name1 https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/
incubator http://kubernetes-charts-incubator.storage.googleapis.com
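After adding repositories, the local index can be refreshed and charts searched, for example:
helm repo update
helm search elasticsearch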
Deploying EFK log collection:
[root@master ~]# helm fetch incubator/elasticsearch
[root@master ~]# ls
a k8s-prom
anaconda-ks.cfg k8s.sh
a.tar.gz kube-apiserver-amd64-1.11.0.tar.gz
coredns-1.1.3.tar.gz kube-controller-manager-amd64-1.11.0.tar.gz
elasticsearch-1.10.2.tgz kube-flannel.yml
[root@master helm]# tar xf elasticsearch-1.10.2.tgz
[root@master helm]# cd elasticsearch
Modify the file:
[root@master elasticsearch]# vim values.yaml
Set the replica counts to 1 (resources are limited) and disable the persistent volumes:
# number of master-eligible nodes required to form a cluster
MINIMUM_MASTER_NODES: "1"
client:
  name: client
  replicas: 1
master:
  name: master
  exposeHttp: false
  replicas: 1
  heapSize: "512m"
  persistence:
    enabled: false
    accessMode: ReadWriteOnce
    name: data
    size: "4Gi"
data:
  name: data
  exposeHttp: false
  replicas: 1
  heapSize: "1536m"
  persistence:
    enabled: false
Install Elasticsearch:
[root@master elasticsearch]# kubectl create namespace efk
[root@master elasticsearch]# helm install --name els1 --namespace=efk -f values.yaml incubator/elasticsearch
NAME: els1
LAST DEPLOYED: Tue Oct 30 10:48:51 2018
NAMESPACE: efk
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/Deployment
NAME AGE
els1-elasticsearch-client 1s
==> v1beta1/StatefulSet
els1-elasticsearch-data 1s
els1-elasticsearch-master 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
els1-elasticsearch-client-7667b8455f-cmbpd 0/1 Init:0/1 0 1s
els1-elasticsearch-data-0 0/1 Init:0/2 0 1s
els1-elasticsearch-master-0 0/1 Init:0/2 0 0s
==> v1/ConfigMap
NAME AGE
els1-elasticsearch 1s
==> v1/Service
els1-elasticsearch-client 1s
els1-elasticsearch-discovery 1s
NOTES:
The elasticsearch cluster has been installed.
Please note that this chart has been deprecated and moved to stable.
Going forward please use the stable version of this chart.
Elasticsearch can be accessed:
Within your cluster, at the following DNS name at port 9200:
els1-elasticsearch-client.efk.svc
From outside the cluster, run these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace efk -l "app=elasticsearch,component=client,release=els1" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
kubectl port-forward --namespace efk $POD_NAME 9200:9200
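With that port-forward running, the cluster state can be checked with a plain HTTP call (a sketch; not part of the chart's NOTES):
curl 'http://127.0.0.1:9200/_cluster/health?pretty'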
The same status information can also be viewed with helm status:
[root@master elasticsearch]# helm status els1
The log-collection setup was not completed here because of limited machine resources.
11. Deploying Traefik
Traefik is a lightweight HTTP reverse proxy and load balancer written in Go. Because it can discover and refresh backend nodes automatically, it is supported by most container platforms, such as Kubernetes, Swarm and Rancher. Traefik talks to the Kubernetes API in real time, so it reacts to changes in a Service's endpoints very quickly. Overall, Traefik runs very well on Kubernetes.
Traefik also has many other features:
• Fast
• No extra dependencies to install; it ships as a single executable compiled from Go
• A minimal official Docker image is available
• Supports many backends, such as Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, and more
• Provides a REST API
• Hot-reloads its configuration file without restarting the process
• Built-in circuit breakers
• Round-robin and other load-balancing strategies
• Clean, simple web UI
• Supports WebSocket, HTTP/2 and gRPC
• Automatic HTTPS certificate renewal
• High-availability cluster mode
Next we will use Traefik in place of Nginx + Ingress Controller to implement reverse proxying and service exposure.
So what is the difference between the two? Put simply: when Nginx is used as the front-end load balancer in Kubernetes, the Ingress Controller keeps talking to the Kubernetes API to pick up changes to Services, Pods and so on, then rewrites the Nginx configuration and reloads it so the new configuration takes effect; that is how service discovery is achieved. Traefik, by contrast, is designed to talk to the Kubernetes API itself, so it notices changes to Services and Pods on its own, updates its configuration and hot-reloads automatically. The end result is much the same, but Traefik is faster and simpler to operate and supports more features, which makes reverse proxying and load balancing more direct and efficient.
11.1 Deploying the Traefik load balancer
1. Download the YAML files for the services:
[root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml
[root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml
[root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml
2. Create the RBAC resources:
[root@master ~]# kubectl apply -f ./traefik-rbac.yaml
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
3. Create the Traefik DaemonSet:
[root@master ~]# vim ./traefik-ds.yaml
# the Service spec in this file is missing the line "type: NodePort"; add it
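The added line goes under the Service spec in traefik-ds.yaml, roughly like this (abridged sketch; the rest of the file is unchanged):
spec:
  type: NodePort
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  ...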
[root@master ~]# kubectl apply -f ./traefik-ds.yaml
serviceaccount/traefik-ingress-controller unchanged
daemonset.extensions/traefik-ingress-controller created
service/traefik-ingress-service unchanged
4. Check that the Traefik Pods are running properly and which nodes they are on:
[root@master ~]# kubectl --namespace=kube-system get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
coredns-78fcdf6894-9fs99 1/1 Running 0 24m 10.244.0.2 master
coredns-78fcdf6894-vckpp 1/1 Running 0 24m 10.244.0.3 master
etcd-master 1/1 Running 0 24m 192.168.200.200 master
kube-apiserver-master 1/1 Running 0 24m 192.168.200.200 master
kube-controller-manager-master 1/1 Running 0 24m 192.168.200.200 master
kube-flannel-ds-amd64-2xtqz 1/1 Running 0 21m 192.168.200.200 master
kube-flannel-ds-amd64-fbmvf 1/1 Running 0 20m 192.168.200.201 minion-1
kube-flannel-ds-amd64-w76wq 1/1 Running 0 20m 192.168.200.202 minion-2
kube-proxy-b8r7m 1/1 Running 0 20m 192.168.200.202 minion-2
kube-proxy-t2528 1/1 Running 0 24m 192.168.200.200 master
kube-proxy-zkgdl 1/1 Running 0 20m 192.168.200.201 minion-1
kube-scheduler-master 1/1 Running 0 24m 192.168.200.200 master
traefik-ingress-controller-5hxnj 1/1 Running 0 3m 10.244.2.3 minion-2
traefik-ingress-controller-6f6d87769d-vn6n4 1/1 Running 0 4m 10.244.2.2 minion-2
traefik-ingress-controller-kv6x7 1/1 Running 0 3m 10.244.1.2 minion-1
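The NodePort on which Traefik is exposed can then be checked with:
kubectl -n kube-system get svc traefik-ingress-service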
5. Create the Traefik web UI:
[root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
[root@master ~]# kubectl apply -f ./ui.yaml
service/traefik-web-ui created
ingress.extensions/traefik-web-ui created
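The host name configured in ui.yaml's Ingress can be listed and then mapped to a node IP in /etc/hosts to open the dashboard (a sketch; the actual host depends on the downloaded ui.yaml):
kubectl -n kube-system get ingress traefik-web-ui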
6. Test: create nginx Pods behind an Ingress:
[root@master ~]# vim nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
  labels:
    name: nginx-svc
spec:
  selector:
    run: ngx-pod
  ports:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ngx-pod
spec:
  replicas: 4
  template:
    metadata:
      labels:
        run: ngx-pod
    spec:
      containers:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ngx-ing
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
Contents of traefik-ds.yaml (abridged), for reference:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      volumes:
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
At this point the Traefik deployment is complete.
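Once the Ingress rules are filled in, a generic way to test the routing is to send a request to a Traefik node with the Ingress host name, for example (NODE_IP and ngx.example.com are placeholders, not values from the original):
curl -H 'Host: ngx.example.com' http://NODE_IP/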
Reposted from: https://blog.51cto.com/qingfeng00/2347509