KEDA provides fine-grained autoscaling for event-driven Kubernetes workloads, including scaling to and from zero. KEDA acts as a Kubernetes metrics server and lets users define autoscaling rules with a dedicated Kubernetes custom resource definition.
KEDA can run both in the cloud and at the edge, integrates natively with Kubernetes components such as the Horizontal Pod Autoscaler, and has no external dependencies.
How it works
KEDA plays two key roles within Kubernetes. First, it acts as an agent that activates and deactivates a deployment, scaling it to and from zero when there are no events. Second, it acts as a Kubernetes metrics server, exposing rich event data such as queue length or stream lag to the Horizontal Pod Autoscaler to drive scale-out. It is then up to the deployment to consume the events directly from the source. This preserves rich event integration and lets gestures such as completing or abandoning a queue message work out of the box.
Event sources and scalers
KEDA has a number of "scalers" that can both detect whether a deployment should be activated or deactivated and feed custom metrics for a specific event source. Scaler support is currently available for:
- AWS CloudWatch
- AWS Simple Queue Service
- Azure Event Hub†
- Azure Service Bus Queues and Topics
- Azure Storage Queues
- GCP PubSub
- Kafka
- Liiklus
- Prometheus
- RabbitMQ
- Redis Lists
Support for more event sources is on the way:
In planning
- Azure IoT Hub (#214)
- Azure Storage Blobs (#154)
- Azure Cosmos DB (#232)
- Azure Monitor
- Azure Durable Functions
Not yet planned
- AWS Kinesis
- Kubernetes Events
- MongoDB
- CockroachDB
- MQTT
The ScaledObject custom resource definition
To sync a deployment with an event source, a ScaledObject custom resource needs to be deployed. A ScaledObject contains information about the deployment to scale, metadata about the event source (for example, the connection string secret and the queue name), the polling interval, and the cooldown period. The ScaledObject produces the corresponding autoscaling resource (an HPA definition) to scale the deployment, and when the ScaledObject is deleted, the corresponding HPA definition is cleaned up as well.
For example:
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-scaledobject
  namespace: default
  labels:
    deploymentName: azure-functions-deployment
spec:
  scaleTargetRef:
    deploymentName: azure-functions-deployment
  pollingInterval: 30
  triggers:
  - type: kafka
    metadata:
      # Required
      brokerList: localhost:9092
      consumerGroup: my-group # Make sure that this consumer group name is the same one as the one that is consuming topics
      topic: test-topic
      lagThreshold: "50"
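The example above only sets pollingInterval. The CRD schema in the deployment manifest further below also allows cooldownPeriod, minReplicaCount and maxReplicaCount; here is a sketch of the same ScaledObject with those optional knobs filled in (the values are purely illustrative, and the comments reflect the documented semantics of these fields):

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-scaledobject
  namespace: default
  labels:
    deploymentName: azure-functions-deployment
spec:
  scaleTargetRef:
    deploymentName: azure-functions-deployment
  pollingInterval: 30   # how often KEDA checks the event source, in seconds
  cooldownPeriod: 300   # how long to wait after the last event before scaling back down
  minReplicaCount: 0    # allow scaling all the way to zero
  maxReplicaCount: 10   # upper bound handed to the generated HPA
  triggers:
  - type: kafka
    metadata:
      brokerList: localhost:9092
      consumerGroup: my-group
      topic: test-topic
      lagThreshold: "50"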
Deployment
KEDA can be installed with Helm or deployed with plain YAML. To deploy it with YAML, run:
kubectl apply -f KedaScaleController.yaml
where KedaScaleController.yaml contains the following:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: scaledobjects.keda.k8s.io
spec:
  group: keda.k8s.io
  version: v1alpha1
  names:
    kind: ScaledObject
    singular: scaledobject
    plural: scaledobjects
    shortNames:
    - sco
    categories:
    - keda
  scope: Namespaced
  additionalPrinterColumns:
  - name: Deployment
    type: string
    JSONPath: .spec.scaleTargetRef.deploymentName
  - name: Triggers
    type: string
    JSONPath: .spec.triggers[*].type
  - name: Age
    type: date
    JSONPath: .metadata.creationTimestamp
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required: [triggers]
          type: object
          properties:
            scaleType:
              type: string
              enum: [deployment, job]
            pollingInterval:
              type: integer
            cooldownPeriod:
              type: integer
            minReplicaCount:
              type: integer
            maxReplicaCount:
              type: integer
            scaleTargetRef:
              required: [deploymentName]
              type: object
              properties:
                deploymentName:
                  type: string
                containerName:
                  type: string
            triggers:
              type: array
              items:
                type: object
                required: [type, metadata]
                properties:
                  type:
                    type: string
                  authenticationRef:
                    type: object
                    properties:
                      name:
                        type: string
                  metadata:
                    type: object
                    additionalProperties:
                      type: string
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: triggerauthentications.keda.k8s.io
spec:
  group: keda.k8s.io
  version: v1alpha1
  names:
    kind: TriggerAuthentication
    singular: triggerauthentication
    plural: triggerauthentications
    shortNames:
    - ta
    - triggerauth
    categories:
    - keda
  scope: Namespaced
---
apiVersion: v1
kind: Namespace
metadata:
  name: keda
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: keda-operator
  namespace: keda
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: keda-operator-service-account-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: keda-operator
  namespace: keda
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: keda:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: keda-operator
  namespace: keda
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: keda-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: keda-operator
  namespace: keda
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: keda-operator
  name: keda-operator
  namespace: keda
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keda-operator
  template:
    metadata:
      labels:
        app: keda-operator
      name: keda-operator
    spec:
      serviceAccountName: keda-operator
      containers:
      - name: keda-operator
        image: kedacore/keda:latest
        args:
        - /adapter
        - --secure-port=6443
        - --logtostderr=true
        - --v=2
        ports:
        - containerPort: 6443
          name: https
        - containerPort: 8080
          name: http
        volumeMounts:
        - mountPath: /tmp
          name: temp-vol
      volumes:
      - name: temp-vol
        emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics-resource-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
  name: keda-operator
  namespace: keda
---
apiVersion: v1
kind: Service
metadata:
  name: keda-operator
  namespace: keda
spec:
  ports:
  - name: https
    port: 443
    targetPort: 6443
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: keda-operator
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io
spec:
  service:
    name: keda-operator
    namespace: keda
  group: external.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-resource-reader
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  - external
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: keda-hpa-controller-custom-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system
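The manifest above also registers a TriggerAuthentication CRD, which this article never instantiates. As a rough sketch of how it is meant to be used (the secretTargetRef layout, the parameter name, and the Secret names below are assumptions drawn from the KEDA documentation of this era, not from this article), a TriggerAuthentication maps keys in a Secret onto trigger metadata parameters:

apiVersion: keda.k8s.io/v1alpha1
kind: TriggerAuthentication
metadata:
  name: queue-trigger-auth        # hypothetical name
  namespace: default
spec:
  secretTargetRef:                # maps a Secret key onto a trigger metadata parameter
  - parameter: connection         # which parameter names are accepted depends on the scaler
    name: queue-credentials       # hypothetical Secret holding the connection string
    key: connectionString

A trigger then points at it through the authenticationRef.name field that appears in the ScaledObject schema above.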
Reading the code
The key code lives under the pkg directory and is organized into the following packages:
- adapter and provider: together these implement a custom metrics adapter, following the conventions of github.com/kubernetes-incubator/custom-metrics-apiserver. For external metrics it essentially comes down to implementing the GetExternalMetric and ListAllExternalMetrics methods.
- apis and client: both are generated by the Kubernetes code-generation scaffolding. apis holds the CRD type definitions (the ScaledObject object), while client contains the generated KEDA clientset, informers, and so on. Anyone who has extended Kubernetes with CRDs will find this layout familiar.
- controller: a Kubernetes controller for the ScaledObject resource. In practical Kubernetes development, once a CRD is created you must write a corresponding controller that acts on the add, update, and delete events of the custom resource.
- signals: fairly simple; it wraps a context.Context.
- kubernetes: also simple; the overall idea is to build the KEDA client and the kube client from the config, for the controller to use.
- handler: the crucial package; both the sync logic driven by the controller and the interface through which the metrics server serves metrics are implemented here.
- scalers: the implementations of the individual event sources. If we want to add an event source of our own, this is where to implement it.
To walk through an example: suppose a client (kubectl or client-go) creates a ScaledObject custom resource targeting deployment A, with the goal of autoscaling on the Kafka message backlog. The controller watches for the creation of the custom resource and handles the add event by creating an HPA object from the contents of the custom resource; the ScaledObject spec is translated into an HPA. From that point on, the stock Kubernetes HPA reads the number of messages on the specified Kafka topic through the Kafka scaler in scalers, and the HPA controller ultimately decides whether to scale out or in.
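Roughly speaking, the object created for the walkthrough above would have the shape sketched below. This is only an illustration of an external-metrics HPA: the HPA name, the metric name, the replica bounds, and the autoscaling API version are chosen by KEDA internally and are shown here as assumptions, not copied from its source.

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: keda-hpa-azure-functions-deployment   # naming scheme is an assumption
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: azure-functions-deployment          # "deployment A" from the walkthrough
  minReplicas: 1
  maxReplicas: 100                             # bounds shown are illustrative
  metrics:
  - type: External                             # served by KEDA via external.metrics.k8s.io
    external:
      metricName: kafka-test-topic             # the real metric name is decided by the kafka scaler
      targetAverageValue: "50"                 # derived from lagThreshold in the trigger metadata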
Conclusion
KEDA is currently in an experimental phase; Microsoft and Red Hat are hoping for broad community participation.
KEDA does not implement an HPA of its own. What ultimately does the scaling is still the upstream HPA: KEDA simply generates an HPA object from the contents of the custom resource, with the twist that the metrics involved are external metrics. KEDA's main contribution is the integration with the various event sources.