Sync resources across clusters: by joining multiple clusters into a federation, resources can be kept in sync across all of them. For example, you can make sure the same application is deployed in every cluster.
Cross-cluster discovery: the federation can automatically configure DNS servers and load balancers with backends from all clusters.
| Concept | Description |
|---|---|
| Federation | A group of Kubernetes clusters exposed through a single interface as one large pool of resources, which can be used to deploy Kubernetes applications across those clusters. |
| Federating | Spreading a Kubernetes cluster's resources, service discovery, and highly available applications across multiple clusters. |
| Host cluster | The cluster that exposes the KubeFed API and runs the KubeFed control plane. |
| Member cluster | A cluster joined to the federation through the KubeFed API and managed by the host cluster; the host cluster can also be a member cluster. |
First, have two Kubernetes clusters ready.
Deploy Helm (if you would rather not pull from the default registry, Alauda's own image registry can be used instead, see below):
curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm init --service-account tiller
For Kubernetes 1.16+, use:
helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@ replicas: 1@ replicas: 1\n selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' | kubectl apply -f -
In the tiller deployment, change the image to the Alauda one: index.alauda.cn/claas/tiller:v2.14.1.
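A minimal sketch of that image swap with kubectl, assuming the default deployment name tiller-deploy and container name tiller in kube-system:
# point the tiller container at the Alauda-hosted image
kubectl -n kube-system set image deployment/tiller-deploy tiller=index.alauda.cn/claas/tiller:v2.14.1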
Adjust the RBAC rules:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
EOF
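Before moving on, it is worth a quick check that Tiller comes up; a small sketch, again assuming the default tiller-deploy deployment name:
# wait for the tiller pod to become ready, then confirm client and server versions
kubectl -n kube-system rollout status deployment/tiller-deploy
helm version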
Install the kubefedctl binary:
VERSION=0.1.0-rc6
OS=linux
ARCH=amd64
curl -LO https://github.com/kubernetes-sigs/kubefed/releases/download/v${VERSION}/kubefedctl-${VERSION}-${OS}-${ARCH}.tgz
tar -zxvf kubefedctl-*.tgz
chmod u+x kubefedctl
sudo mv kubefedctl /usr/local/bin/
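A quick sanity check that the binary is on the PATH:
kubefedctl version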
Prepare the two clusters: one named global and one named business.
Create a kubeconfig file covering both clusters and point kubectl at it, e.g. export KUBECONFIG=/root/lmxia/cluster-config/config-cluster.yaml:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: https://10.0.128.95:6443
insecure-skip-tls-verify: true
name: global
- cluster:
server: https://10.0.129.168:6443
insecure-skip-tls-verify: true
name: business
users:
- name: global-admin
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tNmw3NGIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImNkN2YxOTcxLTgxZTYtMTFlOS04MzFlLTUyNTQwMGNiNDMyZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.GnEEx8yaYaik78Q1rQiFiP3Mhscxvn2nXKfSpEhSgfxytsT7V7I5ftuibq4zGxLYs1z8U0N_0tcQXq3DBL97-4kNfQdCMx42QOzV4yDvRNI_DFlM5oICjZeiVMRHSWCWUZEVyas9zb9G23MJ8uU4C3kesoOz9ycL61-fTYnx_99wNvycFlyEnb614cWMkCq6ji1gfa52Iei7u5y7CSyHxj0z09e5WS4JOjVjtGTGVBFevlsj1qZjeh4otqprLU8fVixonuFgT8X-CFBXz-1KjmugGq9tR7EIFqgbmwx-ZYNsAjGkP2iOUnPs9POCOtZh7Fj5lqyqUjeO7FqyU778IQ
- name: business-admin
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVycm9sZS1hZ2dyZWdhdGlvbi1jb250cm9sbGVyLXRva2VuLXA5ZDlmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXJyb2xlLWFnZ3JlZ2F0aW9uLWNvbnRyb2xsZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlYTE5YmJhMS04MWYyLTExZTktOTQ3Mi01MjU0MDA3YjFiMjIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3RlcnJvbGUtYWdncmVnYXRpb24tY29udHJvbGxlciJ9.jI6M3ErK3mlAGk-cCtuhdS4H34-4Wl7yBGMWxOpqhaczHtlIi3H1dZHzDySNihTsy8kcpbJRhz-vkitYslKkqIKqbtkvUNTSAivXSxieSKbtb7jeCt1-6OUgoQTNRcYfpRK7Ur-2Y8XcRazcxr6i-tIODhljBmSd9mT32jLtyoDNsraE9o7-c4eEvA5DxkM6BlvpVHSh6cnBN4UhJ7qKo-M1g1ZSyAIGwN-Zd_0uQjcnAA5AqyzRUGQJ2AalXy5IGDescE2sL8mbOJqOHbB9PRNmI2vCcVzyhDtjmfR9o-EToPRvn8bUqqV3ulmznMGNM9SkLNUWUQCU1ZMkrP2ATg
contexts:
- context:
cluster: business
user: business-admin
name: business
- context:
cluster: global
user: global-admin
name: global
The token here must be a fairly high-privilege token. Where does it come from?
kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | awk '/clusterrole-aggregation-controller-token/ {print $1}') | awk '/^token:/{print $2}'
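A quick sketch to confirm that the extracted token actually authenticates; the server address is the global cluster's API server from the kubeconfig above:
# grab the token as above, then hit the API server with it
TOKEN=$(kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | awk '/clusterrole-aggregation-controller-token/ {print $1}') | awk '/^token:/{print $2}')
curl -k -H "Authorization: Bearer ${TOKEN}" https://10.0.128.95:6443/api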
With that in place, we can get on with the federation itself.
kubefedctl join business --cluster-context business --host-cluster-context global --v=2
What does this command actually do?
In the business cluster it creates a namespace: kube-federation-system.
In the business cluster it creates a service account business-global, together with its secret business-global-token-kr8dz; a matching secret business-66qpm is also created in the global cluster, and the contents of the two secrets are identical.
In the business cluster it creates a ClusterRole and a ClusterRoleBinding; the ClusterRole grants full permissions on all resources.
Most importantly, in the global cluster it creates a federated cluster resource, an instance of the KubeFedCluster CRD that describes the joined cluster.
kubectl get kubefedclusters -n kube-federation-system -o yaml
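You can also inspect the member-cluster side of what the join created; a rough sketch (generated names such as business-global-token-kr8dz come from this particular run and will differ):
kubectl --context business get ns kube-federation-system
kubectl --context business get sa,secret -n kube-federation-system
kubectl --context business get clusterrole,clusterrolebinding | grep kubefed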
kubefedctl join global --cluster-context global --host-cluster-context global --v=2
Similar to the previous step.
Key concept:
kubefedclusters.core.kubefed.io: a KubeFedCluster configures KubeFed to be aware of a Kubernetes cluster and encapsulates the details necessary to communicate with it.
By default, 10 common resource types are enabled for propagation, listed below. For a CRD-backed type, the CRD must be installed in every member cluster. Note that core API resources in the default list omit the group name.
[root@ake-master1 cluster-config]# kubectl get FederatedTypeConfig -n kube-federation-system
NAME AGE
clusterroles.rbac.authorization.k8s.io 2h
configmaps 2h
deployments.apps 6s
ingresses.extensions 2h
jobs.batch 2h
namespaces 2h
replicasets.apps 2h
secrets 2h
serviceaccounts 2h
services 2h
To enable an additional resource type:
[root@ake-master1 cluster-config]# kubefedctl enable --host-cluster-context global deployments.apps
customresourcedefinition.apiextensions.k8s.io/federateddeployments.types.kubefed.io updated
federatedtypeconfig.core.kubefed.io/deployments.apps created in namespace kube-federation-system
What does this command do? It creates a FederatedDeployment CRD, in the default group types.kubefed.io. Now for the important part: a FederatedTypeConfig associates the federated type CRD with the target Kubernetes type, so that federated resources of the given type can be propagated to member clusters.
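The new federated CRD can be confirmed on the host cluster:
kubectl --context global get crd federateddeployments.types.kubefed.io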
Take deployments.apps as an example, shown below:
[root@ake-master1 cluster-config]# kubectl get FederatedTypeConfig -n kube-federation-system deployments.apps -o yaml
apiVersion: core.kubefed.io/v1beta1
kind: FederatedTypeConfig
metadata:
creationTimestamp: "2019-10-10T08:33:05Z"
finalizers:
- core.kubefed.io/federated-type-config
generation: 1
name: deployments.apps
namespace: kube-federation-system
resourceVersion: "68639766"
selfLink: /apis/core.kubefed.io/v1beta1/namespaces/kube-federation-system/federatedtypeconfigs/deployments.apps
uid: 94724cf9-eb38-11e9-b9d9-525400e36779
spec:
federatedType:
group: types.kubefed.io
kind: FederatedDeployment
pluralName: federateddeployments
scope: Namespaced
version: v1beta1
propagation: Enabled
targetType:
group: apps
kind: Deployment
pluralName: deployments
scope: Namespaced
version: v1
status:
observedGeneration: 1
propagationController: Running
statusController: NotRunning
There are two key fields here: **federatedType** and **targetType**. federatedType refers to the federated resource type we enabled; as mentioned above, enabling a type creates a federated CRD for that resource plus a FederatedTypeConfig, which is essentially the mapping between the two.
Before using federated resources, check that every target resource type actually exists in all member clusters:
CLUSTER_CONTEXTS="global business"
for c in ${CLUSTER_CONTEXTS}; do echo ----- ${c} -----; kubectl --context=${c} api-resources --api-group=aiops.alauda.io; done
Here the business cluster does not have the aiops.alauda.io group at all, so creating federated resources of those types would fail.
kubefedctl federate
This command turns an existing Kubernetes resource into a federated resource, wrapping it into the corresponding federatedType; by default the federated resource is saved in the same namespace as the original. For example, first create a secret:
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
kubectl apply -f mysecret.yaml -n test
Then federate it:
[root@ake-master1 lmxia]# kubefedctl federate secrets mysecret -n test --host-cluster-context global
W1010 18:15:30.554209 31227 federate.go:410] Annotations defined for Secret "test/mysecret" will not appear in the template of the federated resource: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","data":{"password":"MWYyZDFlMmU2N2Rm","username":"YWRtaW4="},"kind":"Secret","metadata":{"annotations":{},"name":"mysecret","namespace":"test"},"type":"Opaque"}
]
I1010 18:15:34.601079 31227 federate.go:503] Successfully created FederatedSecret "test/mysecret" from Secret
When you happily go to check the result, you may be disappointed: the secret has not been created in the business cluster. Let's find out why:
[root@ake-master1 lmxia]# kubectl get FederatedSecret -n test mysecret -o yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedSecret
metadata:
creationTimestamp: "2019-10-10T10:15:34Z"
finalizers:
- kubefed.io/sync-controller
generation: 1
name: mysecret
namespace: test
resourceVersion: "68695902"
selfLink: /apis/types.kubefed.io/v1beta1/namespaces/test/federatedsecrets/mysecret
uid: e5b86821-eb46-11e9-a4fa-525400ec6688
spec:
placement:
clusterSelector:
matchLabels: {}
template:
data:
password: MWYyZDFlMmU2N2Rm
username: YWRtaW4=
type: Opaque
status:
conditions:
- lastTransitionTime: "2019-10-10T10:15:34Z"
lastUpdateTime: "2019-10-10T10:15:34Z"
reason: NamespaceNotFederated
status: "False"
type: Propagation
NamespaceNotFederated: the containing namespace must itself be brought under federation first.
kubefedctl federate namespace lmxia --host-cluster-context global
After repeating the same steps in the (now federated) lmxia namespace, the propagation succeeds:
[root@ake-master1 lmxia]# kubectl get FederatedSecret mysecret -n lmxia -o yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedSecret
metadata:
creationTimestamp: "2019-10-10T11:27:13Z"
finalizers:
- kubefed.io/sync-controller
generation: 1
name: mysecret
namespace: lmxia
resourceVersion: "68732031"
selfLink: /apis/types.kubefed.io/v1beta1/namespaces/lmxia/federatedsecrets/mysecret
uid: e7ef75e9-eb50-11e9-b9d9-525400e36779
spec:
placement:
clusterSelector:
matchLabels: {}
template:
data:
password: MWYyZDFlMmU2N2Rm
username: YWRtaW4=
type: Opaque
status:
clusters:
- name: business
- name: global
conditions:
- lastTransitionTime: "2019-10-10T11:27:17Z"
lastUpdateTime: "2019-10-10T11:27:17Z"
status: "True"
type: Propagation
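As a quick check, the propagated secret should now also exist in the member cluster as an ordinary Kubernetes secret; a small sketch:
kubectl --context business get secret mysecret -n lmxia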
You can also create the federated resource directly:
apiVersion: types.kubefed.io/v1beta1
kind: FederatedSecret
metadata:
finalizers:
- kubefed.io/sync-controller
name: mysecret
namespace: lmxia
spec:
placement:
clusterSelector:
matchLabels: {}
template:
data:
password: MWYyZDFlMmU2N2Rm
username: YWRtaW4=
type: Opaque
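Save the manifest and apply it against the host cluster (the file name here is only illustrative):
kubectl --context global apply -f federated-mysecret.yaml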
You will find the secret created in every member cluster right away.
There are three important fields here:
Template: the base configuration that every selected cluster follows.
Placement: specifies which clusters the resource should be deployed to.
Overrides: overrides parts of the template with cluster-specific settings.
Deleting a federated resource triggers pre-deletion cleanup, which removes the corresponding Kubernetes resource from every member cluster.
kubectl --context global delete FederatedSecrets mysecret -n lmxia
But is there a way to delete the federated resource while keeping the underlying Kubernetes resources? Yes, by changing the deletion behaviour.
Enable orphaning (keep the orphans):
kubefedctl orphaning-deletion enable FederatedSecrets mysecret -n lmxia --host-cluster-context global
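Under the hood this only annotates the federated resource; an equivalent sketch with kubectl, assuming this KubeFed release uses the kubefed.io/orphan annotation:
kubectl --context global annotate federatedsecret mysecret -n lmxia kubefed.io/orphan=true --overwrite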
Then delete it again:
kubectl delete FederatedSecrets mysecret -n lmxia
This time the resources in the member clusters are kept.
A more general usage pattern is to give different clusters different image registries, different replica counts, and different secret contents (say, different database settings). We demonstrate this with a Deployment and a Secret:
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
finalizers:
- kubefed.io/sync-controller
name: test-deployment
namespace: fed-ns-1
spec:
overrides:
- clusterName: feder2
clusterOverrides:
- path: /spec/replicas
value: 3
- path: /spec/template/spec/containers/0/image
value: nginx:1.17.0-alpine
placement:
clusters:
- name: feder1
- name: feder2
template:
metadata:
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: index.alauda.cn/alauda/hello-world:latest
name: hello
volumeMounts:
- mountPath: /etc/foo
name: foo
readOnly: true
volumes:
- name: foo
secret:
secretName: mysecret
status:
clusters:
- name: feder1
- name: feder2
---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedSecret
metadata:
finalizers:
- kubefed.io/sync-controller
name: mysecret
namespace: fed-ns-1
spec:
overrides:
- clusterName: feder2
clusterOverrides:
- path: /data/password
value: MTIxMzI4dTI5NDczMjQ3
placement:
clusterSelector:
matchLabels: {}
template:
data:
password: MWYyZDFlMmU2N2Rm
username: YWRtaW4=
type: Opaque
status:
clusters:
- name: feder1
- name: feder2
In the Deployment YAML above, the override for the feder2 cluster requests 3 replicas and the Docker Hub image nginx:1.17.0-alpine, while the template defaults to index.alauda.cn/alauda/hello-world:latest with 2 replicas. The Deployment mounts the Secret, which holds the Deployment's database settings and is itself a federated resource. After deploying, you will find that feder2 runs 3 replicas with the nginx image, and the password mounted into its containers differs from the one in feder1.
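Once propagated, the per-cluster overrides can be verified directly; a sketch, assuming feder1 and feder2 are also the kubectl context names of the two member clusters in this example:
# print replica count and image for each cluster
for c in feder1 feder2; do
  echo "----- ${c} -----"
  kubectl --context=${c} get deployment test-deployment -n fed-ns-1 -o jsonpath='{.spec.replicas} {.spec.template.spec.containers[0].image}{"\n"}'
done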
Create a Service:
apiVersion: types.kubefed.io/v1beta1
kind: FederatedService
metadata:
finalizers:
- kubefed.io/sync-controller
name: test-service
namespace: fed-ns-1
spec:
placement:
clusterSelector:
matchLabels: {}
template:
metadata:
labels:
app: nginx
spec:
externalTrafficPolicy: Cluster
ports:
- name: nginx
port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
sessionAffinity: None
type: NodePort
status:
clusters:
- name: feder2
- name: feder1
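After the Service propagates, each member cluster ends up with its own NodePort service; a quick sketch:
for c in feder1 feder2; do kubectl --context=${c} get svc test-service -n fed-ns-1; done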
It turns out a LoadBalancer-type Service cannot be produced this way, so we stop here for now.