APISIX in Practice on Kubernetes

1. Reference Documents

# 1. Helm chart for installing apisix, apisix-dashboard, and apisix-ingress-controller
https://github.com/apache/apisix-helm-chart
# 2. Errors hit during installation can be searched for in the issues below
https://github.com/apache/apisix-helm-chart/issues
# 3. apisix-ingress-controller documentation and usage
https://apisix.apache.org/zh/docs/ingress-controller/practices/the-hard-way/
# 4. Canary release documentation
https://apisix.apache.org/zh/docs/ingress-controller/concepts/apisix_route

2. Environment

IP              Role
192.168.13.12   k8s-master-01
192.168.13.211  k8s-node-01
192.168.13.58   k8s-node-02, NFS server

Install Helm in advance.

3. Installation

3.1 StorageClass installation and configuration

3.1.1 On the NFS node

yum install rpcbind nfs-utils -y
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
mkdir /nfs/data
[root@k8s-node-02 ~]# cat /etc/exports
/nfs/data/ *(insecure,rw,sync,no_root_squash)
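Before moving on, a quick verification on the NFS node confirms the export is live (standard nfs-utils commands):

exportfs -rav                   # re-export everything listed in /etc/exports
showmount -e 192.168.13.58      # the export list should include /nfs/data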

3.1.2 nfs-subdir-external-provisioner

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.13.58 --set nfs.path=/nfs/data
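A quick sanity check after the install; the label below is the chart's default selector and may differ if you customized the release:

kubectl get pods -l app=nfs-subdir-external-provisioner   # provisioner pod should be Running
kubectl get storageclass                                  # the chart may also ship an SC of its own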

3.1.3 StorageClass manifest

[root@k8s-master-01 apisix]# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  namespace: default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # mark as the default StorageClass
provisioner: cluster.local/nfs-subdir-external-provisioner
parameters:
  server: 192.168.13.58
  path: /nfs/data
  readOnly: "false"
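Apply the manifest and confirm that nfs-storage is marked as the default StorageClass:

kubectl apply -f storageclass.yaml
kubectl get storageclass nfs-storage    # "(default)" should appear next to the name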

3.2 APISIX installation and configuration

Note: the chart needs a few modifications.

helm repo add apisix https://charts.apiseven.com    # add the APISIX chart repo if not already present
helm repo update
helm pull apisix/apisix
tar -xf apisix-0.9.3.tgz
Make the following changes (additions only) in apisix/values.yaml:
ingress-controller:                            ## enable the ingress-controller
  enabled: true
storageClass: "nfs-storage"                    ## use the nfs-storage StorageClass created above
accessMode:
  - ReadWriteOnce
helm package apisix
helm install apisix apisix-0.9.3.tgz --create-namespace  --namespace apisix

After the installation you will find the pod apisix-ingress-controller-6697f4849d-wdrr5 stuck in the Init state. You can search https://github.com/apache/apisix-helm-chart/issues (see https://github.com/apache/apisix-helm-chart/pull/284), or work it out from the pod logs. The root cause:

apisix-ingress-controller watches CRD resources on the k8s apiserver and talks to APISIX through the service apisix-admin on port 9180; APISIX then writes the rules into etcd. The logs, however, show the controller trying to reach apisix-admin.ingress-apisix.svc.cluster.local:9180, while the service and pods actually live in the apisix namespace. Both of the following therefore need to be changed to apisix-admin.apisix.svc.cluster.local:9180:
kubectl edit deployment apisix-ingress-controller -n apisix
kubectl edit configmap apisix-configmap -n apisix
Then delete the apisix-ingress-controller pod so that it is recreated with the new configuration.
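A minimal sketch of verifying the fix, assuming the admin endpoint is stored as a URL inside the configmap's embedded config file (field names vary between chart versions):

kubectl -n apisix get configmap apisix-configmap -o yaml | grep 'svc.cluster.local'
# expect: ...apisix-admin.apisix.svc.cluster.local:9180...
kubectl -n apisix rollout restart deployment apisix-ingress-controller   # alternative to deleting the pod by hand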

When checking the apisix-ingress-controller pod logs later on, errors like the following are common:


[Figure: apisix-ingress-controller error log]

This is usually caused by insufficient ServiceAccount permissions. Here is my working configuration; whenever new permission errors show up in the logs, extend it accordingly:

[root@k8s-master-01 apisix]# cat 12-sa.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: apisix-ingress-controller
  namespace: apisix
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: apisix-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - pods
      - replicationcontrollers
      - replicationcontrollers/scale
      - serviceaccounts
      - services
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - bindings
      - events
      - limitranges
      - namespaces/status
      - pods/log
      - pods/status
      - replicationcontrollers/status
      - resourcequotas
      - resourcequotas/status
    verbs:
      - get
      - list
      - watch
      - create
      - delete
      - update
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - controllerrevisions
      - daemonsets
      - deployments
      - deployments/scale
      - replicasets
      - replicasets/scale
      - statefulsets
      - statefulsets/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - daemonsets
      - deployments
      - deployments/scale
      - ingresses
      - networkpolicies
      - replicasets
      - replicasets/scale
      - replicationcontrollers/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apisix.apache.org
    resources:
      - apisixroutes
      - apisixroutes/status
      - apisixupstreams
      - apisixupstreams/status
      - apisixtlses
      - apisixtlses/status
      - apisixclusterconfigs
      - apisixclusterconfigs/status
      - apisixconsumers
      - apisixconsumers/status
      - apisixpluginconfigs
      - apisixpluginconfigs/status
    verbs:
      - get
      - list
      - watch
      - create
      - update
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apisix-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: apisix-clusterrole
subjects:
  - kind: ServiceAccount
    name: apisix-ingress-controller
    namespace: apisix

3.3 Dashboard installation

helm install apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace apisix
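To reach the dashboard from the master node, a port-forward works; this assumes the chart's default service name apisix-dashboard and service port 80:

kubectl -n apisix port-forward svc/apisix-dashboard 8080:80
# then browse to http://127.0.0.1:8080 (default credentials admin/admin unless changed)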

3.4 Viewing resources

Since the ingress-controller was already enabled in section 3.2, it is not installed a second time as the tutorial does.


Services in the apisix namespace:
svc->apisix-admin    9180       pod->apisix:9180    Admin API port for managing routes, streams, consumers, etc.
svc->apisix-gateway  80:30761   pod->apisix:9080    Entry point for application traffic
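The mapping above can be confirmed with kubectl; note that the NodePort (30761 here) is assigned at install time and will differ per cluster:

kubectl -n apisix get svc apisix-admin apisix-gateway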

3.5 Usage

3.5.1 Create a pod

kubectl run httpbin --image-pull-policy=IfNotPresent --image kennethreitz/httpbin --port 80
kubectl expose pod httpbin --port 80

3.5.2 Create the custom resource

[root@k8s-master-01 apisix]# cat ApisixRoute.yaml
apiVersion: apisix.apache.org/v2beta3 
kind: ApisixRoute
metadata:
  name: httpserver-route
spec:
  http:
  - name: httpbin
    match:
      hosts:
      - local.httpbin.org
      paths:
      - /*
    backends:
      - serviceName: httpbin
        servicePort: 80
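Apply the route and confirm the controller picked it up (the route lands in the default namespace, same as the httpbin pod):

kubectl apply -f ApisixRoute.yaml
kubectl get apisixroute httpserver-route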

Note that the apiVersion used here is "apisix.apache.org/v2beta3". Checking the apisix-ingress-controller logs, you may find the following error:

Failed to watch *v2beta1.ApisixRoute: failed to list *v2beta1.ApisixRoute: the server could not find the requested resource (get apisixroutes.apisix.apache.org)

In that case, edit the configmap:


[Figure: apisix-ingress-controller error log]

The value highlighted in the figure was v2beta1 before the change, which does not match the apiVersion we are using, hence the error; changing it to v2beta3 fixes it.
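To see which versions of the ApisixRoute CRD the cluster actually serves, and thus which apiVersion the controller should watch, you can query the CRD directly:

kubectl get crd apisixroutes.apisix.apache.org -o jsonpath='{.spec.versions[*].name}'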

3.5.3 Test

[root@k8s-master-01 apisix]# kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9080/get" -H 'Host: local.httpbin.org'
{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Host": "local.httpbin.org", 
    "User-Agent": "curl/7.79.1", 
    "X-Forwarded-Host": "local.httpbin.org"
  }, 
  "origin": "127.0.0.1", 
  "url": "http://local.httpbin.org/get"
}
[root@k8s-master-01 apisix]# curl http://192.168.13.12:30761/get -H "Host: local.httpbin.org"
{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Host": "local.httpbin.org", 
    "User-Agent": "curl/7.29.0", 
    "X-Forwarded-Host": "local.httpbin.org"
  }, 
  "origin": "20.10.151.128", 
  "url": "http://local.httpbin.org/get"
}

3.6 Canary releases

# Reference used for this section
https://api7.ai/blog/traffic-split-in-apache-apisix-ingress-controller

3.6.1 The stable version

[root@k8s-master-01 canary]# cat 1-stable.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-stable-service
  namespace: canary
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http-port
  selector:
    app: myapp
    version: stable
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/public-registry-fzh/myapp:v1
        imagePullPolicy: IfNotPresent
        name: myapp-stable
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: APP_ENV
          value: stable

3.6.2 The canary version

[root@k8s-master-01 canary]# cat 2-canary.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-canary-service
  namespace: canary
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http-port
  selector:
    app: myapp
    version: canary
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/public-registry-fzh/myapp:v2
        imagePullPolicy: IfNotPresent
        name: myapp-canary
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: APP_ENV
          value: canary
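Both manifests target the canary namespace, so create it first and then apply them:

kubectl create namespace canary
kubectl apply -f 1-stable.yaml -f 2-canary.yaml
kubectl -n canary get pods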

3.6.3 Weight-based canary release

[root@k8s-master-01 canary]# cat 3-apisixroute-weight.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute
  namespace: canary 
spec:
  http:
  - name: myapp-canary-rule
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
      weight: 10
    - serviceName: myapp-canary-service
      servicePort: 80
      weight: 5

Test:


[Figure: weight-based canary release test]

The stable-to-canary traffic ratio is roughly 2:1, matching the 10:5 weights above.
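A rough sketch for measuring the split from the master node, assuming each version's response body identifies it (contains v1 or v2) and using the gateway NodePort from section 3.4:

for i in $(seq 1 30); do
  curl -s http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"
done | grep -o 'v[12]' | sort | uniq -c    # expect roughly 20x v1 vs 10x v2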

3.6.4 Priority-based canary release

Traffic is routed preferentially to the rule with the higher priority value.

[root@k8s-master-01 canary]# kubectl apply -f 4-ap.yaml
apisixroute.apisix.apache.org/myapp-canary-apisixroute2 created
[root@k8s-master-01 canary]# cat 4-ap.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute2
  namespace: canary
spec:
  http:
  - name: myapp-stable-rule2
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule2
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80

Test:


[Figure: priority-based canary release test]

All traffic goes to myapp-canary-service, whose rule has the higher priority.
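A single curl against the gateway NodePort is enough to see the effect; with the rules above, every request should be answered by the canary backend:

curl http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"   # always served by myapp-canary-service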

3.6.5 Parameter-based canary release

[root@k8s-master-01 canary]# cat vars.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute3
  namespace: canary 
spec:
  http:
  - name: myapp-stable-rule3
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule3
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
      exprs:
      - subject:
          scope: Query
          name: id
        op: In
        set:
        - "3"
        - "13"
        - "23"
        - "33"
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80

Test:


[Figure: condition-based canary release test]

Traffic matching the condition is routed to myapp-canary-service; everything else goes to myapp-stable-service.
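Example requests against the gateway NodePort: an id value inside the set hits the canary rule, anything else falls through to the stable rule:

curl "http://192.168.13.12:30761/?id=3" -H "Host: myapp.fengzhihai.cn"   # id in set     -> canary
curl "http://192.168.13.12:30761/?id=4" -H "Host: myapp.fengzhihai.cn"   # id not in set -> stable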

3.6.6 Header-based canary release

[root@k8s-master-01 canary]# cat canary-header.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute3
  namespace: canary 
spec:
  http:
  - name: myapp-stable-rule3
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule3
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
      exprs:
      - subject:
          scope: Header
          name: canary
        op: RegexMatch
        value: ".*myapp.*"
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80

Test:


[Figure: header-based canary release test]
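To exercise the header rule, send a canary header whose value matches the regex .*myapp.*. Note that this manifest reuses the name myapp-canary-apisixroute3, so applying it replaces the parameter-based route from 3.6.5:

curl http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn" -H "canary: xxmyappxx"   # header matches -> canary
curl http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"                          # no header     -> stable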
