Learning Kubernetes: Helm

Helm is a command-line client tool for Kubernetes and the client side of Tiller. Tiller is a daemon that receives Helm's requests and talks to the apiserver, which performs the actual resource creation. Helm is to Kubernetes roughly what yum is to Linux.

Helm packages Kubernetes resources into charts. To use a chart we download it locally and deploy an instance from that local chart; the deployed instance is called a release. Each chart contains a values.yaml that supplies values to the templates at install time, and these parameters can be overridden.

chart: a Helm package. For example, to deploy nginx we need a deployment YAML and a service YAML; together these manifest files form a Helm package, and in Helm such a bundle of YAML manifests is called a chart. The values.yaml file assigns values to the templated fields, which lets us customize the installation.

Chart developers write and customize the templates; chart users only need to edit values.yaml.

Helm packages Kubernetes resources into a chart, manages the dependencies between charts, and distributes them through chart repositories. Releases are configurable: values.yaml makes each deployment customizable at install time. When a chart version is updated, Helm supports rolling upgrades and one-click rollback. Even so, it is not well suited to production use unless you are able to author your own charts.

Helm is a Kubernetes project; download it from:

https://github.com/helm/helm/releases

Pick the Linux amd64 archive listed there (with its checksum); after downloading, extract and install it as shown below.

Helm official site:

https://helm.sh/

Official Helm chart hub:

https://hub.kubeapps.com/

Core terms (Helm itself is written in Go):

Chart: a Helm package;
Repository: a charts repository (an http/https server) that stores charts and provides the YAML manifests needed to deploy applications on Kubernetes;
Release: an instance of a particular Chart deployed to the target cluster; Chart -> Config -> Release

Architecture:
helm: the client; manages the local chart cache, manages charts, and talks to the Tiller server to send charts and to install, query, and uninstall releases
Tiller: the server side; receives the charts and config sent by helm and merges them to generate a release
chart -> values assigned via values.yaml -> release instance

Install the Helm client

[root@master ~]# tar xf helm-v2.13.1-linux-amd64.tar.gz
[root@master ~]# cd linux-amd64/
[root@master linux-amd64]# cp helm /usr/local/bin/
[root@master linux-amd64]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find tiller
# Tiller is not installed yet

Install the Helm server side: Tiller

The rbac.yaml below is based on:

https://github.com/helm/helm/blob/master/docs/rbac.md

[root@master ~]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
[root@master ~]# kubectl apply -f rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
[root@master ~]# kubectl get sa -n kube-system | grep tiller
tiller                               1         7m18s
[root@master ~]# docker load -i tiller_2_13_1.tar.gz
[root@master ~]# vim tiller.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: helm
      name: tiller
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      serviceAccount: tiller
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.13.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}
[root@master ~]# kubectl apply -f tiller.yaml
deployment.apps/tiller-deploy created
service/tiller-deploy created
[root@master ~]# kubectl get pod -n kube-system | grep tiller
tiller-deploy-7bd89687c8-tjnkp     1/1     Running                0          28s
[root@master ~]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
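
For reference, instead of hand-writing the Deployment and Service above, helm v2 can deploy Tiller itself. A minimal sketch, assuming the tiller image has already been loaded locally as shown:

helm init --service-account tiller --tiller-image gcr.io/kubernetes-helm/tiller:v2.13.1 --skip-refresh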

Create a chart

[root@master ~]# helm create test
Creating test
[root@master ~]# cd test/
[root@master test]# ls
charts  Chart.yaml  templates  values.yaml
[root@master test]# tree
.
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 8 files

Chart.yaml describes the chart's attributes and stores the package's metadata: the package name, version, and so on. It has nothing to do with deploying the application itself; it only records information about the chart.

[root@master test]# cat Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: test
version: 0.1.0

templates holds the template files that define the Kubernetes YAML manifests; they make heavy use of Go template syntax, much like templates in an Ansible playbook.

[root@master templates]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "test.name" . }}
    helm.sh/chart: {{ include "test.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "test.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "test.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
    {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
    {{- end }}
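
The test.name and test.fullname references above are helper templates defined in templates/_helpers.tpl. A representative excerpt (abridged from the helm create scaffold; the exact content may differ between Helm versions):

{{/* Expand the name of the chart. */}}
{{- define "test.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}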

values.yaml provides a value for each templated field. For example, the default image repository here, repository: nginx, is consumed by templates/deployment.yaml as
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

[root@master test]# cat values.yaml
# Default values for test.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []

  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
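
Any of these defaults can be overridden at install time without editing the file, either inline or from an extra values file (the tag and file name below are just examples):

helm install . --set replicaCount=2 --set image.tag=1.16
helm install . -f my-values.yaml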

helm install . deploys the chart as a Kubernetes application

Run the command to create the application described by the files in the test directory:

[root@master test]# helm install .
NAME:   halting-lizard  # a randomly generated release name; the pod names will match it
LAST DEPLOYED: Thu Jan  6 09:00:32 2022
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                 READY  UP-TO-DATE  AVAILABLE  AGE
halting-lizard-test  0/1    1           0          1s

==> v1/Pod(related)
NAME                                 READY  STATUS             RESTARTS  AGE
halting-lizard-test-9dcd9757b-j2dvt  0/1    ContainerCreating  0         1s

==> v1/Service
NAME                 TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)  AGE
halting-lizard-test  ClusterIP  10.101.4.80  <none>       80/TCP   1s

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=test,app.kubernetes.io/instance=halting-lizard" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

[root@master test]# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
halting-lizard-test-9dcd9757b-j2dvt   1/1     Running   0          71s

# Running the commands from the NOTES above maps a host port to the pod's port 80

[root@master test]# export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=test,app.kubernetes.io/instance=halting-lizard" -o jsonpath="{.items[0].metadata.name}")
[root@master test]# kubectl port-forward $POD_NAME 8088:80
Forwarding from 127.0.0.1:8088 -> 80
Forwarding from [::1]:8088 -> 80
Handling connection for 8088
Access it from a new terminal:
[root@master test]# curl 127.0.0.1:8088
Welcome to nginx!

helm list shows the existing releases

[root@master test]# helm list
NAME            REVISION   UPDATED                    STATUS     CHART       APP VERSION   NAMESPACE
halting-lizard  1          Thu Jan  6 09:00:32 2022   DEPLOYED   test-0.1.0  1.0           default

helm delete removes the specified release (as listed by helm list) and also deletes the resources it deployed on Kubernetes

[root@master ~]# helm delete halting-lizard
release "halting-lizard" deleted
[root@master ~]# kubectl get pod
NAME                                  READY   STATUS        RESTARTS   AGE
halting-lizard-test-9dcd9757b-j2dvt   0/1     Terminating   0          20m

helm package packages a chart

### The generated .tgz can be copied to any server; once it is served from a chart repository, the chart can be retrieved with helm fetch

[root@master ~]# helm package test
Successfully packaged chart and saved it to: /root/test-0.1.0.tgz
Error: stat /root/.helm/repository/local: no such file or directory
[root@master ~]# ls
  test  test-0.1.0.tgz
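
helm v2 also ships a minimal repository server for locally packaged charts. A sketch, assuming the packaged .tgz has been placed under /root/.helm/repository/local, the directory helm serve publishes (it listens on 127.0.0.1:8879 by default):

helm serve &
helm repo add local http://127.0.0.1:8879/charts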

helm repo list lists the chart repositories

[root@master ~]# mkdir -p /root/.helm/repository/
[root@master ~]# mv repositories.yaml /root/.helm/repository/
[root@master ~]# cat /root/.helm/repository/repositories.yaml
apiVersion: v1
generated: 2019-04-03T21:52:41.714422328-04:00
repositories:
- caFile: ""
  cache: /root/.helm/repository/cache/stable-index.yaml
  certFile: ""
  keyFile: ""
  name: stable
  password: ""
  url: https://cnych.github.io/kube-charts-mirror
  username: ""
[root@master ~]# helm repo list
NAME    URL
stable  https://cnych.github.io/kube-charts-mirror

helm repo add adds a repo; afterwards run helm repo update to refresh the index

[root@master ~]# mkdir -p /root/.helm/repository/cache/
[root@master ~]# mv bitnami-index.yaml  /root/.helm/repository/cache/
[root@master ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
[root@master ~]# helm repo list
NAME    URL
stable  https://cnych.github.io/kube-charts-mirror
bitnami https://charts.bitnami.com/bitnami
[root@master ~]# helm repo update

Searching for charts

helm search lists all charts
helm search mysql searches for mysql charts (likewise redis, memcached, jenkins, and so on)
helm inspect bitnami/mysql shows detailed information about the given chart

[root@master ~]# helm search |wc -l
347
[root@master ~]# helm search mysql
NAME           CHART VERSION   APP VERSION   DESCRIPTION
bitnami/mysql  8.8.20          8.0.27        Chart to create.....
[root@master ~]# helm inspect bitnami/mysql  |head -5
annotations:
  category: Database
apiVersion: v2
appVersion: 8.0.27
description: Chart to create a Highly available MySQL cluster
....

Deploy memcached

[root@master ~]# helm search memcached
[root@master ~]#  helm fetch stable/memcached
[root@master ~]# ls
memcached-2.3.1.tgz            
[root@master ~]# tar xf memcached-2.3.1.tgz
[root@master ~]# cd memcached/
[root@master memcached]# ls
Chart.yaml  README.md  templates  values.yaml
[root@master memcached]# helm  install .
Error: validation failed: unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1"
# The apiVersion was changed from apps/v1beta1 to apps/v1 to fix this:
[root@master memcached]# head -1 templates/statefulset.yaml
apiVersion: apps/v1
[root@master memcached]# helm  install .
Error: release worn-squirrel failed: StatefulSet.apps "worn-squirrel-memcached" is invalid: [spec.selector: Required value, spec.template.metadata.labels: Invalid value: map[string]string{"app":"worn-squirrel-memcached", "chart":"memcached-2.3.1", "heritage":"Tiller", "release":"worn-squirrel"}: `selector` does not match template `labels`]
### The selector block is missing and must be added (lines 10-14 below); it has to match the labels on lines 20 and 22
    10  spec:
    11    selector:
    12      matchLabels:
    13        app: {{ template "memcached.fullname" . }}
    14        release: "{{ .Release.Name }}"
    15    serviceName: {{ template "memcached.fullname" . }}
    16    replicas: {{ .Values.replicaCount }}
    17    template:
    18      metadata:
    19        labels:
    20          app: {{ template "memcached.fullname" . }}
    21          chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    22          release: "{{ .Release.Name }}"
### Set the replica count to 1 first, since this is a single-node cluster
[root@master memcached]# grep replicaCount:  values.yaml
replicaCount: 1
[root@master memcached]# helm  install .
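
The install produces another randomly named release. A quick way to verify it, as a sketch (substitute the release name that helm list reports; the release label comes from the template excerpt above):

helm list
kubectl get statefulset,pods -l release=<release-name>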

Deploy rabbitmq-ha

[root@master ~]# helm search rabbitmq-ha
[root@master ~]# helm fetch stable/rabbitmq-ha
[root@master ~]# tar xf rabbitmq-ha-1.14.0.tgz
[root@master ~]# cd rabbitmq-ha/
[root@master rabbitmq-ha]# vim templates/statefulset.yaml
[root@master rabbitmq-ha]# head -1 templates/statefulset.yaml
apiVersion: apps/v1
[root@master rabbitmq-ha]# cat -n templates/statefulset.yaml
## Add lines 14-17 to match the labels on lines 26-27
    13  spec:
    14    selector:
    15      matchLabels:
    16        app: {{ template "rabbitmq-ha.name" . }}
    17        release: {{ .Release.Name }}
    18    podManagementPolicy: {{ .Values.podManagementPolicy }}
    19    serviceName: {{ template "rabbitmq-ha.fullname" . }}-discovery
    20    replicas: {{ .Values.replicaCount }}
    21    updateStrategy:
    22      type: {{ .Values.updateStrategy }}
    23    template:
    24      metadata:
    25        labels:
    26          app: {{ template "rabbitmq-ha.name" . }}
    27          release: {{ .Release.Name }}
### Set the replica count to 1 first, since this is a single-node cluster
[root@master rabbitmq-ha]# grep replicaCount:  values.yaml
replicaCount: 1
[root@master rabbitmq-ha]# helm install .
NOTES:
** Please be patient while the chart is being deployed **

  Credentials:

    Username      : guest
    Password      : $(kubectl get secret --namespace default wizened-tarsier-rabbitmq-ha -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
    ErLang Cookie : $(kubectl get secret --namespace default wizened-tarsier-rabbitmq-ha -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)

Then create a service.yaml to expose the RabbitMQ management port:

cat  service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-management
  labels:
    app: rabbitmq-ha
spec:
  ports:
  - port: 15672
    name: http
  selector:
    app: rabbitmq-ha
  type: NodePort
kubectl apply -f service.yaml
kubectl  get  svc
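
To find the NodePort that was allocated, query the Service created above (standard kubectl jsonpath; the service name matches the manifest):

kubectl get svc rabbitmq-management -o jsonpath='{.spec.ports[0].nodePort}'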

Enter http://<node-ip>:<nodeport> in a browser to log in to the RabbitMQ management console.

Decode the password:

kubectl get secret --namespace default wizened-tarsier-rabbitmq-ha -o jsonpath="{.data.rabbitmq-password}" | base64 --decode

Common Helm commands

(1) Release-related:

helm upgrade [RELEASE] [CHART] [flags] upgrades a release

helm delete RELEASE deletes a release

helm rollback [flags] [RELEASE] [REVISION] rolls back to a previous revision

helm install . creates a release instance

helm history RELEASE shows a release's revision history
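
A typical upgrade-and-rollback sequence, as a sketch (the release name is the one from the earlier example and the tag is arbitrary):

helm upgrade halting-lizard . --set image.tag=1.16
helm history halting-lizard
helm rollback halting-lizard 1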

(2) Chart-related:

helm search searches for charts

helm inspect shows detailed information about a chart

helm fetch downloads a chart

helm package packages a chart

Helm template syntax

The rendered YAML can be obtained with:

cd rabbitmq-ha

helm install --debug --dry-run ./
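
helm v2 can also render templates entirely client-side, without contacting Tiller; the -x/--execute flag limits rendering to a single template file:

helm template .
helm template . -x templates/statefulset.yaml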
