Like YUM and APT on Linux, Helm is the package manager for Kubernetes.
Helm is a single binary tool used to install, upgrade, and uninstall applications on Kubernetes.
A Helm Chart is a tgz package, analogous to an Android APK.
A Kubernetes application is packaged into a Chart and installed into the cluster with Helm.
You need a working Kubernetes cluster first; Helm is the cluster's package manager, used mainly to manage the various chart packages.
Without Helm, deployment looks like: pull code -> build and package -> build an image -> prepare a pile of deployment YAML files (deployment, service, ingress, etc.) -> `kubectl apply` them to the cluster. Two problems follow:
1) As applications multiply, a large number of YAML files must be maintained.
2) One set of YAML files cannot create multiple environments; manual edits are needed for each.
For example, environments are usually split into dev, staging, and production. After deploying dev, standing up staging and production requires copying the YAML twice more and editing each copy by hand.
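To make the pain concrete: the environments usually differ only in a handful of fields, yet without templating each difference forces a full copy of the YAML. A sketch (names and values are illustrative):

```yaml
# dev vs production: the same Deployment, with only two fields changing
# dev:        replicas: 1, image: myapp:dev
# production: replicas: 4, image: myapp:1.2.0
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1              # <- differs per environment
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:dev   # <- differs per environment
```

Helm turns exactly these drifting fields into variables so one chart serves every environment.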
Helm is the package manager for Kubernetes; it makes it easy to find, share, and build Kubernetes applications.
helm is to Kubernetes what yum is to CentOS: it gathers all the resources of a service into one chart package, lets one set of resources be released to multiple environments, and combines all of an application's resources and deployment information into a single deployable package.
Like Linux package managers such as yum/apt for rpm packages, it makes it convenient to deploy previously packaged YAML files onto Kubernetes.
A chart is Helm's bundled package: it contains all the Kubernetes declaration templates for one application, similar to yum's rpm package or apt's deb package.
In other words:
Helm deploys packaged applications to Kubernetes by building them into Charts. A Chart bundles all pre-configured application resources, with all their versions, into one easy-to-manage package.
Helm packs Kubernetes resources (deployments, services, ingresses, etc.) into a chart; charts are kept in a chart repository, which is used to store and share them.
helm: the client component, responsible for communicating with the Kubernetes apiserver.
Repository: a repository for publishing and storing chart packages, similar to a yum repository or a Docker registry.
Release: an instance deployed from a chart package. Every application deployed to Kubernetes via a chart produces a unique Release; deploying the same chart several times produces several Releases.
In other words:
once the YAML files are deployed, Helm records the version of that deployment and maintains a release version state; through the Release instance it creates the concrete pod, deployment, and other resources for us.
In Helm 2 the client communicated with Kubernetes through the Tiller component; Helm 3 removed Tiller and talks to the Kubernetes apiserver directly using the kubeconfig file.
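For reference, the kubeconfig that Helm 3 reads (by default `~/.kube/config`) is an ordinary Kubernetes client config; a minimal sketch with placeholder names and credentials:

```yaml
# minimal kubeconfig sketch (names and credentials are placeholders);
# Helm 3 uses this to reach the apiserver directly, no Tiller required
apiVersion: v1
kind: Config
clusters:
- name: demo-cluster
  cluster:
    server: https://192.168.85.136:6443
    certificate-authority-data: <base64-ca>       # placeholder
contexts:
- name: demo
  context:
    cluster: demo-cluster
    user: demo-admin
current-context: demo
users:
- name: demo-admin
  user:
    client-certificate-data: <base64-cert>        # placeholder
    client-key-data: <base64-key>                 # placeholder
```

Whatever context `kubectl` uses, Helm 3 uses the same one.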
Helm 2 -> Helm 3 command changes:
# helm delete release-name --purge   ------------>>  helm uninstall release-name
# helm inspect release-name          ------------>>  helm show release-name
# helm fetch chart-name              ------------>>  helm pull chart-name
# helm install ./mychart --generate-name    (Helm 3 requires a release name, or --generate-name)
[root@k8s-master-136 ~]# kubectl get node
NAME             STATUS   ROLES                  AGE    VERSION
k8s-master-136   Ready    control-plane,master   261d   v1.21.0
k8s-node-135     Ready    <none>                 261d   v1.21.0
k8s-node-137     Ready    <none>                 261d   v1.21.0
Every Helm release provides binaries for a range of operating systems, which can be downloaded and installed manually.
Download the version you need:
wget https://repo.huaweicloud.com/helm/v3.5.4/helm-v3.5.4-linux-amd64.tar.gz
Unpack it:
tar -zxvf helm-v3.5.4-linux-amd64.tar.gz
Find the helm binary in the unpacked directory and move it to the desired location:
mv linux-amd64/helm /usr/local/bin/helm
Then you can run the client and add a stable repository: helm help
# helm create mychart
[root@k8s-master-136 samve]# tree mychart
mychart                          # name of the chart package
├── charts                       # directory for subcharts; holds every chart this chart depends on
├── Chart.yaml                   # basic chart information (name, description, version, etc.); these values can be referenced by the files under templates/
├── templates                    # template directory; holds all the YAML templates that deploy the application
│   ├── deployment.yaml          # template for the Deployment object
│   ├── _helpers.tpl             # template helpers reusable across the whole chart; snippets that the YAML files under templates/ may all need
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt                # help text for the chart; shown to the user after `helm install` to explain how to use it
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests                    # test files; after the chart is deployed (e.g. a web app) they can probe a connection to verify the deployment
│       └── test-connection.yaml
└── values.yaml                  # file used to render the templates (the variable file); defines the variables that the YAML under templates/ can reference
# values.yaml stores the values of the variables used in the template files under templates/; the variables are defined there precisely so those templates can reference them.
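To make `_helpers.tpl` concrete, here is a minimal sketch of a named template (the name `mychart.labels` is hypothetical, not part of the generated chart): a snippet is declared once with `define` and reused anywhere in the chart.

```yaml
{{/* _helpers.tpl: declare a reusable named template (name is hypothetical) */}}
{{- define "mychart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
```

Any file under templates/ can then pull these labels in with `{{ include "mychart.labels" . | nindent 4 }}`.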
[root@k8s-master-136 mychart]# cat Chart.yaml
apiVersion: v2
name: mychart
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
Field reference:
- apiVersion: # chart API version, usually "v2" (required). For charts that only support Helm 3, set it to v2; for charts that support both Helm 3 and Helm 2, it can be set to v1. Note that appVersion (below) has no direct relationship to the version field; it is simply a way to record the application's version.
- name: # chart name (required)
- version: # chart package version (required); bump this field whenever you update the chart package
- kubeVersion: # supported Kubernetes versions (optional); Helm validates this at install time
- type: # chart type (optional); the v2 API added this field to distinguish chart types, with values application (the default) and library, i.e. "application charts" and "library charts"
- description: # description of the project (optional)
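As a sketch, a filled-in Chart.yaml using the optional kubeVersion field might look like this (the version constraint is an illustrative assumption, not from the generated chart):

```yaml
apiVersion: v2
name: mychart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
# hypothetical constraint: refuse to install on clusters older than 1.16
kubeVersion: ">=1.16.0-0"
```

With this set, `helm install` fails fast on an unsupported cluster instead of producing resources the apiserver rejects.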
A chart with the complete directory structure can be created with the helm create command:
[root@k8s-master-136 samve]# helm create mychart
[root@k8s-master-136 samve]# tree mychart
mychart
├── charts
├── Chart.yaml
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── hpa.yaml
│ ├── ingress.yaml
│ ├── NOTES.txt
│ ├── serviceaccount.yaml
│ ├── service.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml
[root@k8s-master-136 samve]# kubectl get node
NAME             STATUS   ROLES                  AGE    VERSION
k8s-master-136   Ready    control-plane,master   261d   v1.21.0
k8s-node-135     Ready    <none>                 261d   v1.21.0
k8s-node-137     Ready    <none>                 261d   v1.21.0
1) Create a chart package:
[root@k8s-master-136 samve]# helm create mychart
Creating mychart
[root@k8s-master-136 samve]# cd mychart/
[root@k8s-master-136 mychart]# ls
charts Chart.yaml templates values.yaml
[root@k8s-master-136 mychart]# cd templates/
[root@k8s-master-136 templates]# ls
deployment.yaml _helpers.tpl hpa.yaml ingress.yaml NOTES.txt serviceaccount.yaml service.yaml tests
[root@k8s-master-136 templates]# rm -rf *
[root@k8s-master-136 templates]# vim configmap.yaml
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mychart-configmap
data:
  myvalue: "hello world"
2) Create a release instance:
[root@k8s-master-136 samve]# helm install myconfigmap ./mychart/
NAME: myconfigmap
LAST DEPLOYED: Sun Oct 15 16:25:07 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
3) Inspect the result and verify that the ConfigMap was created in the cluster
[root@k8s-master-136 samve]# helm get manifest myconfigmap
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mychart-configmap
data:
  myvalue: "hello world"
After a successful install, `helm get manifest <release-name>` shows the manifests of the release as published to Kubernetes.
[root@k8s-master-136 samve]# helm list |grep myconfigmap    # list the created release instances
myconfigmap default 1 2023-10-15 16:25:07.264394056 +0800 CST deployed mychart-0.1.0 1.16.0
[root@k8s-master-136 samve]# kubectl get configmap |grep mychart-configmap    # check the ConfigMap created by the helm release
mychart-configmap 1 4m13s
4) Delete the release instance
Use `helm uninstall <release-name>` to delete a release; just pass the release name.
[root@k8s-master-136 samve]# helm uninstall myconfigmap    # delete the release instance named myconfigmap
release "myconfigmap" uninstalled
[root@k8s-master-136 samve]# helm list|grep myconfigmap
[root@k8s-master-136 samve]# kubectl get configmap | grep mychart-configmap
[root@k8s-master-136 samve]# kubectl get node
NAME             STATUS   ROLES                  AGE    VERSION
k8s-master-136   Ready    control-plane,master   261d   v1.21.0
k8s-node-135     Ready    <none>                 261d   v1.21.0
k8s-node-137     Ready    <none>                 261d   v1.21.0
1) Create a chart package:
[root@k8s-master-136 samve]# helm create mychart
Creating mychart
[root@k8s-master-136 samve]# cd mychart/
[root@k8s-master-136 mychart]# ls
charts Chart.yaml templates values.yaml
[root@k8s-master-136 mychart]# cd templates/
[root@k8s-master-136 templates]# ls
deployment.yaml _helpers.tpl hpa.yaml ingress.yaml NOTES.txt serviceaccount.yaml service.yaml tests
[root@k8s-master-136 templates]# rm -rf *
[root@k8s-master-136 templates]# vim configmap.yaml
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap    # the leading dot starts the lookup at the top-level scope: find the Release object, then its Name field
  # i.e. a built-in object's value (the release name) is spliced in to form the ConfigMap's name
data:
  myvalue: {{ .Values.MY_VALUE }}
> /home/samve/mychart/values.yaml
vim /home/samve/mychart/values.yaml
values.yaml
MY_VALUE: "Hello World"
Why reference built-in object variables or other variables (e.g. from values.yaml)?
If metadata.name were a fixed value, the template could not be deployed more than once in the same cluster. So at every chart install we set metadata.name to the release name automatically: the release name differs for every deployment, the resource names inside differ accordingly, and the chart can therefore be deployed repeatedly.
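As a sketch of the effect: with `{{ .Release.Name }}-configmap` in the template, two installs of the same chart render two non-conflicting resources (the release names below are illustrative):

```yaml
# helm install app-a ./mychart  ->  metadata.name: app-a-configmap
# helm install app-b ./mychart  ->  metadata.name: app-b-configmap
# rendered manifest for release "app-a":
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-a-configmap
data:
  myvalue: "Hello World"
```

Both releases coexist in the same namespace because their resource names never collide.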
2) Create a release instance:
Install a release with helm, giving the release instance name myconfigmap2 and the chart directory ./mychart
[root@k8s-master-136 samve]# helm install myconfigmap2 ./mychart
NAME: myconfigmap2
LAST DEPLOYED: Sun Oct 15 17:03:40 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
3) Inspect the result and verify that the ConfigMap was created in the cluster
[root@k8s-master-136 samve]# helm list |grep myconfigmap2
myconfigmap2 default 1 2023-10-15 17:03:40.877607951 +0800 CST deployed mychart-0.1.0 1.16.0
[root@k8s-master-136 samve]# kubectl get configmap | grep myconfigmap2
myconfigmap2-configmap 1 71s
4) Delete the release instance
Use `helm uninstall <release-name>`; just pass the release name.
[root@k8s-master-136 samve]# helm uninstall myconfigmap2
release "myconfigmap2" uninstalled
[root@k8s-master-136 samve]# helm list | grep myconfigmap2
[root@k8s-master-136 samve]# kubectl get configmap | grep myconfigmap2
helm provides a command that renders templates without performing any installation; use it to check whether templates render as intended.
Usage: helm install <release-name> <chart-dir> --debug --dry-run
Example: # helm install myconfigmap3 ./mychart/ --debug --dry-run
A release instance can be installed in several ways:
1) From a chart repository added locally (install from the official repository)
2) From a chart package pulled from a repository (offline install from a downloaded archive)
3) From the directory obtained by unpacking a pulled chart package (offline install from the unpacked directory)
4) Directly from a repository package at a network address
# helm install db stable/mysql    # from the official repository added locally; db is the release name
# helm install my-tomcat test-repo/tomcat    # from a community repository added locally; my-tomcat is the release name
# helm install db mysql-1.6.9.tgz    # from a chart archive pulled from a repository (offline install from a local archive); db is the release name
# helm install db mysql    # from the unpacked chart directory (offline install); db is the release name
# helm install db http://.../mysql-1.6.9.tgz    # directly from a package at a network address (download server); db is the release name
Uninstall a release instance:
# helm uninstall <release-name>
Release 对象
Values 对象
Chart 对象
Capabilities 对象
Template 对象
1) Release object
Release.Name: the release's name
Release.Namespace: the release's namespace
Release.IsUpgrade: true if the current operation is an upgrade or rollback
Release.IsInstall: true if the current operation is an install
Release.Revision: the revision number of this release; 1 on first install, incremented by every upgrade or rollback
Release.Service: the service rendering the current template; always "Helm"
2) Values object: exposes the contents of values.yaml (the file that defines variables); empty by default. Through the Values object you can read the value of any variable defined in values.yaml.
Values key/value pair        How to reference it
name1: test1                 .Values.name1
info:
  name2: test2               .Values.info.name2
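For example, with the values above, a template reads the keys by dotted path (a minimal sketch; the data keys here are hypothetical):

```yaml
# hypothetical template excerpt: the dotted path walks the YAML hierarchy
data:
  top: {{ .Values.name1 }}          # renders as: top: test1
  nested: {{ .Values.info.name2 }}  # renders as: nested: test2
```

Values passed at install time with `--set` or `-f` override the same paths in values.yaml.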
3) Chart object: reads the contents of Chart.yaml
.Chart.Name: the chart's name
.Chart.Version: the chart's version
4) Capabilities object: provides information about the Kubernetes cluster.
The object offers the following:
.Capabilities.APIVersions: the set of API versions supported by the cluster
.Capabilities.APIVersions.Has $version: checks whether the given version or resource is available in the cluster, e.g. apps/v1/Deployment
.Capabilities.KubeVersion and .Capabilities.KubeVersion.Version: the Kubernetes version
.Capabilities.KubeVersion.Major: the Kubernetes major version
.Capabilities.KubeVersion.Minor: the Kubernetes minor version
5) Template object: information about the current template; it contains two fields
.Template.Name: the name and path of the current template (e.g. mychart/templates/mytemplate.yaml)
.Template.BasePath: the path of the current template directory (e.g. mychart/templates)
1) Prepare the environment: a Kubernetes cluster
[root@k8s-master-136 samve]# kubectl get node
NAME             STATUS   ROLES                  AGE    VERSION
k8s-master-136   Ready    control-plane,master   261d   v1.21.0
k8s-node-135     Ready    <none>                 261d   v1.21.0
k8s-node-137     Ready    <none>                 261d   v1.21.0
2) Create a chart package (use Helm 3 to publish a ConfigMap into the cluster; publishing other applications works the same way)
[root@k8s-master-136 samve]# helm create mychart
Creating mychart
[root@k8s-master-136 samve]# cd mychart/
[root@k8s-master-136 mychart]# cd templates/
[root@k8s-master-136 templates]# rm -rf *
3) Write the YAML files you need, using the built-in objects above to pull in the relevant values
(1) The Release object: describes the release itself.
[root@k8s-master-136 templates]# > /home/samve/mychart/templates/configmap.yaml
[root@k8s-master-136 templates]# vim /home/samve/mychart/templates/configmap.yaml
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  namespace: {{ .Release.Namespace }}
data:
  value1: "{{ .Release.IsUpgrade }}"   # true if the current operation is an upgrade or rollback
  value2: "{{ .Release.IsInstall }}"   # true if the current operation is an install
  value3: "{{ .Release.Revision }}"    # the revision number of this release
  value4: "{{ .Release.Service }}"     # the service rendering the current template
[root@k8s-master-136 samve]# helm install myconfigmap1 ./mychart/ --debug --dry-run
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/samve/mychart
NAME: myconfigmap1
LAST DEPLOYED: Sun Oct 15 17:47:17 2023
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: false
  maxReplicas: 100
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: ""
imagePullSecrets: []
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths:
    - backend:
        serviceName: chart-example.local
        servicePort: 80
      path: /
  tls: []
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
replicaCount: 1
resources: {}
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: ""
tolerations: []
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap1-configmap
  namespace: default
data:
  value1: "false"   # true if the current operation is an upgrade or rollback
  value2: "true"    # true if the current operation is an install
  value3: "1"       # the revision number of this release
  value4: "Helm"    # the service rendering the current template
(2) The Values object: exposes the contents of values.yaml (the file that defines variables)
Clear out the stock content and set the variable names and values we need (the defaults are only scaffolding, for reference):
[root@k8s-master-136 samve]# > /home/samve/mychart/values.yaml
[root@k8s-master-136 samve]# vim /home/samve/mychart/values.yaml
name1: test1
info:
  name2: test2
[root@k8s-master-136 samve]# > /home/samve/mychart/templates/configmap.yaml
[root@k8s-master-136 samve]# vim /home/samve/mychart/templates/configmap.yaml
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  namespace: {{ .Release.Namespace }}
data:
  value1: "{{ .Values.name1 }}"       # value of a variable defined in values.yaml
  value2: "{{ .Values.info.name2 }}"  # value of a nested variable defined in values.yaml
[root@k8s-master-136 samve]# helm install myconfigmap2 ./mychart/ --debug --dry-run    # dry run: renders only, does not actually install
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/samve/mychart
NAME: myconfigmap2
LAST DEPLOYED: Sun Oct 15 18:35:36 2023
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
info:
  name2: test2
name1: test1
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap2-configmap
  namespace: default
data:
  value1: "test1"   # value of a variable defined in values.yaml
  value2: "test2"   # value of a nested variable defined in values.yaml
(3) The Chart object: reads the contents of Chart.yaml
[root@k8s-master-136 samve]# cat /home/samve/mychart/Chart.yaml | grep -vE "#|^$"    # first look at what Chart.yaml defines
apiVersion: v2
name: mychart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
[root@k8s-master-136 samve]# vim /home/samve/mychart/templates/configmap.yaml    # write the template file we need
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  namespace: {{ .Release.Namespace }}
data:
  value1: "{{ .Chart.Name }}"     # the chart's name, read from Chart.yaml
  value2: "{{ .Chart.Version }}"  # the chart's version, read from Chart.yaml
helm install myconfigmap3 ./mychart/ --debug --dry-run
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/samve/mychart
NAME: myconfigmap3
LAST DEPLOYED: Sun Oct 15 18:23:30 2023
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: false
  maxReplicas: 100
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: ""
imagePullSecrets: []
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths:
    - backend:
        serviceName: chart-example.local
        servicePort: 80
      path: /
  tls: []
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
replicaCount: 1
resources: {}
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: ""
tolerations: []
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap3-configmap
  namespace: default
data:
  value1: "mychart"   # the chart's name, read from Chart.yaml
  value2: "0.1.0"     # the chart's version, read from Chart.yaml
(4) The Capabilities object: provides information about the Kubernetes cluster. It offers the following:
[root@k8s-master-136 samve]# vim /home/samve/mychart/templates/configmap.yaml
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  namespace: {{ .Release.Namespace }}
data:
  value1: "{{ .Capabilities.APIVersions }}"         # the set of API versions supported by the cluster
  value2: "{{ .Capabilities.KubeVersion.Version }}" # the Kubernetes version
  value3: "{{ .Capabilities.KubeVersion.Major }}"   # the Kubernetes major version
  value4: "{{ .Capabilities.KubeVersion.Minor }}"   # the Kubernetes minor version
[root@k8s-master-136 samve]# helm install myconfigmap4 ./mychart/ --debug --dry-run
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/samve/mychart
NAME: myconfigmap4
LAST DEPLOYED: Sun Oct 15 18:01:27 2023
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: false
  maxReplicas: 100
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: ""
imagePullSecrets: []
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths:
    - backend:
        serviceName: chart-example.local
        servicePort: 80
      path: /
  tls: []
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
replicaCount: 1
resources: {}
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: ""
tolerations: []
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap4-configmap
  namespace: default
data:
  value1: "[discovery.k8s.io/v1beta1 extensions/v1beta1/Ingress storage.k8s.io/v1beta1/VolumeAttachment ... admissionregistration.k8s.io/v1 v1/LimitRange]"   # the set of API versions supported by the cluster (long output truncated)
  value2: "v1.21.0"   # the Kubernetes version
  value3: "1"         # the Kubernetes major version
  value4: "21"        # the Kubernetes minor version
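A common use of the Capabilities object (a sketch, not part of this chart) is to branch on whether an API is available, for example when choosing the Ingress apiVersion for clusters of different ages:

```yaml
# hypothetical template excerpt: pick the Ingress API the cluster supports
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else }}
apiVersion: networking.k8s.io/v1beta1
{{- end }}
kind: Ingress
```

This keeps one chart installable across Kubernetes versions instead of maintaining one chart per cluster version.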
(5) The Template object: information about the current template; it contains two fields
[root@k8s-master-136 samve]# > /home/samve/mychart/templates/configmap.yaml
[root@k8s-master-136 samve]# vim /home/samve/mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  namespace: {{ .Release.Namespace }}
data:
  value1: "{{ .Template.Name }}"     # the name and path of the current template (e.g. mychart/templates/configmap.yaml)
  value2: "{{ .Template.BasePath }}" # the path of the current template directory (e.g. mychart/templates)
[root@k8s-master-136 samve]# helm install myconfigmap5 ./mychart/ --debug --dry-run    # dry run: renders only, does not actually install
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/samve/mychart
NAME: myconfigmap5
LAST DEPLOYED: Sun Oct 15 18:41:35 2023
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
info:
  name2: test2
name1: test1
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap5-configmap
  namespace: default
data:
  value1: "mychart/templates/configmap.yaml"   # the name and path of the current template
  value2: "mychart/templates"                  # the path of the current template directory
- version: show the helm client version
- repo: add, list, remove, update, and index chart repositories; subcommands: add, index, list, remove, update
- search: search for chart packages by keyword
- show: show basic or detailed information about a chart package; subcommands: all, chart, readme, values
- pull: download a chart package from a remote repository and optionally unpack it locally, e.g. # helm pull test-repo/tomcat --version 0.4.3 --untar  (--untar unpacks; omit it to keep the archive)
- create: create a chart package with the given name
- install: install a release instance from a chart package
- list: list release instances
- upgrade: upgrade a release instance
- rollback: roll a release instance back to a previous revision; a revision number can be specified
- uninstall: uninstall a release instance
- history: show a release's history; usage: helm history <release-name>
- package: package a chart directory into a chart archive, e.g. after modifying a chart: # helm package /opt/helm/work/tomcat  (path to the chart directory)
- get: download information for a release; subcommands: all, hooks, manifest, notes, values
- status: show the status of the named release
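As a sketch of how the subcommands above chain together (release and chart names here are examples, not from this cluster):

```shell
# a typical chart lifecycle, end to end
helm create mychart             # scaffold a chart
helm install demo ./mychart     # install release "demo" from the chart directory
helm list                       # list releases
helm upgrade demo ./mychart     # upgrade -> revision 2
helm history demo               # show the revision history
helm rollback demo 1            # roll back to revision 1
helm uninstall demo             # remove the release
```

Each step is explained individually in the sections that follow.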
Add repositories:
You can add several repositories; give each one a name when adding it, e.g. stable, aliyun, or anything else. A stable mirror named stable is usually tried first.
# helm repo add stable http://mirror.azure.cn/kubernetes/charts    # add the Microsoft mirror (strongly recommended)
# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts    # add the Alibaba Cloud mirror
# helm repo add test-repo http://mirror.kaiyuanshe.cn/kubernetes/charts    # add the open-source community mirror
# helm repo list    # list the repositories
Microsoft mirror (http://mirror.azure.cn/kubernetes/charts/)
Alibaba Cloud mirror (https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts)
Recommended repository (https://charts.bitnami.com/bitnami)
Add a repository: helm repo add <repo-name> <repo-url>  (the URLs are the sites listed above)
[root@k8s-master-136 samve]# helm repo add apphub https://charts.bitnami.com/bitnami
Remove a repository: helm repo remove <repo-name>
List repositories: helm repo list
Search a repository for charts: helm search repo <chart-name>
Update repositories:
# helm repo update    # updates all added repositories
# helm create mychart    # create a chart package named mychart
(1) Search for a chart package: helm search repo <chart-name>
Example:
# helm search repo tomcat
(2) Inspect a chart package:
helm show chart <chart-name>, or helm show values <chart-name> for detailed information
Example:
# helm show chart stable/tomcat     # basic information about the chart
# helm show values stable/tomcat    # detailed information about the chart
(3) Pull a chart package:
# helm pull <repo/chart-name> --version 0.4.3 --untar    # pull the given version from the remote repository and unpack it (--untar unpacks; omit it to keep the archive)
# helm pull <repo/chart-name> --untar                    # pull the latest version from the remote repository and unpack it
1) From a chart repository added locally (online install from the official repository)
2) From a chart package pulled from a repository (offline install from a downloaded archive)
3) From the directory obtained by unpacking a pulled chart package (offline install from the unpacked directory)
4) Directly from a repository package at a network address (e.g. an HTTP server)
5) From a chart created locally: after editing its YAML files to your needs, install the release from the local chart
# helm search repo tomcat
# helm install tomcat1 stable/tomcat     # from the repository added locally (online install); tomcat1 is the release name
# helm install tomcat2 tomcat-0.4.3.tgz  # from a downloaded chart archive (offline install from a local archive); tomcat2 is the release name
# helm install tomcat3 tomcat            # from the unpacked chart directory (offline install); tomcat3 is the release name
# helm install db http://.../mysql-1.6.9.tgz    # directly from a package at a network address (download server); db is the release name
Uninstall a release instance:
# helm uninstall <release-name>
install: install a release instance (in practice, installing the Kubernetes application)
upgrade: upgrade a release instance (in practice, upgrading the Kubernetes application)
rollback: roll back a release instance (in practice, rolling back the Kubernetes application)
Install a release instance (several approaches were shown above; here we demonstrate one of them):
- # helm create <chart-name>    # create a chart package with the given name, then edit the YAML under its templates directory to what you need before installing
- # helm install <release-name> <chart-dir>    # install a release by release name and chart directory path
Example:
- # helm create mychart    # create a chart package named mychart
- # helm install test-release ./mychart    # install a release by release name and chart directory path
Upgrade a release instance:
- # helm upgrade <release-name> <chart> --set imageTag=1.19    # upgrade by release name and chart name with --set overrides
- # helm upgrade <release-name> <chart> -f .../mychart/values.yaml    # upgrade by release name, chart name, and a values.yaml file
Example:
- # helm upgrade test-release-nginx mychart --set imageTag=1.19    # upgrade by release name and chart name with --set
- # helm upgrade test-release-nginx mychart -f /root/helm/mychart/values.yaml    # upgrade by release name, chart name, and values.yaml
Roll back a release instance:
- # helm rollback <release-name>    # roll back to the previous revision
- # helm rollback <release-name> <revision>    # roll back to the given revision; note this is the release revision number, not the image version
Example:
# helm rollback web-nginx
# helm rollback web-nginx 2    # 2 is a release revision number
Show a release instance's history:
# helm history <release-name>
Example:
# helm history test    # test is the release name
Uninstall a release instance:
# helm uninstall <release-name>
Example:
# helm uninstall test-release-nginx    # uninstall takes the release name directly
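Putting upgrade, history, and rollback together, a minimal walk-through might look like this (the release name and revision numbers are illustrative):

```shell
# hypothetical walk-through: upgrade, inspect history, then roll back
helm upgrade web-nginx ./mychart --set imageTag=1.19   # creates revision 2
helm history web-nginx                                  # lists revisions 1 and 2
helm rollback web-nginx 1                               # back to revision 1
helm history web-nginx                                  # the rollback itself appears as revision 3
```

Note that a rollback does not erase history: it is recorded as a new revision pointing at the old configuration.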
Take deploying an nginx service as the example; other applications work the same way.
1) Prepare the environment: a Kubernetes cluster
[root@k8s-master-136 ~]# kubectl get node
NAME             STATUS   ROLES                  AGE    VERSION
k8s-master-136   Ready    control-plane,master   261d   v1.21.0
k8s-node-135     Ready    <none>                 261d   v1.21.0
k8s-node-137     Ready    <none>                 261d   v1.21.0
2) Create a template chart, delete the stock content, and replace it with the deployment YAML we define below
[root@k8s-master-136 ~]# helm create nginx-chart
Creating nginx-chart
Custom deployment template:
vim templates/nginx-deploy-service.yaml    # the template we need: a Deployment plus a Service exposed via NodePort
nginx-deploy-service.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.deployment_name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.pod_label }}
  template:
    metadata:
      labels:
        app: {{ .Values.pod_label }}
    spec:
      containers:
      - image: {{ .Values.image }}:{{ .Values.imageTag }}
        name: {{ .Values.container_name }}
        ports:
        - containerPort: {{ .Values.containerport }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.service_name }}
  namespace: {{ .Values.namespace }}
spec:
  type: NodePort
  ports:
  - port: {{ .Values.port }}
    targetPort: {{ .Values.targetport }}
    nodePort: {{ .Values.nodeport }}
    protocol: TCP
  selector:
    app: {{ .Values.pod_label }}
[root@k8s-master-136 nginx-chart]# vim values.yaml
deployment_name: nginx-deployment
replicas: 2
pod_label: nginx-pod-label
image: nginx
imageTag: 1.17
container_name: nginx-container
service_name: nginx-service
namespace: default
port: 80
targetport: 80
containerport: 80
nodeport: 30001
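Since every tunable sits in values.yaml, a second environment only needs a small override file; the file name and values below are hypothetical. It would be applied with `helm install nginx-prod ./nginx-chart -f values-prod.yaml`:

```yaml
# values-prod.yaml -- only the keys that differ from values.yaml
replicas: 4
imageTag: 1.20
nodeport: 30002
```

Keys not listed here keep their values from the chart's values.yaml, so the two files never need to be kept in sync by hand.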
3) Install a release instance from the chart (i.e. deploy an nginx application, version 1.17, to Kubernetes)
[root@k8s-master-136 samve]# helm install nginx-rerease-6 ./nginx-chart/
NAME: nginx-rerease-6
LAST DEPLOYED: Tue Oct 17 22:27:36 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@k8s-master-136 samve]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-nginx default 1 2023-10-12 22:16:31.191334159 +0800 CST deployed nginx-15.3.3 1.25.2
myconfigriap2 default 1 2023-10-15 16:55:28.122607196 +0800 CST deployed mychart-0.1.0 1.16.0
ng-ingress default 1 2023-10-13 21:45:55.931561498 +0800 CST deployed nginx-ingress-controller-9.9.0 1.9.0
nginx default 1 2023-10-12 21:52:49.36634682 +0800 CST deployed nginx-intel-2.1.15 0.4.9
nginx-rerease default 1 2023-10-15 22:19:44.65435695 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-0 default 1 2023-10-15 22:43:00.564071169 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-3 default 1 2023-10-15 22:42:18.473364862 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-6 default 1 2023-10-17 22:27:36.335540732 +0800 CST deployed nginx-chart-0.1.0 1.16.0
nginx-rerease2 default 1 2023-10-15 22:39:03.4086732 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease3 default 1 2023-10-15 22:41:08.782652045 +0800 CST failed nginx-chart-0.1.0 1.16.0
web default 1 2023-10-12 22:55:28.525512768 +0800 CST deployed mychart-0.1.0 1.16.0
[root@k8s-master-136 samve]# kubectl get pod,svc,ep
NAME READY STATUS RESTARTS AGE
pod/my-nginx-8568bb4694-gn5b7 1/1 Running 4 5d
pod/ng-ingress-nginx-ingress-controller-65b5bc8846-6rtfc 1/1 Running 4 4d
pod/ng-ingress-nginx-ingress-controller-default-backend-6d5bc97bt5v 1/1 Running 3 4d
pod/nginx-2-69947fd9df-plct2 1/1 Running 6 7d
pod/nginx-6799fc88d8-2jh5z 1/1 Running 6 7d
pod/nginx-deployment-5c8469b67f-24pq9 1/1 Running 0 4m50s
pod/nginx-deployment-5c8469b67f-2j9s7 1/1 Running 0 4m50s
pod/nginx-nginx-intel-668b58fb4b-8cm6z 0/1 ImagePullBackOff 0 5d
pod/nginx1-b7fb675cb-rhtvn 0/1 CrashLoopBackOff 157 7d
pod/nginx2-74ff6c9fbc-2gb7r 0/1 CrashLoopBackOff 156 7d
pod/web-mychart-5f94885968-8hmz5 1/1 Running 4 4d23h
NAME                                                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes                                      ClusterIP      10.10.0.1       <none>        443/TCP                      263d
service/my-nginx                                        LoadBalancer   10.10.200.7     <pending>     80:32374/TCP                 5d
service/ng-ingress-nginx-ingress-controller             LoadBalancer   10.10.129.100   <pending>     80:31660/TCP,443:30713/TCP   4d
service/ng-ingress-nginx-ingress-controller-default-backend   ClusterIP   10.10.184.181   <none>    80/TCP                       4d
service/nginx                                           NodePort       10.10.183.194   <none>        80:30111/TCP                 263d
service/nginx-nginx-intel                               LoadBalancer   10.10.72.168    <pending>     80:32756/TCP,443:30799/TCP   5d
service/nginx-service                                   NodePort       10.10.78.52     <none>        80:30001/TCP                 4m50s
service/web-mychart                                     ClusterIP      10.10.155.71    <none>        80/TCP                       4d23h
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.85.136:6443 263d
endpoints/my-nginx 10.18.112.63:8080 5d
endpoints/ng-ingress-nginx-ingress-controller 10.18.185.90:443,10.18.185.90:80 4d
endpoints/ng-ingress-nginx-ingress-controller-default-backend 10.18.112.22:8080 4d
endpoints/nginx 10.18.185.102:80 263d
endpoints/nginx-nginx-intel 5d
endpoints/nginx-service 10.18.112.23:80,10.18.185.88:80 4m50s
endpoints/web-mychart 10.18.185.112:80 4d23h
[root@k8s-master-136 samve]# kubectl get pod nginx-deployment-5c8469b67f-24pq9 -o yaml | grep image: # check the image version of the deployed pod
- image: nginx:1.17
image: nginx:1.17
4) Upgrade the release (upgrade nginx from 1.17 to 1.20)
[root@k8s-master-136 samve]# vim nginx-chart/values.yaml
deployment_name: nginx-deployment
replicas: 2
pod_label: nginx-pod-label
image: nginx
imageTag: 1.20
container_name: nginx-container
service_name: nginx-service
namespace: default
port: 80
targetport: 80
containerport: 80
nodeport: 30001
[root@k8s-master-136 samve]# helm upgrade nginx-rerease-6 nginx-chart -f /home/samve/nginx-chart/values.yaml # upgrade by specifying the release name, the chart, and the values.yaml file
Release "nginx-rerease-6" has been upgraded. Happy Helming!
NAME: nginx-rerease-6
LAST DEPLOYED: Tue Oct 17 22:40:59 2023
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
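Editing values.yaml is not the only way to change the image tag: helm upgrade also accepts inline overrides via --set. A minimal sketch, assuming the release and chart names from this walkthrough (the build_set_args helper is hypothetical, added here only to show how several overrides combine into one comma-separated --set argument):

```shell
# Hypothetical helper: join key=value pairs into a single --set argument.
build_set_args() {
  local out="" kv
  for kv in "$@"; do
    out="${out:+$out,}$kv"
  done
  printf '%s' "$out"
}

OVERRIDES=$(build_set_args image=nginx imageTag=1.20)
echo "$OVERRIDES"   # -> image=nginx,imageTag=1.20

# Equivalent to the values.yaml edit above (run against a live cluster):
#   helm upgrade nginx-rerease-6 nginx-chart --set "$OVERRIDES"
```

Note that values passed with --set take precedence over the same keys in a -f values file, so it is convenient for one-off overrides while keeping values.yaml as the baseline.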
[root@k8s-master-136 samve]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-nginx default 1 2023-10-12 22:16:31.191334159 +0800 CST deployed nginx-15.3.3 1.25.2
myconfigriap2 default 1 2023-10-15 16:55:28.122607196 +0800 CST deployed mychart-0.1.0 1.16.0
ng-ingress default 1 2023-10-13 21:45:55.931561498 +0800 CST deployed nginx-ingress-controller-9.9.0 1.9.0
nginx default 1 2023-10-12 21:52:49.36634682 +0800 CST deployed nginx-intel-2.1.15 0.4.9
nginx-rerease default 1 2023-10-15 22:19:44.65435695 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-0 default 1 2023-10-15 22:43:00.564071169 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-3 default 1 2023-10-15 22:42:18.473364862 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-6 default 2 2023-10-17 22:40:59.155113485 +0800 CST deployed nginx-chart-0.1.0 1.16.0
nginx-rerease2 default 1 2023-10-15 22:39:03.4086732 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease3 default 1 2023-10-15 22:41:08.782652045 +0800 CST failed nginx-chart-0.1.0 1.16.0
web default 1 2023-10-12 22:55:28.525512768 +0800 CST deployed mychart-0.1.0 1.16.0
[root@k8s-master-136 samve]# kubectl get pod,svc,ep
NAME READY STATUS RESTARTS AGE
pod/my-nginx-8568bb4694-gn5b7 1/1 Running 4 5d
pod/ng-ingress-nginx-ingress-controller-65b5bc8846-6rtfc 1/1 Running 4 4d
pod/ng-ingress-nginx-ingress-controller-default-backend-6d5bc97bt5v 1/1 Running 3 4d
pod/nginx-2-69947fd9df-plct2 1/1 Running 6 7d
pod/nginx-6799fc88d8-2jh5z 1/1 Running 6 7d
pod/nginx-deployment-5c8469b67f-24pq9 1/1 Running 0 18m
pod/nginx-deployment-5c8469b67f-2j9s7 1/1 Running 0 18m
pod/nginx-deployment-7fb7865bcf-fpr6q 0/1 ImagePullBackOff 0 4m52s
pod/nginx-nginx-intel-668b58fb4b-8cm6z 0/1 ImagePullBackOff 0 5d
pod/nginx1-b7fb675cb-rhtvn 0/1 CrashLoopBackOff 159 7d
pod/nginx2-74ff6c9fbc-2gb7r 0/1 CrashLoopBackOff 158 7d
pod/web-mychart-5f94885968-8hmz5 1/1 Running 4 4d23h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.10.0.1 443/TCP 264d
service/my-nginx LoadBalancer 10.10.200.7 80:32374/TCP 5d
service/ng-ingress-nginx-ingress-controller LoadBalancer 10.10.129.100 80:31660/TCP,443:30713/TCP 4d
service/ng-ingress-nginx-ingress-controller-default-backend ClusterIP 10.10.184.181 80/TCP 4d
service/nginx NodePort 10.10.183.194 80:30111/TCP 264d
service/nginx-nginx-intel LoadBalancer 10.10.72.168 80:32756/TCP,443:30799/TCP 5d
service/nginx-service NodePort 10.10.78.52 80:30001/TCP 18m
service/web-mychart ClusterIP 10.10.155.71 80/TCP 4d23h
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.85.136:6443 264d
endpoints/my-nginx 10.18.112.63:8080 5d
endpoints/ng-ingress-nginx-ingress-controller 10.18.185.90:443,10.18.185.90:80 4d
endpoints/ng-ingress-nginx-ingress-controller-default-backend 10.18.112.22:8080 4d
endpoints/nginx 10.18.185.102:80 264d
endpoints/nginx-nginx-intel 5d
endpoints/nginx-service 10.18.112.23:80,10.18.185.88:80 18m
endpoints/web-mychart 10.18.185.112:80 4d23h
[root@k8s-master-136 samve]# kubectl get pod nginx-deployment-5c8469b67f-24pq9 -o yaml | grep image:
- image: nginx:1.17
image: nginx:1.17
5) Roll back the release (roll nginx back from 1.20 to 1.17)
a) Roll back to the previous revision:
[root@k8s-master-136 samve]# helm rollback nginx-rerease-6
Rollback was a success! Happy Helming!
[root@k8s-master-136 samve]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-nginx default 1 2023-10-12 22:16:31.191334159 +0800 CST deployed nginx-15.3.3 1.25.2
myconfigriap2 default 1 2023-10-15 16:55:28.122607196 +0800 CST deployed mychart-0.1.0 1.16.0
ng-ingress default 1 2023-10-13 21:45:55.931561498 +0800 CST deployed nginx-ingress-controller-9.9.0 1.9.0
nginx default 1 2023-10-12 21:52:49.36634682 +0800 CST deployed nginx-intel-2.1.15 0.4.9
nginx-rerease default 1 2023-10-15 22:19:44.65435695 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-0 default 1 2023-10-15 22:43:00.564071169 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-3 default 1 2023-10-15 22:42:18.473364862 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-6 default 4 2023-10-17 22:51:47.58210453 +0800 CST deployed nginx-chart-0.1.0 1.16.0
nginx-rerease2 default 1 2023-10-15 22:39:03.4086732 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease3 default 1 2023-10-15 22:41:08.782652045 +0800 CST failed nginx-chart-0.1.0 1.16.0
web default 1 2023-10-12 22:55:28.525512768 +0800 CST deployed mychart-0.1.0 1.16.0
[root@k8s-master-136 samve]# kubectl get pod,svc,ep # check the resources after the rollback
NAME READY STATUS RESTARTS AGE
pod/my-nginx-8568bb4694-gn5b7 1/1 Running 4 5d
pod/ng-ingress-nginx-ingress-controller-65b5bc8846-6rtfc 1/1 Running 4 4d1h
pod/ng-ingress-nginx-ingress-controller-default-backend-6d5bc97bt5v 1/1 Running 3 4d1h
pod/nginx-2-69947fd9df-plct2 1/1 Running 6 7d
pod/nginx-6799fc88d8-2jh5z 1/1 Running 6 7d
pod/nginx-deployment-5c8469b67f-24pq9 1/1 Running 0 24m
pod/nginx-deployment-5c8469b67f-2j9s7 1/1 Running 0 24m
pod/nginx-deployment-7fb7865bcf-fpr6q 0/1 ImagePullBackOff 0 11m
pod/nginx-nginx-intel-668b58fb4b-8cm6z 0/1 ImagePullBackOff 0 5d
pod/nginx1-b7fb675cb-rhtvn 0/1 CrashLoopBackOff 160 7d
pod/nginx2-74ff6c9fbc-2gb7r 0/1 CrashLoopBackOff 159 7d
pod/web-mychart-5f94885968-8hmz5 1/1 Running 4 4d23h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.10.0.1 443/TCP 264d
service/my-nginx LoadBalancer 10.10.200.7 80:32374/TCP 5d
service/ng-ingress-nginx-ingress-controller LoadBalancer 10.10.129.100 80:31660/TCP,443:30713/TCP 4d1h
service/ng-ingress-nginx-ingress-controller-default-backend ClusterIP 10.10.184.181 80/TCP 4d1h
service/nginx NodePort 10.10.183.194 80:30111/TCP 264d
service/nginx-nginx-intel LoadBalancer 10.10.72.168 80:32756/TCP,443:30799/TCP 5d
service/nginx-service NodePort 10.10.78.52 80:30001/TCP 24m
service/web-mychart ClusterIP 10.10.155.71 80/TCP 4d23h
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.85.136:6443 264d
endpoints/my-nginx 10.18.112.63:8080 5d
endpoints/ng-ingress-nginx-ingress-controller 10.18.185.90:443,10.18.185.90:80 4d1h
endpoints/ng-ingress-nginx-ingress-controller-default-backend 10.18.112.22:8080 4d1h
endpoints/nginx 10.18.185.102:80 264d
endpoints/nginx-nginx-intel 5d
endpoints/nginx-service 10.18.112.23:80,10.18.185.88:80 24m
endpoints/web-mychart 10.18.185.112:80 4d23h
b) Roll back to a specified revision
[root@k8s-master-136 samve]# helm rollback nginx-rerease-6 2
Rollback was a success! Happy Helming!
[root@k8s-master-136 samve]# kubectl get pod,svc,ep
NAME READY STATUS RESTARTS AGE
pod/my-nginx-8568bb4694-gn5b7 1/1 Running 4 5d
pod/ng-ingress-nginx-ingress-controller-65b5bc8846-6rtfc 1/1 Running 4 4d1h
pod/ng-ingress-nginx-ingress-controller-default-backend-6d5bc97bt5v 1/1 Running 3 4d1h
pod/nginx-2-69947fd9df-plct2 1/1 Running 6 7d
pod/nginx-6799fc88d8-2jh5z 1/1 Running 6 7d
pod/nginx-deployment-5c8469b67f-24pq9 1/1 Running 0 27m
pod/nginx-deployment-5c8469b67f-2j9s7 1/1 Running 0 27m
pod/nginx-deployment-7fb7865bcf-fpr6q 0/1 ImagePullBackOff 0 14m
pod/nginx-nginx-intel-668b58fb4b-8cm6z 0/1 ImagePullBackOff 0 5d1h
pod/nginx1-b7fb675cb-rhtvn 0/1 CrashLoopBackOff 161 7d
pod/nginx2-74ff6c9fbc-2gb7r 0/1 CrashLoopBackOff 160 7d
pod/web-mychart-5f94885968-8hmz5 1/1 Running 4 4d23h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.10.0.1 443/TCP 264d
service/my-nginx LoadBalancer 10.10.200.7 80:32374/TCP 5d
service/ng-ingress-nginx-ingress-controller LoadBalancer 10.10.129.100 80:31660/TCP,443:30713/TCP 4d1h
service/ng-ingress-nginx-ingress-controller-default-backend ClusterIP 10.10.184.181 80/TCP 4d1h
service/nginx NodePort 10.10.183.194 80:30111/TCP 264d
service/nginx-nginx-intel LoadBalancer 10.10.72.168 80:32756/TCP,443:30799/TCP 5d1h
service/nginx-service NodePort 10.10.78.52 80:30001/TCP 27m
service/web-mychart ClusterIP 10.10.155.71 80/TCP 4d23h
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.85.136:6443 264d
endpoints/my-nginx 10.18.112.63:8080 5d
endpoints/ng-ingress-nginx-ingress-controller 10.18.185.90:443,10.18.185.90:80 4d1h
endpoints/ng-ingress-nginx-ingress-controller-default-backend 10.18.112.22:8080 4d1h
endpoints/nginx 10.18.185.102:80 264d
endpoints/nginx-nginx-intel 5d1h
endpoints/nginx-service 10.18.112.23:80,10.18.185.88:80 27m
endpoints/web-mychart 10.18.185.112:80 4d23h
[root@k8s-master-136 samve]# kubectl get pod nginx-deployment-5c8469b67f-24pq9 -o yaml | grep image:
- image: nginx:1.17
image: nginx:1.17
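helm rollback takes a numeric revision, and the available revisions can be listed with helm history before choosing a target. A hedged sketch (the is_revision guard is hypothetical; the helm commands assume the nginx-rerease-6 release from this walkthrough and need a live cluster):

```shell
# Inspect the revision list before choosing a rollback target (live cluster):
#   helm history nginx-rerease-6
#   helm rollback nginx-rerease-6 2

# Hypothetical guard: refuse non-numeric revision arguments before calling helm.
is_revision() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;  # empty, or contains a non-digit
    *) return 0 ;;
  esac
}

is_revision 2 && echo "revision ok"   # prints "revision ok"
```

Note that a rollback itself creates a new revision (which is why the REVISION column above jumps to 4 after rolling back), so helm history keeps growing rather than rewinding.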
6) Uninstall (delete) the release
[root@k8s-master-136 samve]# helm uninstall nginx-rerease-6
release "nginx-rerease-6" uninstalled
[root@k8s-master-136 samve]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-nginx default 1 2023-10-12 22:16:31.191334159 +0800 CST deployed nginx-15.3.3 1.25.2
myconfigriap2 default 1 2023-10-15 16:55:28.122607196 +0800 CST deployed mychart-0.1.0 1.16.0
ng-ingress default 1 2023-10-13 21:45:55.931561498 +0800 CST deployed nginx-ingress-controller-9.9.0 1.9.0
nginx default 1 2023-10-12 21:52:49.36634682 +0800 CST deployed nginx-intel-2.1.15 0.4.9
nginx-rerease default 1 2023-10-15 22:19:44.65435695 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-0 default 1 2023-10-15 22:43:00.564071169 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease-3 default 1 2023-10-15 22:42:18.473364862 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease2 default 1 2023-10-15 22:39:03.4086732 +0800 CST failed nginx-chart-0.1.0 1.16.0
nginx-rerease3 default 1 2023-10-15 22:41:08.782652045 +0800 CST failed nginx-chart-0.1.0 1.16.0
web default 1 2023-10-12 22:55:28.525512768 +0800 CST deployed mychart-0.1.0 1.16.0
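The helm list output above still shows several releases in failed state left over from earlier install attempts. helm list has a --failed flag (and -q for names only) that makes bulk cleanup easy; as a fallback, the plain output can be parsed. A hedged sketch (the awk field index is an assumption based on the timestamp format shown in the listings above):

```shell
# Preferred: let helm do the filtering (live cluster):
#   helm list --failed -q | xargs -r -n1 helm uninstall

# Fallback parser over plain `helm list` output: with the timestamp format shown
# above, awk splits UPDATED into four fields, so STATUS lands in field 8
# (NAME NAMESPACE REVISION date time tz-offset tz STATUS CHART APPVERSION).
failed_release_names() {
  awk 'NR > 1 && $8 == "failed" { print $1 }'
}
```

The -q/--failed route is more robust than parsing, since the column layout of helm list is not guaranteed to stay stable across versions.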
Search for an application:
helm search repo <application-name>
[root@k8s-master-136 samve]# helm search repo nginx
NAME CHART VERSION APP VERSION DESCRIPTION
apphub/nginx 15.3.3 1.25.2 NGINX Open Source is a web server that can be a...
apphub/nginx-ingress-controller 9.9.0 1.9.0 NGINX Ingress Controller is an Ingress controll...
apphub/nginx-intel 2.1.15 0.4.9 DEPRECATED NGINX Open Source for Intel is a lig...
Pick one of the search results and install it:
helm install <release-name> <chart-name-from-search>
[root@k8s-master-136 samve]# helm install my-nginx apphub/nginx
NAME: my-nginx
LAST DEPLOYED: Thu Oct 12 22:16:31 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 15.3.3
APP VERSION: 1.25.2
** Please be patient while the chart is being deployed **
NGINX can be accessed through the following DNS name from within your cluster:
my-nginx.default.svc.cluster.local (port 80)
To access NGINX from outside the cluster, follow the steps below:
1. Get the NGINX URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace default -w my-nginx'
export SERVICE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].port}" services my-nginx)
export SERVICE_IP=$(kubectl get svc --namespace default my-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${SERVICE_IP}:${SERVICE_PORT}"
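By default helm install takes the latest chart version in the repository; the --version flag pins a specific one, which is useful for reproducible environments. A minimal sketch using the chart version from the search output above (the command is only printed here, since running it needs a live cluster):

```shell
release="my-nginx"
chart="apphub/nginx"
version="15.3.3"   # chart version taken from the search output above

# Print the pinned install command instead of running it:
printf 'helm install %s %s --version %s\n' "$release" "$chart" "$version"
# -> helm install my-nginx apphub/nginx --version 15.3.3
```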
Check the status after installation:
helm list
[root@k8s-master-136 samve]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-nginx default 1 2023-10-12 22:16:31.191334159 +0800 CST deployed nginx-15.3.3 1.25.2
nginx default 1 2023-10-12 21:52:49.36634682 +0800 CST deployed nginx-intel-2.1.15 0.4.9
helm status <release-name>
[root@k8s-master-136 samve]# helm status my-nginx
NAME: my-nginx
LAST DEPLOYED: Thu Oct 12 22:16:31 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 15.3.3
APP VERSION: 1.25.2
** Please be patient while the chart is being deployed **
NGINX can be accessed through the following DNS name from within your cluster:
my-nginx.default.svc.cluster.local (port 80)
To access NGINX from outside the cluster, follow the steps below:
1. Get the NGINX URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace default -w my-nginx'
export SERVICE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].port}" services my-nginx)
export SERVICE_IP=$(kubectl get svc --namespace default my-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${SERVICE_IP}:${SERVICE_PORT}"
We can also use kubectl to check whether the related pods were created successfully.
[root@k8s-master-136 samve]# kubectl get svc,pod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.10.0.1 443/TCP 258d
service/my-nginx LoadBalancer 10.10.200.7 80:32374/TCP 9m37s
service/nginx NodePort 10.10.183.194 80:30111/TCP 258d
service/nginx-nginx-intel LoadBalancer 10.10.72.168 80:32756/TCP,443:30799/TCP 33m
NAME READY STATUS RESTARTS AGE
pod/my-nginx-8568bb4694-gn5b7 1/1 Running 0 9m37s
pod/nginx-2-69947fd9df-plct2 1/1 Running 2 47h
pod/nginx-6799fc88d8-2jh5z 1/1 Running 2 47h
pod/nginx-nginx-intel-668b58fb4b-8cm6z 0/1 ImagePullBackOff 0 33m
pod/nginx1-b7fb675cb-rhtvn 0/1 CrashLoopBackOff 50 47h
pod/nginx2-74ff6c9fbc-2gb7r 0/1 CrashLoopBackOff 50 47h