For Helm installation, see:
https://www.kubernetes.org.cn/4619.html
Helm consists of the helm command-line client and the server-side component tiller, and installation is straightforward. Download the helm CLI onto the master node node1 under /usr/local/bin; here we download version 2.11.0:
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
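A quick sanity check after copying the binary (a minimal sketch, assuming helm landed at /usr/local/bin/helm) — the client version should print even before tiller is installed:

```shell
# Hypothetical check: confirm the helm client binary is in place and runnable.
HELM_BIN=/usr/local/bin/helm
if [ -x "$HELM_BIN" ]; then
    "$HELM_BIN" version --client   # prints only the client version for now
else
    echo "helm client not found at $HELM_BIN" >&2
fi
```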
To install the server-side tiller, this machine also needs the kubectl tool and a kubeconfig file configured, so that kubectl can reach the apiserver and work normally. The node1 node here already has kubectl configured.
Because the Kubernetes APIServer has RBAC access control enabled, we need to create a service account named tiller for tiller to use and bind a suitable role to it. See Role-based Access Control in the Helm documentation for details. For simplicity, we bind the built-in cluster-admin ClusterRole to it directly. Create rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
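Before moving on, it may be worth confirming the two objects actually exist (a sketch, assuming kubectl is configured on this node):

```shell
# Verify the ServiceAccount and ClusterRoleBinding created from rbac-config.yaml.
NS=kube-system
if command -v kubectl >/dev/null 2>&1; then
    kubectl get serviceaccount tiller -n "$NS"
    kubectl get clusterrolebinding tiller
else
    echo "kubectl not available on this machine" >&2
fi
```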
Install tiller:
helm init --service-account tiller --skip-refresh
The problem
At this step I ran into trouble, and things diverged from the blog post I was following. Because I'm using a domestic (China) Docker source, the image gcr.io/kubernetes-helm/tiller is unreachable, so when I checked the pod with
kubectl get pods -n kube-system
it showed:
NAME READY STATUS RESTARTS AGE
tiller-deploy-6f6fd74b68-rkk5w 0/1 ImagePullBackOff 0 14h
The pod status is clearly wrong. As a complete beginner, I started feeling my way toward a fix.
Troubleshooting
1. Check the pod's events
kubectl describe pod tiller-deploy-6f6fd74b68-rkk5w -n kube-system
Output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Failed 52m (x3472 over 14h) kubelet, test1 Error: ImagePullBackOff
Normal BackOff 2m6s (x3686 over 14h) kubelet, test1 Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.11.0"
Clearly, pulling the image gcr.io/kubernetes-helm/tiller:v2.11.0 failed.
2. Pull the image manually
docker search kubernetes-helm/tiller
cockpit/kubernetes This container provides a version of cockpit… 41 [OK]
fluent/fluentd-kubernetes-daemonset Fluentd Daemonset for Kubernetes 24 [OK]
lachlanevenson/k8s-helm Helm client (https://github.com/kubernetes/h… 17
dtzar/helm-kubectl helm and kubectl running on top of alpline w… 16 [OK]
jessestuart/tiller Nightly multi-architecture (amd64, arm64, ar… 4 [OK]
hypnoglow/kubernetes-helm Image providing kubernetes kubectl and helm … 3 [OK]
linkyard/docker-helm Docker image containing kubernetes helm and … 3 [OK]
jimmysong/kubernetes-helm-tiller 2
ibmcom/tiller Docker Image for IBM Cloud private-CE (Commu… 1
zhaosijun/kubernetes-helm-tiller mirror from gcr.io/kubernetes-helm/tiller:v2… 1 [OK]
zlabjp/kubernetes-resource A Concourse resource for controlling the Kub… 1
thebeefcake/concourse-helm-resource concourse resource for managing helm deploym… 1 [OK]
timotto/rpi-tiller k8s.io/tiller for Raspberry Pi 1
fishead/gcr.io.kubernetes-helm.tiller mirror of gcr.io/kubernetes-helm/tiller 1 [OK]
victoru/concourse-helm-resource concourse resource for managing helm deploym… 0 [OK]
bitnami/helm-crd-controller Kubernetes controller for HelmRelease CRD 0 [OK]
z772458549/kubernetes-helm-tiller kubernetes-helm-tiller 0 [OK]
mnsplatform/concourse-helm-resource Concourse resource for helm deployments 0
croesus/kubernetes-helm-tiller kubernetes-helm-tiller 0 [OK]
Among all these images, going by the descriptions I picked fishead/gcr.io.kubernetes-helm.tiller, described as "mirror of gcr.io/kubernetes-helm/tiller" — that is, it is built as a mirror of gcr.io/kubernetes-helm/tiller.
Next, confirm it on Docker Hub:
(screenshot: the image's Docker Hub page)
It is indeed the image we need. Then check the available tags:
(screenshot: the image's available tags on Docker Hub)
Pull the image:
docker pull fishead/gcr.io.kubernetes-helm.tiller:v2.11.0
Retag it:
docker tag fishead/gcr.io.kubernetes-helm.tiller:v2.11.0 gcr.io/kubernetes-helm/tiller:v2.11.0
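The pull-and-retag steps above can be sketched as one small script (the mirror repository name comes from the docker search results; run it on the node that will host tiller):

```shell
# Pull the mirror image, retag it to the name the tiller Deployment expects,
# then list both tags — they should share the same image ID.
MIRROR="fishead/gcr.io.kubernetes-helm.tiller:v2.11.0"
TARGET="gcr.io/kubernetes-helm/tiller:v2.11.0"
if command -v docker >/dev/null 2>&1; then
    docker pull "$MIRROR" &&
    docker tag "$MIRROR" "$TARGET" &&
    docker images --format '{{.Repository}}:{{.Tag}} {{.ID}}' | grep tiller
else
    echo "docker not available on this machine" >&2
fi
```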
Check the local images:
(screenshot: docker images output showing both tags)
3. Redeploy
This step took this newbie quite a while. Following methods found online, I tried:
Delete tiller:
helm reset -f
Initialize and redeploy tiller:
helm init --service-account tiller --tiller-image gcr.io/kubernetes-helm/tiller:v2.11.0 --skip-refresh
Check the pod; it is still in a bad state:
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
tiller-deploy-6f6fd74b68-qvlzx 0/1 ImagePullBackOff 0 8m43s
Argh. Why does it still say the image pull failed? (;′⌒`)
After calming down, I wondered: does the config file say to always pull from the registry?
Edit the deployment:
kubectl edit deployment tiller-deploy -n kube-system
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: 2018-11-16T08:03:53Z
  generation: 2
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
  resourceVersion: "133136"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/tiller-deploy
  uid: 291c2a71-e976-11e8-b6eb-8cec4b591b6a
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: helm
      name: tiller
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.11.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /liveness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
          protocol: TCP
        - containerPort: 44135
Sure enough, there is the image pull policy: imagePullPolicy: IfNotPresent
Let's see what the official docs say:
https://kubernetes.io/docs/concepts/containers/images/
By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
# By default the kubelet pulls the image from the registry given in the spec; if the policy is IfNotPresent or Never, a local image is used instead.
IfNotPresent: use the local image if it exists, otherwise pull it.
Never: never pull; only use the local image, and fail if it does not exist.
In theory my configuration should be fine, so why didn't it find the local image first? Perhaps because I only pulled it afterwards, or because the image has to be present on the node the pod is actually scheduled to (test1 in the events above), not just the node where I ran docker pull. Either way, for now I changed the policy:
imagePullPolicy: Never
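The same change can also be made non-interactively with kubectl patch instead of kubectl edit (a sketch, assuming the deployment name tiller-deploy shown above):

```shell
# Patch the first container's imagePullPolicy to Never in one command.
DEPLOY=tiller-deploy
if command -v kubectl >/dev/null 2>&1; then
    kubectl patch deployment "$DEPLOY" -n kube-system --type=json \
      -p='[{"op":"replace","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"Never"}]'
else
    echo "kubectl not available on this machine" >&2
fi
```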
Save, then check the pod status:
tiller-deploy-f844bd879-p6m8x 1/1 Running 0 62s
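As a final check (assuming tiller is now Running), helm should report client and server at the same version:

```shell
# With tiller running, `helm version` reports both the client and the
# server (tiller) version; both should show the expected tag.
EXPECTED=v2.11.0
if command -v helm >/dev/null 2>&1; then
    helm version | grep "$EXPECTED"
else
    echo "helm not on PATH" >&2
fi
```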