If you follow the steps in the docs, it should work without problems:
https://v2-1.docs.kubesphere.io/docs/zh-CN/installation/prerequisites/
Kubernetes 1.17.3 -> helm-v2.16.2 here; look up the Helm version that matches your own cluster in the docs.
wget -O helm.tar.gz https://get.helm.sh/helm-v2.16.2-linux-amd64.tar.gz
tar -zxvf helm.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm    # running the bare command prints the usage text and confirms the binary is on PATH
Create helm_rbac.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Run:
kubectl apply -f helm_rbac.yaml
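This should create the ServiceAccount and the ClusterRoleBinding for Tiller; a quick check (standard kubectl commands):

kubectl -n kube-system get serviceaccount tiller
kubectl get clusterrolebinding tiller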
Switch the chart repo to a domestic mirror:
helm repo list
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Run:
helm init --service-account=tiller --tiller-image=registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 --history-max 300
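Once init finishes, Tiller should come up in kube-system and the client should be able to reach it; a quick sanity check:

kubectl -n kube-system get pods | grep tiller    # wait until the tiller-deploy pod is Running
helm version                                     # should print both Client and Server versions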
Next, install OpenEBS to provide a default StorageClass. Prerequisites:
- An existing Kubernetes cluster with kubectl or Helm installed
- Pods can be scheduled onto the cluster's master node (the master taint can be removed temporarily)
kubectl get node -o wide
kubectl describe node master | grep Taint
kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
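Re-running the describe should now show that the taint is gone (assuming the node is named master):

kubectl describe node master | grep Taint
# expected output: Taints:  <none>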
kubectl create ns openebs
Switch to a mirror repo:
helm repo add stable http://mirror.azure.cn/kubernetes/charts
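After pointing the stable repo at the Azure mirror, it may help to refresh the local chart index before installing (standard Helm 2 command):

helm repo update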
helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
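Before moving on, wait until the OpenEBS pods are all Running; for example:

kubectl get pods -n openebs -w    # press Ctrl-C once everything shows Running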
kubectl get sc
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
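If the patch took effect, the StorageClass should now be marked as the cluster default:

kubectl get sc
# openebs-hostpath should now show "(default)" next to its name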
kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule
Docs:
https://v2-1.docs.kubesphere.io/docs/zh-CN/installation/install-on-k8s/
Don't use the installation method from the official site; that link is no longer valid.
On the Linux host, create the file:
vim kubesphere-minimal.yaml
Paste the content below into it, then run kubectl apply -f kubesphere-minimal.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system
---
apiVersion: v1
data:
  ks-config.yaml: |
    ---
    persistence:
      storageClass: ""
    etcd:
      monitoring: False
      endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9
      port: 2379
      tlsEnable: True
    common:
      mysqlVolumeSize: 20Gi
      minioVolumeSize: 20Gi
      etcdVolumeSize: 20Gi
      openldapVolumeSize: 2Gi
      redisVolumSize: 2Gi
    metrics_server:
      enabled: False
    console:
      enableMultiLogin: False  # enable/disable multi login
      port: 30880
    monitoring:
      prometheusReplicas: 1
      prometheusMemoryRequest: 400Mi
      prometheusVolumeSize: 20Gi
      grafana:
        enabled: False
    logging:
      enabled: False
      elasticsearchMasterReplicas: 1
      elasticsearchDataReplicas: 1
      logsidecarReplicas: 2
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      containersLogMountedPath: ""
      kibana:
        enabled: False
    openpitrix:
      enabled: False
    devops:
      enabled: False
      jenkinsMemoryLim: 2Gi
      jenkinsMemoryReq: 1500Mi
      jenkinsVolumeSize: 8Gi
      jenkinsJavaOpts_Xms: 512m
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
      sonarqube:
        enabled: False
        postgresqlVolumeSize: 8Gi
    servicemesh:
      enabled: False
    notification:
      enabled: False
    alerting:
      enabled: False
kind: ConfigMap
metadata:
  name: ks-installer
  namespace: kubesphere-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v2.1.1
        imagePullPolicy: "Always"
Then watch the installer logs until the installation finishes:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
Once all the Pods are running, you can access the KubeSphere UI at <NodeIP>:30880. The default cluster administrator account is admin / P@88w0rd.
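To confirm that everything is up and that the console is exposed on NodePort 30880, checks along these lines may help (ks-console is the console Service the installer creates in kubesphere-system; verify the name against your own installation):

kubectl get pods --all-namespaces                  # all KubeSphere pods should be Running
kubectl -n kubesphere-system get svc ks-console    # should show a NodePort mapping to 30880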