First, an introduction from the official site: the Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Its goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run Kubeflow.
Kubeflow can run on a variety of cloud infrastructures (e.g. GKE). In this article, Kubeflow is installed on a self-built Kubernetes cluster.
The base environment follows the prerequisites in the official documentation.
Note: starting with version 1.3, the Kubeflow installation procedure changed; for details see the README of the manifests project.
Search the web for instructions on building the Kubernetes cluster itself.
Kubeflow components need persistent storage, so PVs must be prepared in advance. This installation uses local disk storage. The procedure is as follows:
mkdir -p /data/istio-authservice /data/katib-mysql /data/minio /data/mysql-pv-claim
Save the following manifest as kubeflow-storage.yaml (PersistentVolumes are cluster-scoped, so they take no namespace):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: authservice
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/istio-authservice"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: katib-mysql
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/katib-mysql"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/minio"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-claim
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/mysql-pv-claim"
kubectl apply -f kubeflow-storage.yaml
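Before proceeding, it can help to confirm that the StorageClass and PVs were actually created; a quick check (assuming the manifest above applied cleanly):

```shell
# The StorageClass should exist, and each PV should report STATUS
# "Available" until a PVC binds it (WaitForFirstConsumer defers binding).
kubectl get storageclass local-storage
kubectl get pv authservice katib-mysql minio mysql-pv-claim
```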
Clone the manifests project:
git clone https://github.com/kubeflow/manifests.git
Then edit the YAML: add storageClassName: local-storage to each PersistentVolumeClaim definition so that the claims bind to the PVs created above.
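As an illustration, a patched claim might look like the sketch below. The claim name, namespace and size here are hypothetical; keep whatever the original file in the manifests repository declares, and only add the storageClassName line:

```yaml
# Hypothetical PVC after the edit -- the only added line is storageClassName.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: authservice-pvc        # use the name found in the manifests file
  namespace: istio-system
spec:
  storageClassName: local-storage   # <-- the added line
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```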
Install Kubeflow:
cd /path/to/manifests
while ! kustomize build example | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done
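Once all pods are running, the dashboard can be reached through the Istio ingress gateway; per the manifests README, a port-forward is the quickest way in (the example account shown in the comment is the one shipped with the default install at the time of writing, and should be changed):

```shell
# Forward the Istio ingress gateway to localhost:8080, then open
# http://localhost:8080 and log in with the default example account
# (user@example.com / 12341234 at the time of writing -- change it!).
kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80
```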
Q1: The authservice-0 pod fails to start: Error opening bolt store: open /var/lib/authservice/data.db: permission denied
A: Looking through the authservice StatefulSet definition in the manifests project, common/oidc-authservice/overlays/ibm-storage-config/statefulset.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: authservice
spec:
  template:
    spec:
      initContainers:
        - name: fix-permission
          image: busybox
          command: ['sh', '-c']
          args: ['chmod -R 777 /var/lib/authservice;']
          volumeMounts:
            - mountPath: /var/lib/authservice
              name: data
The authservice here uses an initContainer to solve the permission problem by granting the widest 777 permissions. Since we are using local storage, it is enough to grant the same permissions on the mounted host directory: chmod -R 777 /data/istio-authservice
Q2: After entering the dashboard, the page shows: upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
A: Two solutions:
Option 1: after installation, run the following commands and change ISTIO_MUTUAL to DISABLE:
kubectl edit destinationrule -n kubeflow ml-pipeline
kubectl edit destinationrule -n kubeflow ml-pipeline-ui
Option 2: before installation, edit the file apps/kfp-tekton/upstream/base/installs/multi-user/istio-authorization-config.yaml and change ISTIO_MUTUAL to DISABLE.
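Either way, the resulting DestinationRule should end up with TLS disabled. Sketched below for ml-pipeline, trimmed to the relevant fields; the host value is illustrative, so keep whatever the existing rule declares:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ml-pipeline
  namespace: kubeflow
spec:
  host: ml-pipeline.kubeflow.svc.cluster.local  # keep the existing host
  trafficPolicy:
    tls:
      mode: DISABLE   # was: ISTIO_MUTUAL
```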
Q3: After creating a training job, the pod fails with: MountVolume.SetUp failed for volume "docker-sock" : hostPath type check failed: /var/run/docker.sock is not a socket file
A: We are on Kubernetes 1.21, whose default container runtime is containerd, which is incompatible with Argo's default executor. Change the executor with kubectl edit configmap workflow-controller-configmap -n kubeflow, setting the containerRuntimeExecutor value from docker to emissary (each of the other options has problems of its own). Then delete the containers started by Argo so they restart. For the differences between Argo Workflow Executors, see the Workflow Executors documentation.
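After the edit, the relevant part of the ConfigMap should look roughly like this (other keys omitted; depending on the Argo version bundled with Kubeflow, the key may instead sit inside a nested config block):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: kubeflow
data:
  containerRuntimeExecutor: emissary   # was: docker
```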
This article walked through installing the latest Kubeflow release, 1.4.0, using the single-command method. Quite a few problems came up along the way, recorded here, though some open questions remain.
During the setup, Kubeflow felt increasingly stable and architecturally simpler; the next step is to explore its training components.