《Kubernetes 1.8.0 Test Environment Installation and Deployment》
Date: 2017-12-04
I previously wrote a blog post about the vSphere Cloud Provider. Using vSphere Volumes as shown in this post requires vSphere Cloud Provider support; for that part see –> 《kubernetes-1.8.0》15-addon-vSphere Cloud Provider.
This post walks through practical examples on top of it: mounting a VMDK directly as a pod volume, consuming it through a PV/PVC pair, and provisioning volumes dynamically through a StorageClass.
1. SSH to the ESXi host and create a VMDK on it:
[root@localhost:~] vmkfstools -c 2G /vmfs/volumes/local_datastore_47/k8s-volume/myDisk.vmdk
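Note that the k8s-volume directory has to exist before vmkfstools can create the disk in it. A minimal preparation/verification sketch (run in the ESXi shell; assumes its busybox utilities, and that a zeroedthick disk produces a descriptor plus a -flat data file):
# create the target directory on the datastore (skip if it already exists)
mkdir -p /vmfs/volumes/local_datastore_47/k8s-volume
# after vmkfstools -c, confirm the descriptor (myDisk.vmdk) and its data file are there
ls -lh /vmfs/volumes/local_datastore_47/k8s-volume/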
/vmfs/volumes/local_datastore_47 is the path of the datastore.
2. Create a pod that uses myDisk.vmdk:
vsphere-volume-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-vmdk
spec:
  containers:
  - image: gcr.mirrors.ustc.edu.cn/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-vmdk
      name: test-volume
  volumes:
  - name: test-volume
    # This VMDK volume must already exist.
    vsphereVolume:
      volumePath: "[local_datastore_47] k8s-volume/myDisk"
      fsType: ext4
volumePath: [local_datastore_47] is the datastore name, and k8s-volume/myDisk is the directory/vmdk (the .vmdk extension is omitted).
3. Create the pod:
$ kubectl create -f vsphere-volume-pod.yaml
4. Verify that the volume was mounted successfully:
[root@node-131 vsphere-volume]# kubectl get pods
NAME READY STATUS RESTARTS AGE
...
test-vmdk 1/1 Running 0 2h
...
[root@node-131 vsphere-volume]# kubectl describe pods test-vmdk
Name: test-vmdk
Namespace: default
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17s default-scheduler Successfully assigned test-vmdk to node.131
Normal SuccessfulMountVolume 17s kubelet, node.131 MountVolume.SetUp succeeded for volume "default-token-t8j7k"
Normal SuccessfulMountVolume 16s kubelet, node.131 MountVolume.SetUp succeeded for volume "test-volume"
Normal Pulling 15s kubelet, node.131 pulling image "gcr.mirrors.ustc.edu.cn/google_containers/test-webserver"
Normal Pulled 10s kubelet, node.131 Successfully pulled image "gcr.mirrors.ustc.edu.cn/google_containers/test-webserver"
Normal Created 10s kubelet, node.131 Created container
Normal Started 10s kubelet, node.131 Started container
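To double-check from inside the container that the VMDK is really mounted at /test-vmdk, something like the following can be used. This is only a sketch: it assumes the test-webserver image ships df and touch, which a minimal image may not.
# the mount point should show up as an ext4 filesystem on a /dev/sd* device
$ kubectl exec test-vmdk -- df -h /test-vmdk
# data written here lives on the VMDK, so it survives the pod being recreated
$ kubectl exec test-vmdk -- touch /test-vmdk/hello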
Next, use the previously created vmdk (/vmfs/volumes/local_datastore_47/k8s-volume/myDisk.vmdk) to create a PV:
vsphere-volume-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[local_datastore_47] k8s-volume/myDisk"
    fsType: ext4
volumePath: as before, [local_datastore_47] is the datastore name and k8s-volume/myDisk is the directory/vmdk.
persistentVolumeReclaimPolicy: Retain means the PV is kept (not deleted) after its claim is released.
$ kubectl create -f vsphere-volume-pv.yaml
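A note on Retain: once the PVC is deleted later, the PV only moves to the Released state and keeps its old claimRef, so it is not re-bound automatically. One common way to make it Available again is to clear the claimRef by hand; a sketch (not from the original walkthrough):
# after the old PVC is gone, drop the stale claimRef so pv0001 becomes Available again
$ kubectl patch pv pv0001 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'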
Create the PVC:
vsphere-volume-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
$ kubectl create -f vsphere-volume-pvc.yaml
Check the PV and PVC status:
[root@node-131 vsphere-volume]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0001 2Gi RWO Retain Bound default/pvc0001 3h
...
STATUS being Bound means a PVC has already been bound to this PV.
[root@node-131 vsphere-volume]# kubectl describe pv pv0001
Name: pv0001
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"pv0001","namespace":""},"spec":{"accessModes":["ReadWriteOnce"],"capa...
pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/pvc0001
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 2Gi
Message:
Source:
Type: vSphereVolume (a Persistent Disk resource in vSphere)
VolumePath: StoragePolicyName: %v
FSType: [local_datastore_47] k8s-volume/myDisk
%!(EXTRA string=ext4, string=)Events:
(The mangled VolumePath/FSType lines above appear to be a formatting quirk in this kubectl version's describe output for vSphere volumes; the actual values are the VMDK path shown and fsType ext4.)
[root@node-131 vsphere-volume]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
...
pvc0001 Bound pv0001 2Gi RWO 3h
...
[root@node-131 vsphere-volume]# kubectl describe pvc pvc0001
Name: pvc0001
Namespace: default
StorageClass:
Status: Bound
Volume: pv0001
Labels:
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Capacity: 2Gi
Access Modes: RWO
Events:
Create a pod that uses this PVC:
vsphere-volume-pvcpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.mirrors.ustc.edu.cn/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /tmp
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvc0001
$ kubectl create -f vsphere-volume-pvcpod.yaml
Check that the pod is running:
[root@node-131 vsphere-volume]# kubectl get pod
NAME READY STATUS RESTARTS AGE
...
pvpod 1/1 Running 0 3h
...
With a StorageClass there is no need to create the PV by hand; PVs are provisioned dynamically.
Create the StorageClass:
vsphere-volume-sc-fast.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: local_datastore_47
diskformat: zeroedthick in this example; the other supported values are thin and eagerzeroedthick (a thin-provisioned variant is sketched after the describe output below).
name: fast — pick a fitting name; a lazy-zeroed thick vmdk is relatively fast.
datastore: which datastore to use.
provisioner: which plugin provisions the PVs.
$ kubectl create -f vsphere-volume-sc-fast.yaml
Check:
[root@node-131 vsphere-volume]# kubectl get sc
NAME PROVISIONER
fast kubernetes.io/vsphere-volume
[root@node-131 vsphere-volume]# kubectl describe storageclass fast
Name: fast
IsDefaultClass: No
Annotations:
Provisioner: kubernetes.io/vsphere-volume
Parameters: datastore=local_datastore_47,diskformat=zeroedthick
Events:
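For comparison, a StorageClass using one of the other diskformat values could look like the sketch below; the name thin is my own choice and the datastore is simply reused from above:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: local_datastore_47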
Create the PVC:
vsphere-volume-pvcsc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc001
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
volume.beta.kubernetes.io/storage-class: fast — which StorageClass to request the volume from.
storage: 2Gi — request 2Gi of space.
$ kubectl create -f vsphere-volume-pvcsc.yaml
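Since Kubernetes 1.6 the beta annotation can also be replaced with the spec.storageClassName field; an equivalent claim would look roughly like this (a sketch, not part of the original example):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc001
spec:
  storageClassName: fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi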
Check that the PVC was created:
[root@node-131 vsphere-volume]# kubectl describe pvc pvcsc001
Name: pvcsc001
Namespace: default
StorageClass: fast
Status: Bound
Volume: pvc-9bf5496d-dbf7-11e7-a5e6-005056bc80ed
Labels:
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
volume.beta.kubernetes.io/storage-class=fast
volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume
Capacity: 2Gi
Access Modes: RWO
Events:
Inspect the dynamically provisioned PV:
[root@node-131 vsphere-volume]# kubectl describe pv pvc-9bf5496d-dbf7-11e7-a5e6-005056bc80ed
Name: pvc-9bf5496d-dbf7-11e7-a5e6-005056bc80ed
Labels:
Annotations: kubernetes.io/createdby=vsphere-volume-dynamic-provisioner
pv.kubernetes.io/bound-by-controller=yes
pv.kubernetes.io/provisioned-by=kubernetes.io/vsphere-volume
StorageClass: fast
Status: Bound
Claim: default/pvcsc001
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 2Gi
Message:
Source:
Type: vSphereVolume (a Persistent Disk resource in vSphere)
VolumePath: StoragePolicyName: %v
FSType: [local_datastore_47] kubevols/kubernetes-dynamic-pvc-9bf5496d-dbf7-11e7-a5e6-005056bc80ed.vmdk
%!(EXTRA string=ext4, string=)Events:
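The Source section above shows that dynamically provisioned disks are placed in the kubevols folder of the datastore. Back on the ESXi host this can be confirmed with something like (same datastore path as before):
[root@localhost:~] ls -lh /vmfs/volumes/local_datastore_47/kubevols/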
The corresponding information can also be viewed in vCenter.
Create a pod that uses this PVC:
vsphere-volume-pvcscpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvpod-sc
spec:
  containers:
  - name: test-container
    image: gcr.mirrors.ustc.edu.cn/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-vmdk
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc001
$ kubectl create -f vsphere-volume-pvcscpod.yaml
Check that the pod comes up:
[root@node-131 vsphere-volume]# kubectl get pods
NAME READY STATUS RESTARTS AGE
...
pvpod-sc 1/1 Running 0 16s
...
[root@node-131 vsphere-volume]# kubectl describe pods pvpod-sc
Name: pvpod-sc
Namespace: default
...
Volumes:
test-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvcsc001
ReadOnly: false
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
...
Normal SuccessfulMountVolume 27s kubelet, node.131 MountVolume.SetUp succeeded for volume "default-token-t8j7k"
Normal SuccessfulMountVolume 19s kubelet, node.131 MountVolume.SetUp succeeded for volume "pvc-9bf5496d-dbf7-11e7-a5e6-005056bc80ed"
...
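Since the dynamically provisioned PV has a Delete reclaim policy, cleanup only requires removing the pod and the claim; the PV and its backing vmdk under kubevols should then be removed automatically. A sketch (the original post does not cover cleanup):
$ kubectl delete pod pvpod-sc
$ kubectl delete pvc pvcsc001
# the bound PV (and its vmdk under kubevols) should disappear shortly afterwards
$ kubectl get pv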
That covers the basic usage of vSphere Volumes. The Storage Policy Management part will follow in a later update; as for vSAN, I don't have the hardware for it, so I'll skip it.
Other posts in this series:
01-environment preparation
02-etcd cluster setup
03-kubectl management tool
04-master setup
05-node setup
06-addon-calico
07-addon-kubedns
08-addon-dashboard
09-addon-kube-prometheus
10-addon-EFK
11-addon-Harbor
12-addon-ingress-nginx
13-addon-traefik
References:
https://github.com/kubernetes/examples/blob/master/staging/volumes/vsphere/README.md