Once the Java application has been packaged with Maven/Gradle and its image pushed to the remote registry, what remains is getting the application onto K8S. The work mainly consists of analyzing the JVM parameters, runtime directories, and other runtime dependencies of the application, and integrating each of them to the appropriate degree through K8S features such as built-in environment variables, ConfigMap, and PV/PVC.
A unified, standardized set of JVM parameters makes it easier for the operations team to manage Java processes consistently, for example unified memory settings, log directories, GC logs, and so on. We define the unified JVM parameters as follows:
-server
-Xms2048m
-Xmx2048m
-XX:MaxPermSize=256m
-Dapp.name=test
-Denv=prod
-Djava.io.tmpdir=/tmp
# write gc.log to the unified log directory
-Xloggc:/data/logs/gc.log
-XX:+PrintGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintGCDateStamps
-XX:+PrintHeapAtGC
-XX:+PrintReferenceGC
-Dsun.jnu.encoding=UTF-8
-XX:+HeapDumpOnOutOfMemoryError
# write the heap dump to the unified log directory
-XX:HeapDumpPath=/data/logs/HeapDumpOnOutOfMemoryError.dump
These parameters are wired into the container's startup command through the Dockerfile:
# vim Dockerfile
FROM harbor.xxx.com/public/centos-jdk8:1.0.0
LABEL maintainer="yunwei"
#VOLUME /tmp
ARG JAR_FILE
ARG APP_NAME
ENV APP_NAME=${APP_NAME}
COPY ${JAR_FILE} /${APP_NAME}.jar
# exec so the java process replaces the shell and receives SIGTERM directly
ENTRYPOINT ["/bin/sh","-c","exec java -Dapp.name=${APP_NAME} -Denv=${PROFILE} ${XM} -Xbootclasspath/a:/data/config/${APP_NAME} ${JVM_OPTS} -jar /${APP_NAME}.jar"]
Here JAR_FILE and APP_NAME are supplied as build arguments (APP_NAME is also exported as a runtime environment variable), while PROFILE, XM, and JVM_OPTS are resolved when the container starts from environment variables injected by K8S.
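As a rough illustration, building and pushing the image could look like the following; the jar path, application name, and tag are assumptions borrowed from the helloworld example later in this article:

# hypothetical build/push; adjust the jar path, app name, and tag to your project
docker build \
  --build-arg JAR_FILE=target/helloworld.jar \
  --build-arg APP_NAME=helloworld \
  -t harbor.xxx.com:8000/helloworld:1.1.21 .
docker push harbor.xxx.com:8000/helloworld:1.1.21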
We then use environment variables in the Deployment together with a ConfigMap to inject the remaining runtime parameters:
# vim configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jvmoptions
  namespace: test
data:
  JVM_OPTS: "-server -XX:MaxPermSize=256m -Djava.io.tmpdir=/tmp -Xloggc:/data/logs/gc.log -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC -XX:+PrintReferenceGC -Dsun.jnu.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs/HeapDumpOnOutOfMemoryError.dump"
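The ConfigMap is applied and verified with standard kubectl commands, for example:

kubectl apply -f configmap.yaml
kubectl -n test get configmap jvmoptions -o yaml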
# vim Deployment.yaml
apiVersion: apps/v1
kind: Deployment
# ...omitted...
spec:
  template:
    spec:
      containers:
      - name: sysmonitor
        env:
        - name: PROFILE
          value: "test"
        - name: XM
          value: "-Xms2048m -Xmx2048m"
        - name: JVM_OPTS
          valueFrom:
            configMapKeyRef:
              name: jvmoptions
              key: JVM_OPTS
        # ...omitted...
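Once the pod is running, a quick way to verify that the injected parameters actually reached the java process (assuming the Deployment is also named sysmonitor, which the omitted metadata does not show):

# assumption: the Deployment is named sysmonitor
kubectl -n test exec deploy/sysmonitor -- sh -c 'echo $JVM_OPTS'
kubectl -n test exec deploy/sysmonitor -- sh -c 'ps -ef | grep [j]ava'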
From the JVM parameters above we can see that the application mainly depends on the log directory /data/logs (GC logs and heap dumps) and the configuration directory /data/config/${APP_NAME} (appended to the boot classpath).
These directories must be persisted so that we can troubleshoot after an application crash and collect logs afterwards; the K8S solution is to manage persistent volumes through PV/PVC.
A PV can be provisioned statically (static provisioning) or dynamically using a storage class (Storage Class). Both approaches can meet our storage needs, but based on our current analysis of directory usage, static provisioning is the better fit. Dynamic provisioning is more suitable when many directories, different tiers of storage, and on-demand allocation are required, which does not match our current scale.
For storage we standardize on NFS-backed file storage:
# vim static-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-static
  labels:
    type: pv-static
spec:
  storageClassName: nfs
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  mountOptions:
  - vers=3
  - async
  - rsize=1048576
  - wsize=1048576
  nfs:
    path: /data
    server: 10.10.20.250
  capacity:
    storage: 10Gi
# vim static-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-static
  namespace: test
  labels:
    type: pvc-static
spec:
  storageClassName: nfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
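After applying both manifests, the claim should bind to the volume; a quick check:

kubectl apply -f static-pv.yaml -f static-pvc.yaml
kubectl get pv pv-static             # STATUS should show Bound
kubectl -n test get pvc pvc-static   # VOLUME should show pv-static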
# Mount the PVC in the Deployment
# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
# ...omitted...
spec:
  template:
    spec:
      containers:
      - name: test
        env:
        - name: PROFILE
          value: "test"
        - name: XM
          value: "-Xms2048m -Xmx2048m"
        - name: JVM_OPTS
          valueFrom:
            configMapKeyRef:
              name: jvmoptions
              key: JVM_OPTS
        # ...omitted...
        ports:
        - containerPort: 8090
        volumeMounts:
        - name: data
          mountPath: /data
      # ...omitted...
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-static
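After the Deployment is rolled out, you can confirm the NFS volume is really mounted at /data; the deployment name below is a placeholder, since the snippet above omits the metadata:

# <deployment-name> is a placeholder for your actual Deployment name
kubectl -n test exec deploy/<deployment-name> -- df -h /data
kubectl -n test exec deploy/<deployment-name> -- ls -l /data/logs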
We create a 10G shared directory named data on the NAS, which also makes it convenient to manage configuration files centrally.
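A minimal sketch of preparing that directory layout on the NFS server (10.10.20.250), assuming the paths used by the JVM parameters above and helloworld as an example application name:

# run on the NFS server; helloworld is an example application name
mkdir -p /data/logs                # GC logs and heap dumps
mkdir -p /data/config/helloworld   # external config appended to the boot classpath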
A complete set of manifests for bringing the application onto K8S then looks like this:
# vim helloworld.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      name: helloworld
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        env:
        - name: PROFILE
          value: "p2"
        - name: XM
          value: "-Xms2048m -Xmx2048m"
        - name: JVM_OPTS
          valueFrom:
            configMapKeyRef:
              name: jvmoptions
              key: JVM_OPTS
        image: harbor.xxx.com:8000/helloworld:1.1.21
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /app/health
            port: 8090
          initialDelaySeconds: 60
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /app/health
            port: 8090
          initialDelaySeconds: 60
          timeoutSeconds: 5
        ports:
        - containerPort: 8090
        volumeMounts:
        - name: data
          mountPath: /data
      imagePullSecrets:
      - name: harbor-secret
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-static
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  namespace: test
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
  - port: 8090
    targetPort: 8090
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld
  namespace: test
spec:
  rules:
  - host: helloworld.xxx.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: helloworld
            port:
              number: 8090
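Applying and sanity-checking the whole stack; the address in the curl command depends on your ingress controller and is only illustrative:

kubectl apply -f helloworld.yaml
kubectl -n test get deploy,pods,svc,ingress
kubectl -n test logs deploy/helloworld
# <ingress-ip> is whatever address your ingress controller exposes
curl -H 'Host: helloworld.xxx.net' http://<ingress-ip>/app/health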
Getting the application onto K8S is not the end of the road; on the contrary, there is still work left to do.
For instance, should the application logs in this article ultimately be written to the NFS shared storage, or persisted to the cluster's node disks instead? Especially at the scale of hundreds of applications, NFS performance is a problem we cannot avoid, and the final solution is something we will have to weigh carefully.
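For comparison, a minimal sketch of the node-local alternative mentioned above: swapping the PVC-backed volume for a hostPath volume, with /data on each node as an assumed path; logs would then typically be shipped by a node-level collection agent.

# sketch only: node-local volume instead of the NFS-backed PVC
      volumes:
      - name: data
        hostPath:
          path: /data               # directory on each node; an assumption
          type: DirectoryOrCreate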