A deployment script is a script used to automate the application deployment process. It contains a series of commands and configuration for taking an application from a development environment into production or another target environment.
A Deployment describes how Kubernetes instructs the Node machines to create containers; Kubernetes accepts deployment scripts in yml format.
Common deployment-related commands (run them on the master):
kubectl create -f <deployment yml file>
kubectl apply -f <deployment yml file>
kubectl get pod [-o wide]    (-o wide shows more detail)
kubectl describe pod <pod name>
kubectl logs [-f] <pod name>    (-f keeps following the log in real time)
kubectl delete service <service name>
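For example, once a deployment exists (the pod name below is only a placeholder; take a real one from kubectl get pod):
kubectl get pod -o wide
kubectl describe pod tomcat-deploy-xxxxx
kubectl logs -f tomcat-deploy-xxxxx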
Create the deployment file, vi tomcat-deploy.yml, with the following content:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      containers:
      - name: tomcat-cluster
        image: tomcat:latest
        ports:
        - containerPort: 8080
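Note: apiVersion extensions/v1beta1 matches the Kubernetes version used in this walkthrough; on Kubernetes 1.16 and later that API was removed, and a Deployment would instead use apps/v1, which also requires an explicit selector. A minimal sketch of the same deployment in the newer form (same labels as above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat-cluster
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      # containers section is the same as above
      containers:
      - name: tomcat-cluster
        image: tomcat:latest
        ports:
        - containerPort: 8080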
Create the deployment: kubectl create -f tomcat-deploy.yml
[root@master k8s]# kubectl create -f tomcat-deploy.yml
deployment.extensions/tomcat-deploy created
Check that it was created correctly: kubectl get deployment
[root@master k8s]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
tomcat-deploy 2/2 2 2 6m34s
Create the service file, vi tomcat-service.yml, with the following content:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
spec:
  type: NodePort
  selector:
    app: tomcat-cluster
  ports:
  - port: 8000
    targetPort: 8080
    nodePort: 32500
Parameter note: spec.selector.app must be the label name of the cluster we deployed earlier (app: tomcat-cluster).
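In other words, the Service finds its Pods by matching labels, so the value under the Service's selector must equal the label given to the Pod template in the Deployment (both values taken from the files above):
# from tomcat-deploy.yml (Pod template)
labels:
  app: tomcat-cluster
# from tomcat-service.yml
selector:
  app: tomcat-cluster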
Create the service: kubectl create -f tomcat-service.yml
[root@master k8s]# kubectl create -f tomcat-service.yml
service/tomcat-service created
Check that it was created correctly: kubectl get service
[root@master forlan-test]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 173m
tomcat-service NodePort 10.110.88.125 <none> 8000:32500/TCP 159m
Access verification: we can now reach the service at a node's IP plus the exposed nodePort, i.e. http://192.168.56.201:32500 or http://192.168.56.202:32500. The first page load is slow, and a 404 page means it is working (the request reached Tomcat, which simply has nothing deployed yet).
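The same check can be done from the command line; the node IPs and nodePort are the ones from the files above, and Tomcat's 404 page in the response body still counts as success:
curl http://192.168.56.201:32500
curl http://192.168.56.202:32500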
The approach above goes through a node and its exposed port, so there is no load balancing and the cluster is not really being used as a cluster. With rinetd we can instead access the master node directly and also get load balancing. The steps below set that up.
Delete the previously created service
[root@master k8s]# kubectl delete service tomcat-service
service "tomcat-service" deleted
Edit the service file, vi tomcat-service.yml, and adjust the content; the main change is to comment out type: NodePort and nodePort: 32500:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
spec:
  # type: NodePort
  selector:
    app: tomcat-cluster
  ports:
  - port: 8000
    targetPort: 8080
    # nodePort: 32500
Recreate the service: kubectl create -f tomcat-service.yml
[root@master k8s]# kubectl create -f tomcat-service.yml
service/tomcat-service created
Check that it was created correctly: kubectl get service
[root@master forlan-test]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 173m
tomcat-service ClusterIP 10.110.88.125 <none> 8000/TCP 159m
View detailed service information: kubectl describe service tomcat-service
[root@master forlan-test]# kubectl describe service tomcat-service
Name: tomcat-service
Namespace: default
Labels: app=tomcat-service
Annotations:
Selector: app=tomcat-cluster
Type: ClusterIP
IP: 10.110.88.125
Port: 8000/TCP
TargetPort: 8080/TCP
Endpoints: 10.244.1.10:8080,10.244.2.9:8080
Session Affinity: None
Events:
Verification: normally, sending the request from the master should return a 404, but strangely it is the request sent from a node that returns the 404 here; this is still unresolved.
[root@node2 forlan-web]# curl 10.110.88.125:8000
HTTP Status 404 – Not Found
Type Status Report
Description The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.
Apache Tomcat/10.0.14
[root@node2 forlan-web]#
The above only gives access from within the cluster nodes; to reach the service from outside, continue with the steps below.
Download and extract the rinetd tool package
[root@master k8s]# wget http://www.boutell.com/rinetd/http/rinetd.tar.gz --no-check-certificate
[root@master k8s]# tar -xzvf rinetd.tar.gz
Enter the rinetd directory and modify rinetd.c, replacing every 65536 in the file with 65535:
cd rinetd
sed -i 's/65536/65535/g' rinetd.c
Note: in the sed expression, "s" means substitute, "65536" is the text to be replaced, "65535" is the replacement, and "g" means replace globally on each line.
Create the /usr/man directory that rinetd's install expects, install the gcc compiler, then compile and install:
mkdir -p /usr/man
yum install -y gcc
make && make install
Set up the port mapping. Edit the configuration file: vi /etc/rinetd.conf; 0.0.0.0 means rinetd listens on all of the master's addresses and forwards everything that arrives:
0.0.0.0 8000 10.110.88.125 8000
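Each rinetd rule line has the form bind-address bind-port connect-address connect-port, so the entry above makes rinetd accept connections on the master's port 8000 and forward them to the service's ClusterIP 10.110.88.125 on port 8000. The same rule, annotated:
# bind-address  bind-port  connect-address  connect-port
0.0.0.0  8000  10.110.88.125  8000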
Apply the configuration:
rinetd -c /etc/rinetd.conf
Check that port 8000 is being listened on: netstat -tulpn|grep 8000
[root@master rinetd]# netstat -tulpn|grep 8000
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 25126/rinetd
Access masterIp:8000 to verify; a 404 page means it works.
On a node, look at the currently running containers
[root@node1 /]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e704f7fd1af2 tomcat "catalina.sh run" 54 minutes ago Up 54 minutes k8s_tomcat-cluster_tomcat-deploy-5fd4fc7ddb-d2cx8_default_b62cd02c-8a8d-11ee-bab1-5254004d77d3_0
Enter the container: docker exec -it <container id> /bin/bash
[root@node1 /]# docker exec -it e704f7fd1af2 /bin/bash
root@tomcat-deploy-5fd4fc7ddb-d2cx8:/usr/local/tomcat# ls
BUILDING.txt CONTRIBUTING.md LICENSE NOTICE README.md RELEASE-NOTES RUNNING.txt bin conf lib logs native-jni-lib temp webapps webapps.dist work
Try to create our jsp file under webapps with vi index.jsp; it turns out the command is not available inside the container, and the related tools cannot be installed either
root@tomcat-deploy-5fd4fc7ddb-d2cx8:/usr/local/tomcat/webapps# vi index.jsp
bash: vi: command not found
root@tomcat-deploy-5fd4fc7ddb-d2cx8:/usr/local/tomcat/webapps# yum install vi
bash: yum: command not found
For the situation above there are two possible solutions; the one used here mounts the host's /forlan-web directory onto the container's /usr/local/tomcat/webapps directory. Adjust the deployment file we defined earlier, vi tomcat-deploy.yml, as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      volumes:
      - name: web-app
        hostPath:
          path: /forlan-web
      containers:
      - name: tomcat-cluster
        image: tomcat:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: web-app
          mountPath: /usr/local/tomcat/webapps
Update the cluster configuration: kubectl apply -f tomcat-deploy.yml
[root@master k8s]# kubectl apply -f tomcat-deploy.yml
deployment.extensions/tomcat-deploy configured
As a test, create an index.jsp under /forlan-web/forlan-test on the host node, then check inside the container that it appears; if it does, the mount works:
[root@node1 mnt]# vi /forlan-web/forlan-test/index.jsp
<%=request.getLocalAddr()%>
[root@node1 mnt]# docker ps|grep tomcat-cluster
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b064bbacaf00 tomcat "catalina.sh run" 2 minutes ago Up 2 minutes k8s_tomcat-cluster_tomcat-deploy-6678dccdc9-gjd9r_default_b5c19b2c-8aa4-11ee-bab1-5254004d77d3_0
[root@node1 mnt]# docker exec -it b064bbacaf00 /bin/bash
root@tomcat-deploy-6678dccdc9-gjd9r:/usr/local/tomcat# cd webapps
root@tomcat-deploy-6678dccdc9-gjd9r:/usr/local/tomcat/webapps# cat forlan-test/index.jsp
<%=request.getLocalAddr()%>
Verify: access masterIp:8000/forlan-test/index.jsp; if 10.244.2.9 or 10.244.1.10 appears, the request was served by node1 or node2.
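To actually see the load balancing, you can hit the page a few times in a row through the rinetd forward set up earlier; the pod IP in the response should alternate between the two replicas (a quick sketch, assuming 192.168.56.200 is the master's address, consistent with the NFS section below):
for i in 1 2 3 4 5; do curl http://192.168.56.200:8000/forlan-test/index.jsp; done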
The index.jsp above is maintained separately on each node machine. What if several nodes need to hold the same files? That is where NFS comes in, to share files across the cluster.
Network File System - NFS
NFS is a file-sharing protocol developed by Sun Microsystems.
NFS uses the remote procedure call (RPC) mechanism to transfer files.
NFS allows one server to share files and resources over the network. The /etc/exports file is NFS's main configuration file and defines which directories or file systems may be shared with which clients.
Install the components (on the master, which acts as the NFS server)
yum install -y nfs-utils rpcbind
Create the shared directory
cd /usr/local
mkdir forlan-data
Edit the exports file: vi /etc/exports
/usr/local/forlan-data 192.168.56.200/24(rw,sync)
Parameter notes: 192.168.56.200/24 restricts access to clients on that subnet, rw allows read and write, and sync writes data to disk before replying to requests.
Start the services and enable them to start at boot
systemctl start nfs.service
systemctl start rpcbind.service
systemctl enable nfs.service
systemctl enable rpcbind.service
Verify the configuration with exportfs; if the export shows up, the configuration succeeded
[root@master local]# exportfs
/usr/local/forlan-data
192.168.56.200/24
If the exports file is changed later, run exportfs -ra to reload the NFS exports.
Install the components (on each node, the NFS clients)
yum install -y nfs-utils rpcbind
Verify that the export is visible from the node: showmount -e 192.168.56.200
[root@node1 /]# showmount -e 192.168.56.200
Export list for 192.168.56.200:
/usr/local/forlan-data 192.168.56.200/24
Mount the share on the node:
mount 192.168.56.200:/usr/local/forlan-data /forlan-web
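Optionally (this is not part of the original steps), if the mount should survive a node reboot, an /etc/fstab entry along these lines on each node would do it:
192.168.56.200:/usr/local/forlan-data  /forlan-web  nfs  defaults  0  0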
Add an index.jsp file to the shared directory on the master; if the node directories see it, the share works:
[root@master forlan-data]# ll
total 0
[root@master forlan-data]# vi /usr/local/forlan-data/index.jsp
[root@master forlan-data]# ls
index.jsp
[root@node1 forlan-web]# ls
index.jsp
[root@node2 forlan-web]# ls
index.jsp
We originally deployed 2 replicas; now suppose we need 3 Tomcats. What, at a minimum, must be in place before this can work? The nodes need enough spare resources for the extra pod, so the deployment now also declares resource requests and limits. Edit the deployment file: vi /k8s/tomcat-deploy.yml and change replicas: 3:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      volumes:
      - name: web-app
        hostPath:
          path: /mnt
      containers:
      - name: tomcat-cluster
        image: tomcat:latest
        resources:
          requests:
            cpu: 1
            memory: 500Mi
          limits:
            cpu: 2
            memory: 1024Mi
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: web-app
          mountPath: /usr/local/tomcat/webapps
Update the cluster configuration: kubectl apply -f tomcat-deploy.yml
[root@master k8s]# kubectl apply -f tomcat-deploy.yml
deployment.extensions/tomcat-deploy configured
View the deployment: kubectl get deployment; you can see it has gone from 2 to 3 replicas
[root@master k8s]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
tomcat-deploy 3/3 3 3 100m
Note: by default k8s adds the new pod on the node with the lower load (the scheduler places it wherever the requested resources still fit).
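With the requests above (cpu: 1, memory: 500Mi per pod), the third replica can only be scheduled onto a node that still has at least that much allocatable capacity; you can check a node's capacity like this (node1 follows the hostnames used in this walkthrough):
kubectl describe node node1 | grep -A 6 Allocatable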
Problem: the wget download of rinetd fails with a certificate error. Solution: append --no-check-certificate to the command, i.e. wget http://www.boutell.com/rinetd/http/rinetd.tar.gz --no-check-certificate
Problem details:
W1124 14:24:05.207176 1 proxier.go:493] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1124 14:24:05.208150 1 proxier.go:493] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1124 14:24:05.208974 1 proxier.go:493] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1124 14:24:05.209778 1 proxier.go:493] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1124 14:24:05.214337 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
Solution:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Then edit the kube-proxy ConfigMap (kubectl edit configmap kube-proxy -n kube-system) and set mode: "ipvs" in its config, and finally restart the kube-proxy pods so the change takes effect:
kubectl get pod -n kube-system | grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
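To confirm that kube-proxy really came back in ipvs mode, check a restarted kube-proxy pod's log for "Using ipvs Proxier", or list the virtual servers with ipvsadm (kube-proxy-xxxxx below is a placeholder for a real pod name, and ipvsadm is assumed to be installed, e.g. via yum install -y ipvsadm):
kubectl get pod -n kube-system | grep kube-proxy
kubectl logs -n kube-system kube-proxy-xxxxx | grep -i ipvs
ipvsadm -Ln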