Using a small project deployed on k8s as an example, this guide walks through the process of deploying a project on k8s.
Some k8s background is assumed. A cluster can be built from binary files, and the official recommendation is container-based deployment; here kubeadm is used for a quick setup.
The cluster is built with kubeadm 1.13 (other versions may differ in small ways).
5 CentOS 7 hosts, minimal installation
with hostnames server1, server2, server3, server4, server5
1: Update the system: yum update -y
2: Set the hostname: vim /etc/hostname
3: Disable SELinux: vim /etc/selinux/config and change the line to SELINUX=disabled
4: Disable the firewall (optional; you can open the required ports instead): systemctl disable firewalld
5: Add the hosts entries (optional; an internal DNS would also work): vim /etc/hosts and add the addresses of server1 through server5
6: Reboot the server: shutdown -r now
7: Optionally set up passwordless SSH login between the servers
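Steps 3 and 5 can be staged as files and reviewed before anything is copied into /etc on each node. A minimal sketch; the hostname-to-IP mapping is an assumption pieced together from the example addresses used later in this guide, so adjust it for your network:

```shell
# Dry-run sketch of steps 3 and 5: render the changes into a staging
# directory, review them, then merge into /etc on each node and reboot.
# The hostname-to-IP mapping below is an assumption based on the
# addresses that appear later in this guide.
OUT="${OUT:-$(mktemp -d)}"

# step 5: /etc/hosts entries for the five nodes
cat > "$OUT/hosts.append" <<'EOF'
10.99.32.3  server1
10.99.32.10 server2
10.99.32.12 server3
10.99.32.31 server4
10.99.32.32 server5
EOF

# step 3: the SELinux line as it should read after editing
printf 'SELINUX=disabled\n' > "$OUT/selinux.line"

echo "staged files in $OUT"
```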
8: Install Docker: yum install docker -y
8.1: Enable it at boot: systemctl enable docker
8.2: Configure a proxy for Docker, otherwise the k8s images cannot be pulled:
vim /etc/systemd/system/multi-user.target.wants/docker.service
add the line:
Environment=HTTP_PROXY=http://10.99.32.2:1080
8.3: Reload the systemd configuration with systemctl daemon-reload, then restart Docker
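Editing the unit file under multi-user.target.wants works, but it gets overwritten when the docker package is updated; a systemd drop-in survives upgrades. A sketch, using the example proxy address from step 8.2 and a staging directory so it can be tried without root:

```shell
# Configure Docker's proxy via a systemd drop-in instead of editing the
# unit file in place. Staged to $DROPIN_DIR for review; on a real node
# use DROPIN_DIR=/etc/systemd/system/docker.service.d
DROPIN_DIR="${DROPIN_DIR:-$(mktemp -d)}"
mkdir -p "$DROPIN_DIR"

cat > "$DROPIN_DIR/http-proxy.conf" <<'EOF'
[Service]
Environment=HTTP_PROXY=http://10.99.32.2:1080
EOF

# then, as in step 8.3:
#   systemctl daemon-reload && systemctl restart docker
echo "wrote $DROPIN_DIR/http-proxy.conf"
```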
9: Install kubeadm:
9.1: Add the Kubernetes repo: vim /etc/yum.repos.d/kubernetes.repo with the following content:
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
9.2: Configure a proxy for yum (required here, otherwise the packages cannot be downloaded):
vim /etc/yum.conf and add or edit the line:
proxy=http://yourhost:yourport
9.3: Install the packages: yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
9.4: Enable and start the kubelet: systemctl enable kubelet && systemctl start kubelet
10: kubeadm prerequisites:
10.1: Configure the iptables bridge settings on the master node (optional; they may already be set):
vim /etc/sysctl.d/k8s.conf and add the two lines
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl --system
10.2: Turn off all swap: swapoff -a
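Note that swapoff -a only lasts until the next reboot; to make it permanent, the swap entry in /etc/fstab has to be commented out as well. A sketch, demonstrated here on a sample copy so the edit can be reviewed before touching the real file:

```shell
# swapoff -a does not survive a reboot; also comment out the swap line
# in /etc/fstab. The sed below does that; it runs on a sample copy here
# (the device names are typical CentOS 7 examples, not from this guide).
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /         xfs     defaults 0 0
/dev/mapper/centos-swap swap      swap    defaults 0 0
EOF

sed 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/' /tmp/fstab.sample > /tmp/fstab.noswap
cat /tmp/fstab.noswap
# on a real node, apply the same expression in place:
#   sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/' /etc/fstab
```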
11: Start Docker and initialize kubeadm (sometimes the cgroup driver has to be changed; not needed here):
systemctl start docker
kubeadm init; when the output ends with a kubeadm join command, initialization succeeded.
If the Calico network plugin will be used, a pod IP range must be assigned:
kubeadm init --pod-network-cidr=192.168.0.0/16
12: Configure the admin client: copy the config file as the init output instructs, after which kubectl works. To verify, run
kubectl get nodes: seeing node information means everything is fine; the message "The connection to the server localhost:8080 was refused - did you specify the right host or port?" means the config file has not been set up.
13: Set up the slave nodes: the steps are the same as for the master; with 1.13 the slaves no longer need to pre-download any images. Once installed, run swapoff -a and then the join command printed by kubeadm init, e.g. kubeadm join 10.99.32.3:6443 --token euoczm.lhfb8w6ngx98aj3z --discovery-token-ca-cert-hash sha256:d094ed1b6769f25247e6b1586541f7dbee59272cddb93bb35e054472e40984e4
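The join token printed by kubeadm init expires (24 hours by default); a fresh command can be generated on the master with kubeadm token create --print-join-command. If the original output was saved, the token and CA hash can also be pulled back out of it for scripted joins. A sketch, using the example join command from step 13:

```shell
# Extract the --token and --discovery-token-ca-cert-hash values from a
# saved "kubeadm join" line so that worker joins can be scripted.
JOIN='kubeadm join 10.99.32.3:6443 --token euoczm.lhfb8w6ngx98aj3z --discovery-token-ca-cert-hash sha256:d094ed1b6769f25247e6b1586541f7dbee59272cddb93bb35e054472e40984e4'

TOKEN=$(echo "$JOIN" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
CA_HASH=$(echo "$JOIN" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')

echo "token=$TOKEN"
echo "hash=$CA_HASH"
```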
14: Once every node has joined, kubectl get nodes on the master lists all the machines, although they are still in NotReady state.
15: Install the network; the Calico plugin is used here:
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f rbac-kdd.yaml
kubectl apply -f calico.yaml
16: Run kubectl get nodes and confirm that every node is in Ready state.
Part two: setting up PVs
1: Build the PVC-backed storage with NFS (a dynamic PVC setup is also possible)
1.1: Install the NFS packages on every node: yum -y install nfs-utils rpcbind
1.2: Create a shared directory on the master: mkdir /nfsdisk
1.3: Configure the NFS server: vim /etc/exports and add the following content:
/nfsdisk 10.99.32.3(rw,sync,fsid=0,no_root_squash) 10.99.32.10(rw,sync,fsid=0,no_root_squash) 10.99.32.12(rw,sync,fsid=0,no_root_squash) 10.99.32.31(rw,sync,fsid=0,no_root_squash) 10.99.32.32(rw,sync,fsid=0,no_root_squash)
The IP addresses are those of the clients that need read/write access to this directory.
1.4: Enable and start the NFS service: systemctl enable nfs && systemctl start nfs
1.5: Re-export the shares: exportfs -rv; output such as exporting 10.99.32.3:/nfsdisk means the configuration is correct.
1.6: NFS must also be started on every client, otherwise creating the PV will fail:
systemctl enable nfs && systemctl start nfs
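The /etc/exports line in step 1.3 gets long and error-prone when typed by hand; it can be generated from the client list instead. A sketch using this guide's example IPs:

```shell
# Build the /etc/exports entry of step 1.3 from a list of client IPs,
# so every client gets identical options and none is mistyped.
CLIENTS="10.99.32.3 10.99.32.10 10.99.32.12 10.99.32.31 10.99.32.32"
OPTS="rw,sync,fsid=0,no_root_squash"

line="/nfsdisk"
for ip in $CLIENTS; do
  line="$line $ip($OPTS)"
done
echo "$line"   # append this to /etc/exports, then run: exportfs -rv
```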
1.7: Configure the PV and PVC; one PVC can be shared by several deployments.
Create a pv.yaml file with the following content:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 150Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.99.32.3
    path: /nfsdisk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 150Gi
  volumeName: nfs-pv
1.8: Create the PV and PVC: kubectl create -f pv.yaml
1.9: Check that they were created: kubectl get pv and kubectl get pvc
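The PV above is static: one 150Gi volume pre-bound to one claim. The dynamic PVC setup mentioned in step 1 is usually done with an external NFS provisioner plus a StorageClass, so that each new claim automatically gets its own subdirectory under the share. A sketch of the StorageClass side only; the provisioner name is hypothetical and must match whatever name the provisioner you deploy registers itself with:

```yaml
# StorageClass for dynamically provisioned NFS-backed volumes.
# The "provisioner" value is a placeholder, not from this guide; it has
# to match the name registered by the external provisioner you deploy.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: example.com/nfs   # assumption: set by your provisioner deployment
reclaimPolicy: Retain
```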
Part three: deploying the application servers. There are two main concerns:
1: Mounting volumes (for applications that need to persist their own data, such as databases, redis, and so on)
2: Service configuration: port mappings are needed for external and internal access; simple NodePort mappings are used here, while a more advanced setup would use an Ingress
3: Once a deployment yaml file is written, apply it with kubectl create -f xxx.yaml
MySQL server:
apiVersion: v1
kind: Service
metadata:
  name: mysql-cs
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
  - name: mysql
    port: 3306
    nodePort: 31718
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7.20
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "0"
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: config
          mountPath: /etc/mysql/conf.d/
        resources:
          requests:
            cpu: 800m
            memory: 1Gi
          limits:
            cpu: 1000m
            memory: 2Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "-uroot", "-p123456", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command: ["mysql", "-h", "127.0.0.1", "-uroot", "-p123456", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 2
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-pvc
      - name: config
        configMap:
          name: mysql
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
data:
  my.cnf: |
    [mysqld]
    sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
RabbitMQ server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3.7.2-management-alpine
        env:
        - name: RABBITMQ_DEFAULT_USER
          value: root
        - name: RABBITMQ_DEFAULT_PASS
          value: awd123456789
        - name: RABBITMQ_DEFAULT_VHOST
          value: /
        ports:
        - name: rabbitmq
          containerPort: 5672
        - name: management
          containerPort: 15672
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq
          subPath: rabbitmq
        resources:
          requests:
            cpu: 500m
            memory: 800Mi
          limits:
            cpu: 800m
            memory: 1024Mi
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-manager
spec:
  type: NodePort
  ports:
  - name: management
    port: 15672
    nodePort: 31717
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 5672
    name: rabbitmq
  selector:
    app: rabbitmq
Redis server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4.0.6-alpine
        ports:
        - name: redis
          containerPort: 6379
        volumeMounts:
        - name: data
          mountPath: /data
          subPath: redis
        resources:
          requests:
            cpu: 500m
            memory: 800Mi
          limits:
            cpu: 800m
            memory: 1024Mi
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  ports:
  - name: redis
    port: 6379
    nodePort: 31715
  selector:
    app: redis
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cs
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    name: redis
  selector:
    app: redis
Part four: building your own application:
1: Package your application as an image, then push it to a public or private registry
2: Write the deployment yaml, pointing it at the image you pushed
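A minimal sketch of the yaml in step 2, assuming the image was pushed as yourrepo/yourapp:1.0 and the app listens on port 8080 (all of these names and ports are placeholders, not from this guide):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yourapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yourapp
  template:
    metadata:
      labels:
        app: yourapp
    spec:
      containers:
      - name: yourapp
        image: yourrepo/yourapp:1.0   # placeholder: the image pushed in step 1
        ports:
        - containerPort: 8080         # placeholder: your app's listen port
---
apiVersion: v1
kind: Service
metadata:
  name: yourapp
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31719                   # placeholder: pick a free NodePort
  selector:
    app: yourapp
```

Apply it the same way as in part three: kubectl create -f yourapp.yaml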