k8s distributed cluster: Docker container management

I have prepared three virtual machines: master (192.168.33.52), node1 (192.168.33.35), and node2 (192.168.33.50).

There are two ways to deploy a Kubernetes cluster:

Option 1: install and configure every component by hand — difficult, with a lot of configuration (not recommended)

Option 2: use kubeadm to simplify the deployment


kubeadm installation steps:

1. master and nodes: install kubelet, kubeadm, docker, and kubectl (the command-line client)

2. master (control-plane node): kubeadm init

3. nodes (worker nodes): kubeadm join

Now for the installation itself. Before installing, be sure to stop the iptables/firewalld services, because Kubernetes manages a large number of iptables rules itself.
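For example, on CentOS 7 the firewall can be stopped and disabled like this (a minimal sketch; the iptables service only exists if the iptables-services package is installed):

systemctl stop firewalld && systemctl disable firewalld      # stop and disable firewalld
systemctl stop iptables && systemctl disable iptables        # only if the iptables service is installed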


Step 1: Installation

First, set up the Aliyun mirror site as the download source.

On the CentOS system, cd to /etc/yum.repos.d, create the file with vim Kubernetes.repo, and add the following content:

[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

Finally, run yum repolist to check that packages can be installed from the repository.
(screenshot: yum repolist output)

If everything looks good, copy the file to the same path on the other node machines, for example with scp as shown below.
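A quick sketch, assuming root SSH access to the nodes at the IPs listed above:

scp /etc/yum.repos.d/Kubernetes.repo root@192.168.33.35:/etc/yum.repos.d/
scp /etc/yum.repos.d/Kubernetes.repo root@192.168.33.50:/etc/yum.repos.d/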

Installation on the master host:

Install the packages with yum install kubeadm kubectl kubelet docker. Note that an error like the following may appear during installation:

(screenshot: yum GPG key check error)

If this happens, first set gpgcheck=0 in Kubernetes.repo so the key check is skipped, then download these two key files directly with wget:

https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Import them manually with rpm --import yum-key.gpg and rpm --import rpm-package-key.gpg, then run the yum install above again and it will succeed.

At this point all the required packages are installed (on the master host).

Next, when starting kubeadm, the following error appeared:

(screenshot of the error)

The fix is to add a parameter to the kubelet configuration:


KUBELET_EXTRA_ARGS="--fail-swap-on=false"
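On a kubeadm RPM install this variable usually lives in the kubelet drop-in file /etc/sysconfig/kubelet (an assumption about the default path; adjust if your install uses a different file):

# /etc/sysconfig/kubelet -- extra flags passed to the kubelet at startup
KUBELET_EXTRA_ARGS="--fail-swap-on=false"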

Before starting, set the relevant services to start on boot: systemctl enable kubelet and systemctl enable docker.

Run the kubeadm init command: kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

Adjust the kernel network settings:
echo "1" >/proc/sys/net/ipv4/ip_forward

echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

echo "1" >/proc/sys/net/bridge/bridge-nf-call-ip6tables

Or simply edit /etc/sysctl.conf with vim and add the two lines below, so the values don't have to be reset by hand each time:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
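After editing /etc/sysctl.conf the values can be applied without a reboot; if the bridge-nf-call files do not exist yet, the br_netfilter module has to be loaded first (sketch):

modprobe br_netfilter     # load the bridge netfilter module if needed
sysctl -p                 # apply the settings from /etc/sysctl.conf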


 

Because the kubeadm images are hosted on k8s.gcr.io, which cannot be reached directly from inside China, the required images can be pulled from the following mirror instead:
https://hub.docker.com/r/mirrorgooglecontainers/

Run kubeadm config images list to see which images are required:

k8s.gcr.io/kube-apiserver:v1.15.0
k8s.gcr.io/kube-controller-manager:v1.15.0
k8s.gcr.io/kube-scheduler:v1.15.0
k8s.gcr.io/kube-proxy:v1.15.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.15.0
docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.0
docker pull mirrorgooglecontainers/kube-scheduler:v1.15.0
docker pull mirrorgooglecontainers/kube-proxy:v1.15.0
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1
//provides the pod network; like coredns it is deployed as an add-on — flannel is used here
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0   //the dashboard, a web UI for monitoring the cluster



Finally, retag the images to the k8s.gcr.io names that kubeadm expects, e.g. k8s.gcr.io/kube-apiserver:v1.15.0:

docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.15.0 k8s.gcr.io/kube-apiserver:v1.15.0 

docker tag mirrorgooglecontainers/kube-controller-manager:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0

docker tag mirrorgooglecontainers/kube-scheduler:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0

docker tag mirrorgooglecontainers/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0

docker tag mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1

docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10

docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
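The pull-and-retag steps can also be scripted. This is only a sketch using the image names and tags listed above; note that mirrorgooglecontainers publishes some images with an -amd64 suffix (as in the kube-apiserver pull above), so adjust the names to whatever actually pulls successfully:

#!/bin/bash
# pull each mirror image and retag it with the k8s.gcr.io name kubeadm expects
for img in kube-apiserver:v1.15.0 kube-controller-manager:v1.15.0 kube-scheduler:v1.15.0 \
           kube-proxy:v1.15.0 pause:3.1 etcd:3.3.10; do
  docker pull mirrorgooglecontainers/$img
  docker tag  mirrorgooglecontainers/$img k8s.gcr.io/$img
done
docker pull coredns/coredns:1.3.1                         # coredns lives under its own namespace
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1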


Below is the Docker registry-mirror configuration (it goes into /etc/docker/daemon.json):

{
 "registry-mirrors": ["https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","docker.io/mirrorgooglecontainers"]
}
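After editing /etc/docker/daemon.json, Docker has to be restarted for the mirrors to take effect (sketch):

systemctl daemon-reload
systemctl restart docker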

Now run kubeadm init:

//this is the output from my successful run
[root@jason ~]# kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [jason.com localhost] and IPs [192.168.33.54 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [jason.com localhost] and IPs [192.168.33.54 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [jason.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.33.54]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 47.021223 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node jason.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node jason.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: p0uxl7.lx0szgjzt4i2y7cg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

//these commands must be run, otherwise kubectl get nodes will not work
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
//run this on the other worker nodes to join them to the cluster
kubeadm join 192.168.33.54:6443 --token p0uxl7.lx0szgjzt4i2y7cg \
    --discovery-token-ca-cert-hash sha256:b03a31835f36e278a0649bd0201fe6633f5e8b8d126879256b5d38b382444a1f 

/** This block is printed once the cluster has been created successfully. It means the admin.conf kubeconfig (the RBAC identity) has been generated; only this identity can access the cluster's nodes, pods, and so on, because Kubernetes enforces authentication and authorization. To view node or pod information from another machine, this config file generated on the master must be copied there first. **/


Note: if the join token has expired, or you no longer remember it, generate a new one with:
kubeadm token create --print-join-command
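If you also need the --discovery-token-ca-cert-hash value, it can be recomputed from the cluster CA on the master with the standard openssl pipeline from the kubeadm documentation:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'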


Pitfalls encountered:

Pitfall 1:
[root@node1 ~]# kubeadm join 192.168.0.104:6443 --token tti5k7.t0unuw5g6hv9suk7     --discovery-token-ca-cert-hash sha256:314ef877e9c5c0a9287785a58cfb14e7d0d464681ddd73f344cd35a0f3378c91 --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

For this problem I simply ran kubeadm reset and then ran the join command again. Alternatively, set the kernel parameter directly, as shown below.
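The preflight error can also be cleared by setting the same kernel parameter on the node before re-running join (this is the setting from the sysctl section above):

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables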

Pitfall 2: "hostname could not be reached".
[root@node1 ~]# kubeadm join 192.168.0.104:6443 --token tti5k7.t0unuw5g6hv9suk7     --discovery-token-ca-cert-hash sha256:314ef877e9c5c0a9287785a58cfb14e7d0d464681ddd73f344cd35a0f3378c91 --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Hostname]: hostname "node1" could not be reached
	[WARNING Hostname]: hostname "node1": lookup node1 on 116.116.116.116:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Solution: after some digging I realized that my CentOS 7 machines were all cloned from the same image and therefore shared the same hostname, which is why kubectl get nodes never showed the node I had joined. After changing /etc/hostname, rebooting, and joining again, it worked.
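A sketch of the same fix without editing the file by hand (the hostname and IP follow this setup; a reboot as described above also works):

hostnamectl set-hostname node1                 # give the cloned machine a unique hostname
echo "192.168.0.105 node1" >> /etc/hosts       # optional: make the name resolvable locally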

On the master:
[root@CentOS7 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
centos7   Ready    master   47m     v1.15.0
node1     Ready    <none>   9m19s   v1.15.0

[root@CentOS7 ~]# kubectl get  pods -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-5p5h2          1/1     Running   0          47m     10.244.0.7      centos7   <none>           <none>
coredns-5c98db65d4-9t6nk          1/1     Running   0          47m     10.244.0.6      centos7   <none>           <none>
etcd-centos7                      1/1     Running   0          46m     192.168.0.104   centos7   <none>           <none>
kube-apiserver-centos7            1/1     Running   0          46m     192.168.0.104   centos7   <none>           <none>
kube-controller-manager-centos7   1/1     Running   0          46m     192.168.0.104   centos7   <none>           <none>
kube-flannel-ds-amd64-8zdl9       1/1     Running   0          9m26s   192.168.0.105   node1     <none>           <none>
kube-flannel-ds-amd64-dkwbd       1/1     Running   0          43m     192.168.0.104   centos7   <none>           <none>
kube-proxy-7q5gx                  1/1     Running   0          9m26s   192.168.0.105   node1     <none>           <none>
kube-proxy-k6n4g                  1/1     Running   0          47m     192.168.0.104   centos7   <none>           <none>
kube-scheduler-centos7            1/1     Running   0          46m     192.168.0.104   centos7   <none>           <none>

Pitfall 3:
[root@node2 ~]# systemctl start kubelet
[root@node2 ~]# kubeadm join 192.168.0.104:6443 --token jmmssi.htlyivb16smw9s0h \
>     --discovery-token-ca-cert-hash sha256:a0c394447b9a40dd2e4e1f59794d9f0847436637aacc33e300cd66564f01872e  --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@node2 ~]# 
[root@node2 ~]# rm -rf /etc/kubernetes/pki/ca.crt
[root@node2 ~]# rm -rf /etc/kubernetes/pki/ca.crt
[root@node2 ~]# 
[root@node2 ~]# 
[root@node2 ~]# rm -rf /etc/kubernetes/bootstrap-kubelet.conf
[root@node2 ~]# rm -rf /etc/kubernetes/kubelet.conf

Join often complains that certain certificates or config files already exist; just delete them, as above, and retry.

Pitfall 4:

[root@CentOS7 ~]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
This error means Kubernetes is up, but the kubeconfig copy commands were never executed:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   //admin.conf is the config file on the master, in the /etc/kubernetes directory
Only after running them can you use kubectl get nodes.

Pitfall 5: running kubeadm join ... on a worker node reports the following error:
[root@node2 ~]# kubeadm join 192.168.0.104:6443 --token jmmssi.htlyivb16smw9s0h     --discovery-token-ca-cert-hash sha256:a0c394447b9a40dd2e4e1f59794d9f0847436637aacc33e300cd66564f01872e  --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Solution: copy the admin.conf file from the master onto the node, then run the following commands:
[root@node2 system]#  mkdir -p $HOME/.kube
[root@node2 system]# cp -i admin.conf $HOME/.kube/config
Then run kubeadm join ... again and it succeeds.

Pitfall 6: after kubeadm join, the node stays in NotReady state:
node1    NotReady   <none>   24m   v1.15.0
Investigation showed the flannel setup had failed — ifconfig shows no flannel interface on the node.
Fix:
mkdir -p /etc/cni/net.d/
cat <<EOF > /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
EOF

mkdir /usr/share/oci-umount/oci-umount.d -p
mkdir /run/flannel/
cat <<EOF > /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

//this step can be skipped; run it only if the node still does not become Ready
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml 

Pitfall 7: if the servers are shut down and restarted often, kubeadm join may fail with errors like the following (port 10250 in use, files already exist); run kubeadm reset and join again:
[root@node1 ~]# kubeadm join 192.168.33.52:6443 --token d0brol.m39hv6l241gzha1i     --discovery-token-ca-cert-hash sha256:65f82496a3196b33b02849526ae3607a0a0fda564ab44fe7ad575e60d7c60c77
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@node1 ~]# kubeadm reset   





 

Part 2: Learning the kubectl commands


//run an application on the cluster, port 80, with at least 1 replica
kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1 
//view the running deployment
kubectl get deployment
//view the running pods, similar to the above
kubectl get pod/pods

//show more details about the running pods, including IP and the node they run on (frequently used)
kubectl get pods -o wide

//show the pods' labels
[root@master ~]# kubectl get pods  --show-labels
NAME                            READY   STATUS    RESTARTS   AGE   LABELS
client                          0/1     Error     0          28m   run=client
nginx-deploy-7689897d8d-74lmm   1/1     Running   0          16m   pod-template-hash=7689897d8d,run=nginx-deploy


//delete a running pod; after deletion the controller automatically creates and runs a new nginx pod
kubectl delete pod nginx-deploy-7689897d8d-snfmk 

//This raises a problem: when a pod is deleted the controller starts a new one, and the pod IP changes.
//We often won't know the new IP and therefore can't reach the application, so we create a Service that exposes it
//and access the Service address instead; however the pods change, the Service keeps forwarding to the right port,
//because the Service and the pods are associated purely by label.

[root@master home]# kubectl expose  deployment nginx-deploy --name=nginx  --port=80 --target-port=80 --protocol=TCP 
//the service has been exposed successfully
service/nginx exposed

//check the exposed services
[root@master home]# kubectl get  svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   110m
nginx        ClusterIP   10.110.244.67   <none>        80/TCP    2m30s
//describe the service to see which pods it is bound to via its label selector
[root@master home]# kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            run=nginx-deploy
Annotations:       <none>
Selector:          run=nginx-deploy
Type:              ClusterIP
IP:                10.110.244.67
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.6:80
Session Affinity:  None
Events:            <none>
//describe an application pod in detail, similar to docker inspect for a container (frequently used)
[root@master home]# kubectl describe pod nginx-deploy
//scale the deployment up to more replicas
[root@master ~]# kubectl  scale --replicas=2 Deployment nginx-deploy
deployment.extensions/nginx-deploy scaled
//upgrade the application image in the deployment; the pods are rolled over one by one automatically
[root@master home]# kubectl set image  deployment  nginx-deploy nginx-deploy=nginx:1.15-alpine

//deployment.extensions/nginx-deploy image updated
//check the rollout status
[root@master home]# kubectl rollout status deployment nginx-deploy
deployment "nginx-deploy" successfully rolled out


//With two nginx replicas, Kubernetes schedules them on different nodes and the Service binds to the pods on all of them — the Endpoints IPs in the describe output show this
[root@master home]# kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
master   Ready    master   7h45m   v1.15.0   192.168.0.104   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://1.13.1
node1    Ready    <none>   7h39m   v1.15.0   192.168.0.106   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.6
node2    Ready    <none>   28m     v1.15.0   192.168.0.120   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
[root@master home]# kubectl describe svc nginx
Name:                     nginx
Namespace:                default
Labels:                   run=nginx-deploy
Annotations:              <none>
Selector:                 run=nginx-deploy
Type:                     NodePort
IP:                       10.99.105.250
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32047/TCP
Endpoints:                10.244.1.8:80,10.244.2.3:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>





 

To make an application in the cluster reachable from outside, configure it as follows:

##change type: ClusterIP to type: NodePort, i.e. a node port — the port every node exposes externally
[root@master home]# kubectl edit svc nginx
##check again and you will see the corresponding 80:32047/TCP mapping
[root@master home]# kubectl get svc 
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3h22m
nginx        NodePort    10.99.105.250   <none>        80:32047/TCP   43m

#finally, the application is reachable from outside via any node: http://192.168.0.106:32047/, http://192.168.0.120:32047/

Part 3: Kubernetes authentication and serviceaccounts (creating the jason user)

//Every pod runs under a service account, which is mainly used to authenticate its connection to the apiserver;
//it can be specified in the YAML resource manifest, and we can also create service accounts ourselves:
kubectl create serviceaccount mysa -o yaml
//list service accounts: kubectl get sa

//view the current kubeconfig, including clusters (cluster info), contexts (which cluster is used by which user), and users
[root@master pki]# kubectl config view
apiVersion: v1
clusters: //cluster entries; there can be more than one
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.104:6443
  name: kubernetes
contexts:
- context: //specifies which cluster is accessed by which user
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users: //user entries; the file can hold several users
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

//Now create our own client certificate, signed by the cluster CA, working in /etc/kubernetes/pki:
1. (umask 077; openssl genrsa -out jason.key 2048)   //create jason.key
2. openssl req -new -key jason.key -out jason.csr -subj "/CN=jason"   //create the CSR jason.csr for the user jason
3. openssl x509 -req -in jason.csr -CA ca.crt -CAkey ca.key -out jason.crt -CAcreateserial -days 366   //sign jason.crt with the cluster CA, valid for 366 days
//The jason certificate has now been created.
//Its contents can be inspected with: openssl x509 -in jason.crt -text -noout
//Next, add the jason certificate to the users section of the kubeconfig:
kubectl config set-credentials jason  --client-certificate=jason.crt --client-key=jason.key --embed-certs=true
/**** create a context linking the account to the cluster ****/
kubectl config set-context kubernetes-jason@kubernetes  --cluster=kubernetes --user=jason
/********** finally, switch to the newly created jason user **********/
kubectl config use-context  kubernetes-jason@kubernetes
//try to list the pods
kubectl get pods 
This produces: Error from server (Forbidden): pods is forbidden: User "jason" cannot list resource "pods" in API group "" in the namespace "default" — so jason authenticates, but has no permissions yet.
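To keep administering the cluster, switch back to the admin context; authorization for jason is added in Part 4:

kubectl config use-context kubernetes-admin@kubernetes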

//Below is the configuration after these steps, viewed with: kubectl config view
[root@master .kube]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.104:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
- context: //the newly added context, pointing at the kubernetes cluster
    cluster: kubernetes
    user: jason
  name: kubernetes-jason@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: jason //the newly added user
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED





Part 4: RBAC authorization in Kubernetes

Option 1: RBAC --> (role, rolebinding)
//bind the jason account created in Part 3 to a role via a rolebinding, giving it permission to read pods
1. First generate a role YAML file in the manifests directory.
[root@master mainfests]# kubectl create role pod-reader  --verb=get,list,watch --resource=pods,pods/status --dry-run -o yaml > role-demo.yaml

2. Apply the role-demo.yaml file just created.
[root@master mainfests]# kubectl create -f role-demo.yaml 

3. View or describe the role just created.
[root@master mainfests]# kubectl get role
NAME         AGE
pod-reader   12s
[root@master mainfests]# kubectl describe role pod-reader

4. Bind the jason user with a rolebinding: first generate the binding file, then apply it.
[root@master mainfests]# kubectl create rolebinding jason-read-pods --role=pod-reader --user=jason --dry-run -o yaml > rolebingding-demo.yaml
[root@master mainfests]# kubectl apply -f rolebingding-demo.yaml 

5. Finally, switch to the jason user and query the pods again.
 kubectl config use-context  kubernetes-jason@kubernetes

Option 2: RBAC --> (clusterrole, clusterrolebinding)
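The cluster-wide variant works the same way, just with clusterrole and clusterrolebinding. A sketch (the resource and binding names here are illustrative, not from the original setup):

kubectl create clusterrole pods-reader --verb=get,list,watch --resource=pods --dry-run -o yaml > clusterrole-demo.yaml
kubectl create -f clusterrole-demo.yaml
kubectl create clusterrolebinding jason-read-pods-cluster --clusterrole=pods-reader --user=jason --dry-run -o yaml > clusterrolebinding-demo.yaml
kubectl apply -f clusterrolebinding-demo.yaml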

Part 5: Deploying the Kubernetes Dashboard web UI

Contents:

    • 1 Token-based login
    • 2 kubeconfig-file login
Step 1: Installation
Download the manifest: wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Edit the Service in it to expose a node port:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    nodePort: 30000
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
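Then apply the edited manifest and check that the pod comes up (sketch):

kubectl apply -f kubernetes-dashboard.yaml
kubectl get pods -n kube-system -o wide | grep dashboard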


The pod kept ending up in ImagePullBackOff. It turned out the dashboard had been scheduled onto a node, and that node did not have the image.
I had installed the dashboard from the YAML manifest, which needs to pull the dashboard image into Docker, but the image is hosted on a site that cannot be reached,
so I edited the YAML and added imagePullPolicy: IfNotPresent under the container (meaning: if the image cannot be pulled, use the local copy).
The manifest then applied, but the pod still failed to run; the cause was that it ran on a worker node, so the dashboard image also has to be pulled and tagged on that node:
docker tag docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64   k8s-dashboard

Step 2: Access
After kubernetes-dashboard was deployed, it would not open in Chrome; the following commands fix the certificate problem that blocks access.
mkdir key && cd key
#generate a certificate
openssl genrsa -out dashboard.key 2048 
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.0.106'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt -CAcreateserial -days 366
#delete the original certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kube-system
#create a new certificate secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
#list the pods
kubectl get pod -n kube-system
#restart the dashboard by deleting its pod (the kubernetes-dashboard pod running on the node; substitute its actual name)
kubectl delete pod <kubernetes-dashboard-pod-name> -n kube-system


1 Token-based login (commonly used)

(1) Create a serviceaccount

[root@master1 pki]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@master1 pki]# kubectl get sa -n kube-system
NAME                                 SECRETS   AGE
......
dashboard-admin                      1         13s
......

(2) Bind the serviceaccount to cluster-admin, granting it administrative access to the whole cluster

[root@master1 pki]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created

(3) Get the serviceaccount's secret, which contains the token

[root@master1 pki]# kubectl get secret -n kube-system
NAME                                             TYPE                                  DATA     
......    
daemon-set-controller-token-t4jhj                kubernetes.io/service-account-token   3   
......
[root@master1 pki]# kubectl describe secret dashboard-admin-token-lg48q -n kube-system
Name:         dashboard-admin-token-lg48q
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 3cf69e4e-2458-11e9-81cc-000c291e37c2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbGc0OHEiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2NmNjllNGUtMjQ1OC0xMWU5LTgxY2MtMDAwYzI5MWUzN2MyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.gMdqXvyP3ClIL0eo8061UnK8QbEgdAeVZV92GMxJlxhs8aK8c278e8yNWzx68LvySg1ciXDI7Pqlo9caUL2K8tC2BRvLvarbgvhPnFlRvYrm6bO1PdD2XSg60JTkPxX_AXRrQG2kAAf3C3cbTgKEPvoX5fwvXgGLWsJ1rX0vStSBCsLlSJkTmoDp9rdYD1AU-32lN1eNfFueIIY8tIpeP7_eYdfvwSXnsbqXxr9K7zD6Zu7QM1T1tG0X0-D0MHKNDGP_YQ7S2ANo3FDd7OUiitGQRA1H7cO_LF7M_BKtzotBVCEbOGjNmnaVuL4y5XXvP0JHtlQxpnBzAOU9V9-tRw

(4) Expose the port with kubectl patch (equivalently, just set type: NodePort on the service)

[root@master1 pki]# kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system
service/kubernetes-dashboard patched
[root@master1 pki]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10    <none>        53/UDP,53/TCP   49d
kubernetes-dashboard   NodePort    10.99.54.66   <none>        443:32639/TCP   10m

(5) Open the dashboard in a browser and paste in the token to log in

(screenshot: dashboard token login page)

2 kubeconfig-file login

Create a serviceaccount that only has rights in the default namespace. Note that the token value can change (for example after the machine restarts), so look it up each time rather than hard-coding it.

[root@master1 pki]# kubectl create serviceaccount def-ns-admin -n default
serviceaccount/def-ns-admin created
[root@master1 pki]# kubectl create rolebinding def-ns-admin --clusterrole=admin --serviceaccount=default:def-ns-admin
rolebinding.rbac.authorization.k8s.io/def-ns-admin created
[root@master1 pki]# kubectl get secret
NAME                       TYPE                                  DATA   AGE
admin-token-bwrbg          kubernetes.io/service-account-token   3      5d1h
def-ns-admin-token-xdvx5   kubernetes.io/service-account-token   3      2m9s
default-token-87nlt        kubernetes.io/service-account-token   3      49d
tomcat-ingress-secret      kubernetes.io/tls                     2      21d
[root@master1 pki]# kubectl describe secret def-ns-admin-token-xdvx5
Name:         def-ns-admin-token-xdvx5
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: def-ns-admin
              kubernetes.io/service-account.uid: 928bbca1-245c-11e9-81cc-000c291e37c2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi14ZHZ4NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5MjhiYmNhMS0yNDVjLTExZTktODFjYy0wMDBjMjkxZTM3YzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.EzUF13MElI8b-kuQNh_u1hGQpxgoffm4LdTVoeORKUBTADwqHEtW2arj76oZuI-wQyy5P0v5VvOoefr6h3NpIgbAze8Lqyrpg9wO0Crfi30IE1kZ2wUPYU9P_5inMxmCPLttppyPyc8mQKDkOOB1xFUmEebC63my-dG4CZljsd8zwNU6eXnhaThSUUn12UTvRsbSBLD-dvau1OY6YgDL6mgFl3cVqzCPd7ELpEyNYWCh3x5rcRfCcjcHGfUOrWjDzhgmH6sUiWb4gMHvSKgp-35rj5LXERfebse3OxSAXODJw9FhSn15VCmYcDmCJzMN83emFBwn0Y7bb11Y6M8CrQ

With this setup the permissions are narrower: after logging in with this token you only have access to the default namespace.

[root@master1 pki]# kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://10.0.0.100:6443" --embed-certs=true --kubeconfig=/root/def-ns-admin.conf
Cluster "kubernetes" set.
[root@master1 pki]# kubectl config view --kubeconfig=/root/def-ns-admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.0.0.100:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
[root@master1 ~]# kubectl get secret
NAME                       TYPE                                  DATA   AGE
def-ns-admin-token-xdvx5   kubernetes.io/service-account-token   3      5d
[root@master1 ~]# kubectl describe secret def-ns-admin-token-xdvx5
Name:         def-ns-admin-token-xdvx5
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: def-ns-admin
              kubernetes.io/service-account.uid: 928bbca1-245c-11e9-81cc-000c291e37c2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi14ZHZ4NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5MjhiYmNhMS0yNDVjLTExZTktODFjYy0wMDBjMjkxZTM3YzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.EzUF13MElI8b-kuQNh_u1hGQpxgoffm4LdTVoeORKUBTADwqHEtW2arj76oZuI-wQyy5P0v5VvOoefr6h3NpIgbAze8Lqyrpg9wO0Crfi30IE1kZ2wUPYU9P_5inMxmCPLttppyPyc8mQKDkOOB1xFUmEebC63my-dG4CZljsd8zwNU6eXnhaThSUUn12UTvRsbSBLD-dvau1OY6YgDL6mgFl3cVqzCPd7ELpEyNYWCh3x5rcRfCcjcHGfUOrWjDzhgmH6sUiWb4gMHvSKgp-35rj5LXERfebse3OxSAXODJw9FhSn15VCmYcDmCJzMN83emFBwn0Y7bb11Y6M8CrQ

[root@master1 pki]# kubectl config set-credentials def-ns-admin --token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi14ZHZ4NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5MjhiYmNhMS0yNDVjLTExZTktODFjYy0wMDBjMjkxZTM3YzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.EzUF13MElI8b-kuQNh_u1hGQpxgoffm4LdTVoeORKUBTADwqHEtW2arj76oZuI-wQyy5P0v5VvOoefr6h3NpIgbAze8Lqyrpg9wO0Crfi30IE1kZ2wUPYU9P_5inMxmCPLttppyPyc8mQKDkOOB1xFUmEebC63my-dG4CZljsd8zwNU6eXnhaThSUUn12UTvRsbSBLD-dvau1OY6YgDL6mgFl3cVqzCPd7ELpEyNYWCh3x5rcRfCcjcHGfUOrWjDzhgmH6sUiWb4gMHvSKgp-35rj5LXERfebse3OxSAXODJw9FhSn15VCmYcDmCJzMN83emFBwn0Y7bb11Y6M8CrQ --kubeconfig=/root/def-ns-admin.conf
User "def-ns-admin" set.

# set the context
[root@master1 pki]# kubectl config set-context def-ns-admin@kubernetes --cluster=kubernetes --user=def-ns-admin --kubeconfig=/root/def-ns-admin.conf
Context "def-ns-admin@kubernetes" created.

# use-context
[root@master1 pki]# kubectl config use-context def-ns-admin@kubernetes --kubeconfig=/root/def-ns-admin.conf
Switched to context "def-ns-admin@kubernetes".

# view the conf file; it is now complete
[root@master1 pki]# kubectl config view --kubeconfig=/root/def-ns-admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.0.0.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: def-ns-admin
  name: def-ns-admin@kubernetes
current-context: def-ns-admin@kubernetes
kind: Config
preferences: {}
users:
- name: def-ns-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi14ZHZ4NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5MjhiYmNhMS0yNDVjLTExZTktODFjYy0wMDBjMjkxZTM3YzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.EzUF13MElI8b-kuQNh_u1hGQpxgoffm4LdTVoeORKUBTADwqHEtW2arj76oZuI-wQyy5P0v5VvOoefr6h3NpIgbAze8Lqyrpg9wO0Crfi30IE1kZ2wUPYU9P_5inMxmCPLttppyPyc8mQKDkOOB1xFUmEebC63my-dG4CZljsd8zwNU6eXnhaThSUUn12UTvRsbSBLD-dvau1OY6YgDL6mgFl3cVqzCPd7ELpEyNYWCh3x5rcRfCcjcHGfUOrWjDzhgmH6sUiWb4gMHvSKgp-35rj5LXERfebse3OxSAXODJw9FhSn15VCmYcDmCJzMN83emFBwn0Y7bb11Y6M8CrQ

Copy the conf file to your local machine and use it to log in to the dashboard.


Part 6: Volumes

   

Kubernetes supports several volume types, including:
emptyDir, hostPath, nfs, pvc, gitRepo
1. emptyDir
###demo of an emptyDir mount
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    -  name: cache-volume
       mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}  
2. hostPath
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: busyboxvol
    image: busybox:v12
    volumeMounts:
    - name: html
      mountPath: /data/
  volumes:
  - name: html
    hostPath:
      path: /home/data/vol
      type: DirectoryOrCreate
3. nfs — one machine acts as the share host and exports a directory
 a. yum install nfs-utils -y (install on every machine)
 b. on the host that will share, create the shared directory: mkdir /data/volumes
 c. configure the export: vim /etc/exports
   content: /data/volumes  192.168.33.2/16(rw,no_root_squash)
 d. on the other nodes: mount -t nfs master:/data/volumes /mnt
Finally, run mount and you will see the newly added share:
master:/data/volumes on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.33.35,local_lock=none,addr=192.168.33.52)
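On the sharing host the NFS server must also be running and the directory exported; a sketch for CentOS 7:

systemctl enable nfs-server && systemctl start nfs-server   # start the NFS server and enable it on boot
exportfs -arv                                               # (re)export everything listed in /etc/exports
showmount -e master                                         # from a node: verify the export is visible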
# mount point inside the container
    volumeMounts:
    - name: redis-persistent-storage
      mountPath: /data
# volume definition: the directory exported by the NFS server
  volumes:
  - name: redis-persistent-storage
    nfs:
      path: /k8s-nfs/redis/data
      server: 192.168.8.150
4. pvc — somewhat involved to set up; not covered here

 

 


Reference: https://blog.csdn.net/shenhonglei1234/article/details/80803489
