Introduction
Kubernetes (k8s for short) is an open-source system for automating the deployment, scaling, and management of containerized applications.
Chinese official site: https://kubernetes.io/zh/
Chinese community: https://www.kubernetes.org.cn/
Official docs: https://kubernetes.io/zh/docs/home/
Community docs: https://docs.kubernetes.org.cn/
Reference:
https://blog.csdn.net/hancoder/article/details/107612802
Quick Start
1. Install minikube
https://github.com/kubernetes/minikube/releases
Download minikube-windows-amd64.exe and rename it to minikube.exe
Open VirtualBox, then open cmd and run:
minikube start --vm-driver=virtualbox --registry-mirror=https://registry.docker-cn.com
Wait about 20 minutes and it is ready.
2. Try deploying and upgrading nginx
- Submit an nginx deployment:
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
- Upgrade the nginx deployment:
kubectl apply -f https://k8s.io/examples/application/deployment-update.yaml
- Scale the nginx deployment
Installing a K8s Cluster
Prerequisites
One or more machines running CentOS 7.x-86_x64
Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
Full network connectivity between all machines in the cluster
Internet access, for pulling images
Swap disabled
Deployment steps
- Install Docker and kubeadm on all nodes
- Deploy the Kubernetes Master
- Deploy a container network plugin
- Deploy the Kubernetes Nodes and join them to the cluster
- Deploy the Dashboard web UI to inspect Kubernetes resources visually
Environment preparation
Start three virtual machines
Use vagrant to set up the VM environment
In fact, vagrant can bootstrap an entire K8s cluster in one step:
https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster
http://github.com/davidkbainbridge/k8s-playground
(3) Configure the Linux environment (run on all three nodes)
- Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
- Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
- Disable swap
swapoff -a # disable temporarily
sed -ri 's/.*swap.*/#&/' /etc/fstab # disable permanently
free -g # verify; Swap must show 0
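The sed invocation above comments out every fstab line that mentions swap (`&` re-inserts the matched line after the `#`). A small sketch lets you try the same edit on a scratch copy before touching the real /etc/fstab:

```shell
# disable_swap_in FILE - comment out every line of FILE that mentions
# swap, exactly as done above for /etc/fstab. "&" in the replacement
# re-inserts the whole matched line after the "#".
disable_swap_in() {
    sed -ri 's/.*swap.*/#&/' "$1"
}

# usage: cp /etc/fstab /tmp/fstab.copy && disable_swap_in /tmp/fstab.copy
```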
- Add hostname-to-IP mappings:
Check the hostname:
hostname
If the hostname is wrong, change it with "hostnamectl set-hostname <new-hostname>"
vi /etc/hosts
10.0.2.9 k8s-node1
10.0.2.10 k8s-node2
10.0.2.11 k8s-node3
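The three entries above can also be appended with a short helper that skips hostnames already mapped, so re-running it on a node is harmless. This is a sketch; the file is passed as a parameter only so it can be exercised against a scratch copy before targeting /etc/hosts:

```shell
# add_host FILE IP NAME - append "IP NAME" to FILE unless NAME is
# already mapped there, so the call is safe to repeat.
add_host() {
    file="$1"; ip="$2"; name="$3"
    grep -qw "$name" "$file" 2>/dev/null || printf '%s %s\n' "$ip" "$name" >> "$file"
}

# usage (as root, on each node):
#   add_host /etc/hosts 10.0.2.9  k8s-node1
#   add_host /etc/hosts 10.0.2.10 k8s-node2
#   add_host /etc/hosts 10.0.2.11 k8s-node3
```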
Pass bridged IPv4 traffic to iptables chains:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
Apply the settings:
sysctl --system
Troubleshooting: on a "read-only file system" error, remount the root filesystem read-write:
mount -o remount,rw /
- Sync the system time (optional; check it with date)
yum -y install ntpdate
ntpdate time.windows.com # sync to the latest time
5) Install docker, kubeadm, kubelet, and kubectl on all nodes
Kubernetes' default CRI (container runtime) is Docker, so install Docker first.
(1) Install Docker
1. Remove older versions of docker
$ sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
2. Install Docker CE
$ sudo yum install -y yum-utils
$ sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum -y install docker-ce docker-ce-cli containerd.io
3. Configure a registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://ke9h1pt4.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
4. Enable Docker at boot (docker was already started by the restart above)
systemctl enable docker
(2) Add the Aliyun yum repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
More details: https://developer.aliyun.com/mirror/kubernetes
(3) Install kubeadm, kubelet, and kubectl
yum list|grep kube
Install:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
Enable at boot:
systemctl enable kubelet && systemctl start kubelet
Check kubelet status:
systemctl status kubelet
Check the kubelet version:
[root@k8s-node2 ~]# kubelet --version
Kubernetes v1.17.3
6) Deploy the k8s master
(1) Initialize the master node
On the master node, create master_images.sh with the content below and execute it; if it lacks execute permission, grant it with: chmod +x master_images.sh
#!/bin/bash
images=(
kube-apiserver:v1.17.3
kube-proxy:v1.17.3
kube-controller-manager:v1.17.3
kube-scheduler:v1.17.3
coredns:1.6.5
etcd:3.4.3-0
pause:3.1
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
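Before pulling, it can help to preview the fully qualified names that loop will fetch; the same iteration with echo in place of docker pull shows them (a sketch, using a plain for list instead of the bash array so it runs under any POSIX shell):

```shell
# Print the fully qualified image names master_images.sh would pull,
# without touching docker: the same loop, with echo in place of pull.
list_images() {
    for i in kube-apiserver:v1.17.3 kube-proxy:v1.17.3 \
             kube-controller-manager:v1.17.3 kube-scheduler:v1.17.3 \
             coredns:1.6.5 etcd:3.4.3-0 pause:3.1; do
        echo "registry.cn-hangzhou.aliyuncs.com/google_containers/$i"
    done
}
list_images
```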
Initialize kubeadm:
kubeadm init \
--apiserver-advertise-address=10.0.2.9 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.17.3 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
Notes:
- --apiserver-advertise-address=10.0.2.9 is the master host's IP address (the eth0 address mentioned above);
The output:
[root@k8s-node1 opt]# kubeadm init \
> --apiserver-advertise-address=10.0.2.9 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
> --kubernetes-version v1.17.3 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
W0503 14:07:12.594252 10124 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0503 14:07:30.908642 10124 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0503 14:07:30.911330 10124 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.506521 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: sg47f3.4asffoi6ijb8ljhq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
# Kubernetes has been initialized successfully
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
Copy the join command below; it will be needed later.
kubeadm join 10.0.2.15:6443 --token sg47f3.4asffoi6ijb8ljhq \
--discovery-token-ca-cert-hash sha256:81fccdd29970cbc1b7dc7f171ac0234d53825bdf9b05428fc9e6767436991bfb
[root@k8s-node1 opt]#
Because the default image registry k8s.gcr.io is unreachable from mainland China, kubeadm is pointed at the Aliyun mirror here; the images can also be pulled ahead of time with the images.sh script above.
The registry registry.aliyuncs.com/google_containers works as well.
Background: Classless Inter-Domain Routing (CIDR) is a method for allocating IP addresses and efficiently routing IP packets on the internet.
Pulls may still fail; download the images manually if needed.
When init completes, copy the printed cluster-join token right away.
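The sha256 value in the join command is a hash of the cluster CA's public key. If it gets lost, it can be recomputed from ca.crt on the master; the sketch below uses the openssl pipeline described in the kubeadm documentation:

```shell
# ca_cert_hash CERT - recompute the --discovery-token-ca-cert-hash value
# from the cluster CA certificate (normally /etc/kubernetes/pki/ca.crt).
# Works for the RSA keys kubeadm generates by default.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 \
      | awk '{print "sha256:" $NF}'   # last field is the hex digest
}

# usage on the master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```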
(2) Test kubectl (run on the master node)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Detailed deployment docs: https://kubernetes.io/docs/concepts/cluster-administration/addons/
kubectl get nodes # list all nodes
The master's status is NotReady for now; it becomes Ready once the pod network is in place.
journalctl -u kubelet # view the kubelet logs
The join command from earlier, repeated here for convenience:
kubeadm join 10.0.2.15:6443 --token sg47f3.4asffoi6ijb8ljhq \
--discovery-token-ca-cert-hash sha256:81fccdd29970cbc1b7dc7f171ac0234d53825bdf9b05428fc9e6767436991bfb
7) Install a Pod network plugin (CNI)
On the master node, install the Pod network plugin:
kubectl apply -f \
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The URL above may be blocked; in that case apply a locally downloaded copy of kube-flannel.yml, e.g.:
[root@k8s-node1 k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
To remove it: kubectl delete -f kube-flannel.yml
If the images referenced in flannel.yml cannot be pulled either, find a mirror on Docker Hub, wget that yml,
and edit it (vi) to replace all of the amd64 image addresses.
Wait about 3 minutes.
kubectl get ns # list namespaces
kubectl get pods -n kube-system # list the pods in a given namespace
kubectl get pods --all-namespaces # list the pods in all namespaces
$ ip link set cni0 down # if the network misbehaves, bring cni0 down, reboot the VM, and retest
Run watch kubectl get pod -n kube-system -o wide to monitor pod progress.
Wait 3-10 minutes; continue once everything is Running.
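Rather than eyeballing the watch output, a small helper can count how many pods are not yet Running. It only parses the text kubectl prints, so it assumes the default table layout (STATUS in the third column of `kubectl get pods`):

```shell
# not_running LISTING - given the output of "kubectl get pods", print how
# many pods are in a state other than Running (the header row is skipped).
not_running() {
    printf '%s\n' "$1" | awk 'NR > 1 && $3 != "Running" { n++ } END { print n+0 }'
}

# usage on the master, loop until this prints 0:
#   not_running "$(kubectl get pods -n kube-system)"
```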
View the namespaces:
[root@k8s-node1 k8s]# kubectl get ns
NAME STATUS AGE
default Active 30m
kube-node-lease Active 30m
kube-public Active 30m
kube-system Active 30m
[root@k8s-node1 k8s]#
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-546565776c-9sbmk 0/1 Pending 0 31m
kube-system coredns-546565776c-t68mr 0/1 Pending 0 31m
kube-system etcd-k8s-node1 1/1 Running 0 31m
kube-system kube-apiserver-k8s-node1 1/1 Running 0 31m
kube-system kube-controller-manager-k8s-node1 1/1 Running 0 31m
kube-system kube-flannel-ds-amd64-6xwth 1/1 Running 0 2m50s
kube-system kube-proxy-sz2vz 1/1 Running 0 31m
kube-system kube-scheduler-k8s-node1 1/1 Running 0 31m
[root@k8s-node1 k8s]#
View the node info on the master:
[root@k8s-node1 k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready master 34m v1.17.3 # status must be Ready before running the commands below
[root@k8s-node1 k8s]#
Finally, run the join command, on both "k8s-node2" and "k8s-node3":
kubeadm join 10.0.2.15:6443 --token sg47f3.4asffoi6ijb8ljhq \
--discovery-token-ca-cert-hash sha256:81fccdd29970cbc1b7dc7f171ac0234d53825bdf9b05428fc9e6767436991bfb
[root@k8s-node1 opt]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready master 47m v1.17.3
k8s-node2 NotReady <none> 75s v1.17.3
k8s-node3 NotReady <none> 76s v1.17.3
Monitor pod progress:
watch kubectl get pod -n kube-system -o wide
Once every status is Running, check the node list again:
[root@k8s-node1 ~]# kubectl get nodes;
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready master 3h50m v1.17.3
k8s-node2 Ready <none> 3h3m v1.17.3
k8s-node3 Ready <none> 3h3m v1.17.3
8) Join the Kubernetes worker nodes
On each worker node, run the kubeadm join command printed by kubeadm init to add the node to the cluster;
confirm that each node joins successfully.
If the token has expired, generate a new join command with:
kubeadm token create --print-join-command
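The regenerated command should have the same shape as the one kubeadm init printed. A quick sanity check before pasting it onto the workers can catch copy-paste damage; the pattern below is an assumption about the usual single-line output format:

```shell
# is_join_cmd STRING - succeed when STRING looks like a kubeadm join
# command: "kubeadm join <ip>:<port> --token <t>.<t>
#           --discovery-token-ca-cert-hash sha256:<64 hex chars>"
is_join_cmd() {
    printf '%s' "$1" | grep -Eq \
      '^kubeadm join [0-9.]+:[0-9]+ --token [a-z0-9]+\.[a-z0-9]+ +--discovery-token-ca-cert-hash sha256:[0-9a-f]{64}$'
}

# usage: is_join_cmd "$(kubeadm token create --print-join-command)" && echo looks ok
```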
9) Getting started with the kubernetes cluster
1. Deploy an nginx on the master node
kubectl create deployment nginx --image=nginx
List all resources:
[root@k8s-node1 k8s]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-86c57db685-zrts7 0/1 ContainerCreating 0 9s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 110m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 0/1 1 0 9s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-86c57db685 1 1 0 9s
kubectl get pods -o wide shows where the pod landed; here nginx was scheduled onto k8s-node3
[root@k8s-node1 k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-86c57db685-zrts7 1/1 Running 0 49s 10.244.2.4 k8s-node3 <none> <none>
[root@k8s-node1 k8s]#
[root@k8s-node1 k8s]# kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-86c57db685-zrts7 1/1 Running 0 86s 10.244.2.4 k8s-node3
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 111m
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/nginx 1/1 1 1 86s nginx nginx app=nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/nginx-86c57db685 1 1 1 86s nginx nginx app=nginx,pod-template-hash=86c57db685
See which images were pulled on node3:
[root@k8s-node3 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 4bb46517cac3 12 days ago 133MB
tomcat latest 2ae23eb477aa 2 weeks ago 647MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.17.3 ae853e93800d 6 months ago 116MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 19 months ago 52.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
See which containers are running on node3:
[root@k8s-node3 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7f8db6d37a1c nginx "/docker-entrypoint.…" 3 minutes ago Up 3 minutes k8s_nginx_nginx-86c57db685-zrts7_default_581c8805-0dcb-470b-b29e-c90feacf4fa1_0
0b81a9002e63 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_nginx-86c57db685-zrts7_default_581c8805-0dcb-470b-b29e-c90feacf4fa1_0
b9ca292ad8b5 ff281650a721 "/opt/bin/flanneld -…" 2 hours ago Up 2 hours k8s_kube-flannel_kube-flannel-ds-amd64-pk9bf_kube-system_b3b033b8-cbd9-45d2-a36b-2f669c8f95bc_1
c1c5a0279fe6 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy "/usr/local/bin/kube…" 2 hours ago Up 2 hours k8s_kube-proxy_kube-proxy-ltt6w_kube-system_75d0dcd5-a21c-4d37-97e2-fa258bbc2d3c_0
e14ed7abb6b8 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 2 hours ago Up 2 hours k8s_POD_kube-proxy-ltt6w_kube-system_75d0dcd5-a21c-4d37-97e2-fa258bbc2d3c_0
a751e6e8e49e registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 2 hours ago Up 2 hours k8s_POD_kube-flannel-ds-amd64-pk9bf_kube-system_b3b033b8-cbd9-45d2-a36b-2f669c8f95bc_0
Run on node1:
[root@k8s-node1 k8s]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-86c57db685-zrts7 1/1 Running 0 4m35s
[root@k8s-node1 k8s]#
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-86c57db685-zrts7 1/1 Running 0 4m46s
kube-system coredns-7f9c544f75-k9dsq 1/1 Running 0 114m
kube-system coredns-7f9c544f75-wrzzv 1/1 Running 0 114m
kube-system etcd-k8s-node1 1/1 Running 0 114m
kube-system kube-apiserver-k8s-node1 1/1 Running 0 114m
kube-system kube-controller-manager-k8s-node1 1/1 Running 0 114m
kube-system kube-flannel-ds-amd64-5kmxc 1/1 Running 0 108m
kube-system kube-flannel-ds-amd64-bjvnl 1/1 Running 0 103m
kube-system kube-flannel-ds-amd64-pk9bf 1/1 Running 1 101m
kube-system kube-proxy-ltt6w 1/1 Running 0 101m
kube-system kube-proxy-tc48c 1/1 Running 0 114m
kube-system kube-proxy-vqhfj 1/1 Running 0 103m
kube-system kube-scheduler-k8s-node1 1/1 Running 0 114m
2. Expose the service
On the master, run:
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
The service's port 80 maps to the container's port 80; --type=NodePort additionally opens an automatically assigned port (30000-32767) on every node.
View the service:
[root@k8s-node1 k8s]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 118m
nginx NodePort 10.96.8.77 <none> 80:31008/TCP 22s
[root@k8s-node1 k8s]#
[root@k8s-node1 k8s]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 119m
nginx NodePort 10.96.8.77 <none> 80:31008/TCP 35s app=nginx
Visit http://192.168.56.100:31008/
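The node port (31008 here) is the number after the colon in the PORT(S) column. A small helper pulls it out of kubectl's output so it does not have to be read by eye; it assumes the default `kubectl get svc` table layout (PORT(S) in the fifth column):

```shell
# node_port LISTING SVC - extract the NodePort from "kubectl get svc"
# output, e.g. "80:31008/TCP" -> 31008.
node_port() {
    printf '%s\n' "$1" | awk -v svc="$2" \
      '$1 == svc { split($5, p, /[:\/]/); print p[2] }'
}

# usage on the master: node_port "$(kubectl get svc)" nginx
```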
[root@k8s-node1 ~]# kubectl get all
3. Dynamic scaling test
Upgrade an application: kubectl set image (see --help)
Scale out: kubectl scale --replicas=3 deployment nginx
[root@k8s-node1 k8s]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 10m
[root@k8s-node1 k8s]#
[root@k8s-node1 k8s]# kubectl scale --replicas=3 deployment nginx # scale out
deployment.apps/nginx scaled
[root@k8s-node1 k8s]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/3 3 1 11m
[root@k8s-node1 k8s]#
[root@k8s-node1 k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-86c57db685-5bgj8 1/1 Running 0 2m1s 10.244.2.5 k8s-node3 <none> <none>
nginx-86c57db685-hv2cc 1/1 Running 0 2m1s 10.244.1.4 k8s-node2 <none> <none>
nginx-86c57db685-zrts7 1/1 Running 0 13m 10.244.2.4 k8s-node3 <none> <none>
[root@k8s-node1 k8s]#
[root@k8s-node1 k8s]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 122m
nginx NodePort 10.96.8.77 <none> 80:31008/TCP 3m57s app=nginx
With multiple replicas, nginx is reachable on that NodePort of whichever node you hit:
http://192.168.56.101:31008/
http://192.168.56.102:31008/
Scale in: kubectl scale --replicas=2 deployment nginx
[root@k8s-node1 k8s]# kubectl scale --replicas=2 deployment nginx
deployment.apps/nginx scaled
[root@k8s-node1 k8s]#
[root@k8s-node1 k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-86c57db685-hv2cc 1/1 Running 0 4m51s 10.244.1.4 k8s-node2 <none> <none>
nginx-86c57db685-zrts7 1/1 Running 0 16m 10.244.2.4 k8s-node3 <none> <none>
Delete the pod and the service:
[root@k8s-node1 k8s]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-86c57db685-hv2cc 1/1 Running 0 5m52s
pod/nginx-86c57db685-zrts7 1/1 Running 0 17m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 127m
service/nginx NodePort 10.96.8.77 <none> 80:31008/TCP 8m53s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 2/2 2 2 17m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-86c57db685 2 2 2 17m
[root@k8s-node1 k8s]#
[root@k8s-node1 k8s]# kubectl delete deployment.apps/nginx
[root@k8s-node1 k8s]# kubectl delete service/nginx
3. K8s Details
1. kubectl documentation
https://kubernetes.io/zh/docs/reference/kubectl/overview/
2. Resource types
https://kubernetes.io/zh/docs/reference/kubectl/overview/#%e8%b5%84%e6%ba%90%e7%b1%bb%e5%9e%8b
3. Formatted output
https://kubernetes.io/zh/docs/reference/kubectl/overview/
All kubectl commands produce human-readable plain text by default. To print details in a specific format, add the -o (or --output) flag to a supported kubectl command. Syntax:
kubectl [command] [TYPE] [NAME] -o=<output_format>
Depending on the kubectl operation, the following output formats are supported:
- -o custom-columns=<spec>: print a table using a comma-separated list of custom columns.
- -o custom-columns-file=<filename>: print a table using the custom-columns template in the <filename> file.
- -o json: output a JSON-formatted API object.
- -o jsonpath=<template>: print the fields defined by a jsonpath expression.
- -o jsonpath-file=<filename>: print the fields defined by the jsonpath expression in the <filename> file.
- -o name: print only the resource name and nothing else.
- -o wide: output plain text with extra information; for pods, this includes the node name.
- -o yaml: output a YAML-formatted API object.
Example: the following command outputs a single pod's details as a YAML-formatted object:
kubectl get pod web-pod-13je7 -o yaml
Remember: see the kubectl reference documentation for details of which output formats each command supports.
--dry-run:
--dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be
sent, without sending it. If server strategy, submit server-side request without persisting the resource.
The value must be none, server, or client. With the client strategy, only the object that would be sent is printed; nothing is sent. With the server strategy, a server-side request is submitted without persisting the resource.
In other words, with the --dry-run option the command is not actually executed.
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml
W0504 03:39:08.389369 8107 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}
[root@k8s-node1 ~]#
In fact this yaml can also be written to a file and then applied with kubectl apply -f:
# write to tomcat6.yaml
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml >tomcat6.yaml
W0504 03:46:18.180366 11151 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
# change the replica count to 3
[root@k8s-node1 ~]# cat tomcat6.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3 # replica count changed to 3
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}
# apply tomcat6.yaml
[root@k8s-node1 ~]# kubectl apply -f tomcat6.yaml
deployment.apps/tomcat6 created
[root@k8s-node1 ~]#
View the pods:
[root@k8s-node1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
tomcat6-7b84fb5fdc-5jh6t 1/1 Running 0 8s
tomcat6-7b84fb5fdc-8lhwv 1/1 Running 0 8s
tomcat6-7b84fb5fdc-j4qmh 1/1 Running 0 8s
[root@k8s-node1 ~]#
View the details of one pod:
[root@k8s-node1 ~]# kubectl get pods tomcat6-7b84fb5fdc-5jh6t -o yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-05-04T03:50:47Z"
generateName: tomcat6-7b84fb5fdc-
labels:
app: tomcat6
pod-template-hash: 7b84fb5fdc
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:generateName: {}
f:labels:
.: {}
f:app: {}
f:pod-template-hash: {}
f:ownerReferences:
.: {}
k:{"uid":"292bfe3b-dd63-442e-95ce-c796ab5bdcc1"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:spec:
f:containers:
k:{"name":"tomcat"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:enableServiceLinks: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kube-controller-manager
operation: Update
time: "2020-05-04T03:50:47Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"ContainersReady"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
k:{"type":"Initialized"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
k:{"type":"Ready"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
f:containerStatuses: {}
f:hostIP: {}
f:phase: {}
f:podIP: {}
f:podIPs:
.: {}
k:{"ip":"10.244.2.7"}:
.: {}
f:ip: {}
f:startTime: {}
manager: kubelet
operation: Update
time: "2020-05-04T03:50:49Z"
name: tomcat6-7b84fb5fdc-5jh6t
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: tomcat6-7b84fb5fdc
uid: 292bfe3b-dd63-442e-95ce-c796ab5bdcc1
resourceVersion: "46229"
selfLink: /api/v1/namespaces/default/pods/tomcat6-7b84fb5fdc-5jh6t
uid: 2f661212-3b03-47e4-bcb8-79782d5c7578
spec:
containers:
- image: tomcat:6.0.53-jre8
imagePullPolicy: IfNotPresent
name: tomcat
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-bxqtw
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: k8s-node2
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-bxqtw
secret:
defaultMode: 420
secretName: default-token-bxqtw
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2020-05-04T03:50:47Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2020-05-04T03:50:49Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2020-05-04T03:50:49Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2020-05-04T03:50:47Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://18eb0798384ea44ff68712cda9be94b6fb96265206c554a15cee28c288879304
image: tomcat:6.0.53-jre8
imageID: docker-pullable://tomcat@sha256:8c643303012290f89c6f6852fa133b7c36ea6fbb8eb8b8c9588a432beb24dc5d
lastState: {}
name: tomcat
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2020-05-04T03:50:49Z"
hostIP: 10.0.2.4
phase: Running
podIP: 10.244.2.7
podIPs:
- ip: 10.244.2.7
qosClass: BestEffort
startTime: "2020-05-04T03:50:47Z"
Earlier we deployed and exposed tomcat from the command line; the same operations can be done entirely with yaml.
# the command below just produces a Deployment yaml template
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml >tomcat6-deployment.yaml
W0504 04:13:28.265432 24263 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-node1 ~]# ls tomcat6-deployment.yaml
tomcat6-deployment.yaml
[root@k8s-node1 ~]#
Edit "tomcat6-deployment.yaml" to contain:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
# deploy
[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 configured
# view the resources
[root@k8s-node1 ~]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/tomcat6-7b84fb5fdc-5jh6t 1/1 Running 0 27m
pod/tomcat6-7b84fb5fdc-8lhwv 1/1 Running 0 27m
pod/tomcat6-7b84fb5fdc-j4qmh 1/1 Running 0 27m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tomcat6 3/3 3 3 27m
NAME DESIRED CURRENT READY AGE
replicaset.apps/tomcat6-7b84fb5fdc 3 3 3 27m
[root@k8s-node1 ~]#
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort --dry-run -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
status:
  loadBalancer: {}
Concatenate this output onto "tomcat6-deployment.yaml" (separated by ---), so that one file both deploys and exposes the service:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
Deploy and expose the service:
[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 created
service/tomcat6 created
View the service and deployment info:
[root@k8s-node1 ~]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/tomcat6-7b84fb5fdc-dsqmb 1/1 Running 0 4s
pod/tomcat6-7b84fb5fdc-gbmxc 1/1 Running 0 5s
pod/tomcat6-7b84fb5fdc-kjlc6 1/1 Running 0 4s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14h
service/tomcat6 NodePort 10.96.147.210 <none> 80:30172/TCP 4s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tomcat6 3/3 3 3 5s
NAME DESIRED CURRENT READY AGE
replicaset.apps/tomcat6-7b84fb5fdc 3 3 3 5s
[root@k8s-node1 ~]#
Access port 30172 on node1, node2, and node3:
[root@k8s-node1 ~]# curl -I http://192.168.56.{100,101,102}:30172/
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT
[root@k8s-node1 ~]#
Ingress
Ingress discovers the pods behind a service and routes to them by domain name.
An Ingress controller load-balances requests across those pods.
Supports layer-4 TCP/UDP and layer-7 HTTP load balancing.
Steps:
(1) Deploy the Ingress controller
Apply "k8s/ingress-controller.yaml":
[root@k8s-node1 k8s]# kubectl apply -f ingress-controller.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
daemonset.apps/nginx-ingress-controller created
service/ingress-nginx created
Check:
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default tomcat6-7b84fb5fdc-dsqmb 1/1 Running 0 16m
default tomcat6-7b84fb5fdc-gbmxc 1/1 Running 0 16m
default tomcat6-7b84fb5fdc-kjlc6 1/1 Running 0 16m
ingress-nginx nginx-ingress-controller-9q6cs 0/1 ContainerCreating 0 40s
ingress-nginx nginx-ingress-controller-qx572 0/1 ContainerCreating 0 40s
kube-system coredns-546565776c-9sbmk 1/1 Running 1 14h
kube-system coredns-546565776c-t68mr 1/1 Running 1 14h
kube-system etcd-k8s-node1 1/1 Running 1 14h
kube-system kube-apiserver-k8s-node1 1/1 Running 1 14h
kube-system kube-controller-manager-k8s-node1 1/1 Running 1 14h
kube-system kube-flannel-ds-amd64-5xs5j 1/1 Running 2 13h
kube-system kube-flannel-ds-amd64-6xwth 1/1 Running 2 14h
kube-system kube-flannel-ds-amd64-fvnvx 1/1 Running 1 13h
kube-system kube-proxy-7tkvl 1/1 Running 1 13h
kube-system kube-proxy-mvlnk 1/1 Running 2 13h
kube-system kube-proxy-sz2vz 1/1 Running 1 14h
kube-system kube-scheduler-k8s-node1 1/1 Running 1 14h
[root@k8s-node1 k8s]#
The master node only handles scheduling here; the controller pods run on node2 and node3, which can be seen pulling images.
(2) Create an Ingress rule
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: tomcat6.kubenetes.com
    http:
      paths:
      - backend:
          serviceName: tomcat6
          servicePort: 80
[root@k8s-node1 k8s]# touch ingress-tomcat6.yaml
# put the rule above into ingress-tomcat6.yaml
[root@k8s-node1 k8s]# vi ingress-tomcat6.yaml
[root@k8s-node1 k8s]# kubectl apply -f ingress-tomcat6.yaml
ingress.extensions/web created
[root@k8s-node1 k8s]#
Add the following mapping to your local hosts file:
192.168.56.102 tomcat6.kubenetes.com
Test: http://tomcat6.kubenetes.com/
Even if one node in the cluster becomes unavailable, the whole setup keeps running.
Preview the yaml a command would generate; the create deployment command is not actually executed:
[root@k8s-node1 k8s]# kubectl create deployment nginx --image=nginx --dry-run -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
[root@k8s-node1 k8s]# kubectl get pods -o yaml > temp.yml
[root@k8s-node1 k8s]# cat temp.yml
[root@k8s-node1 k8s]# vi temp.yml
[root@k8s-node1 k8s]# kubectl apply -f temp.yml