Installation (manual): every component has to be compiled or installed from binaries, and certificates and networking must also be set up by hand.
master: API Server, Scheduler, Controller Manager, and etcd all have to be started as daemons.
node: kubelet, docker (or another container engine), kube-proxy
kubeadm: the cluster deployment tool provided by the Kubernetes project; every node needs docker-ce and kubelet installed (download URL, deployment doc)
master: API Server, Scheduler, Controller Manager, and etcd run on top of kubelet as Pods
node: every node needs kubelet and docker installed; kube-proxy also runs as a Pod
docker-ce: the container engine; kubelet: the core component that runs Pods
https://yum.kubernetes.io/
https://packages.cloud.google.com/yum/repos
1. Hostname-based communication via /etc/hosts; this is required
2. Time synchronization: ntpdate pool.ntp.org
*/1 * * * * /usr/sbin/ntpdate pool.ntp.org &>/dev/null
3. Disable firewalld and iptables.service:
systemctl disable firewalld
OS: CentOS 7.5.1804
4. Disable swap: swapoff -a
Also disable it in /etc/fstab by commenting out the swap entry: # /dev/mapper/centos-swap swap swap defaults 0 0
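A minimal sketch of making the swap change persistent (the sed pattern below is an assumption; double-check it against your actual fstab before running it):
~]# swapoff -a
~]# sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab    # comment out every active swap entry
~]# free -m                                             # Swap should now show 0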
1. etcd cluster: master nodes only;
2. flannel: all nodes in the cluster;
3. Configure the Kubernetes master: master node only;
kubernetes-master
Services to start:
kube-apiserver, kube-scheduler, kube-controller-manager
4. Configure each Node:
kubernetes-node
Start the docker service first;
Kubernetes services to start:
kube-proxy, kubelet
1. master, nodes: install kubelet, kubeadm, docker
2. master: kubeadm init to initialize the cluster
3. nodes: kubeadm join to join the cluster using token-based authentication
Deployment design doc: https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md
4. Package notes
Whether on master or node, what gets installed are the server-side components; the client (kubectl) only needs to be installed where you connect to the API server.
5. With kubeadm you only need to install:
kubeadm, docker-ce, kubelet, flannel
6. kubeadm can only be installed online (from a package repository)
Aliyun mirror: https://developer.aliyun.com/mirror/kubernetes
master, etcd:192.168.9.30
node1:192.168.9.31
node2:192.168.9.32
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Step 1: install the required system utilities
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the Docker CE repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: refresh the cache, then install Docker CE
sudo yum makecache fast
master
# yum -y install docker-ce kubelet kubeadm kubectl
Versions at the time of writing:
docker-ce.x86_64 3:19.03.8-3.el7
kubeadm.x86_64 0:1.18.0-0
kubectl.x86_64 0:1.18.0-0
kubelet.x86_64 0:1.18.0-0
node
# kubectl can be skipped if the node will not be used as a client
# yum -y install docker-ce kubelet kubeadm kubectl
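If the nodes must match the master's release exactly, a hedged sketch of pinning the package versions (the 1.18.0-0 strings are just the versions listed above; substitute whichever release you target):
# yum -y install docker-ce kubelet-1.18.0-0 kubeadm-1.18.0-0 kubectl-1.18.0-0
# yum versionlock add kubelet kubeadm kubectl   # optional; needs the yum-plugin-versionlock package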
To pull the Kubernetes component images from the default k8s.gcr.io registry, define a usable HTTPS_PROXY through an Environment variable in the docker unit file (/usr/lib/systemd/system/docker.service), in the following format:
[Service]
# HTTPS requests go through this proxy when pulling images,
# Environment="HTTPS_PROXY=http://ik8s.io:10070"   # not used here; we pull from the Aliyun mirror instead,
# no proxy for local networks; put these lines before ExecStart
Environment="NO_PROXY=192.168.9.0/24,127.0.0.0/8"
In addition, since version 1.13 Docker sets the default policy of the iptables FORWARD chain to DROP, which can break the packet forwarding Kubernetes relies on. After the docker service starts, the FORWARD chain's default policy therefore has to be set back to ACCEPT. Do this by editing /usr/lib/systemd/system/docker.service and
adding the following line right after the "ExecStart=/usr/bin/dockerd" line:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
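Instead of editing the packaged unit file (the change is lost on a docker-ce upgrade), a hedged alternative is a systemd drop-in; the file name k8s.conf is arbitrary:
~]# mkdir -p /etc/systemd/system/docker.service.d
~]# cat > /etc/systemd/system/docker.service.d/k8s.conf <<EOF
[Service]
Environment="NO_PROXY=192.168.9.0/24,127.0.0.0/8"
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
~]# systemctl daemon-reload && systemctl restart docker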
Start docker
# Container runtime setup reference
# https://kubernetes.io/docs/setup/production-environment/container-runtimes/
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://2lqq34jg.mirror.aliyuncs.com",
    "https://pee6w651.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
~]# systemctl daemon-reload
~]# systemctl start docker.service
~]# systemctl enable docker # enable at boot
~]# docker info
HTTPS Proxy: http://ik8s.io:10070
No Proxy: 192.168.100.0/24,127.0.0.0/8
Fix the docker info warnings
~]# cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
~]# sysctl --system
~]# cat /etc/sysconfig/kubelet # add the following
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
~]# rpm -ql kubelet # files installed by the kubelet package
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service
# Ignore the error caused by enabled swap; add to /etc/sysconfig/kubelet:
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs # make kube-proxy use ipvs rules
# these modules have to be loaded into the kernel at boot
ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack_ipv4
# on first setup every node only needs kubelet enabled at boot, not started
~]# systemctl enable kubelet
Enable ipvs for kube-proxy
# these modules have to be loaded into the kernel at boot
ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack_ipv4
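A minimal sketch of loading the ipvs modules now and on every boot (the modules-load.d path follows the systemd convention; on kernels 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack):
~]# cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
~]# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
~]# lsmod | grep -e ip_vs -e nf_conntrack    # verify the modules are loaded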
]# kubectl get configmap -n kube-system
~]# kubectl get configmaps -n kube-system -o yaml kube-proxy
apiVersion: v1 # if mode is left empty, kube-proxy defaults to iptables
data:
config.conf: |-
iptables: xxxx
ipvs: xxxx
mode: ""
]# Change mode to "ipvs" and reload kube-proxy, and the cluster's default iptables rules are replaced with ipvs rules
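A hedged sketch of that edit-and-reload step (deleting the kube-proxy Pods is one common way to force the DaemonSet to recreate them with the new mode):
~]# kubectl edit configmap kube-proxy -n kube-system           # set mode: "ipvs"
~]# kubectl delete pods -n kube-system -l k8s-app=kube-proxy   # DaemonSet recreates them
~]# kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs   # new Pods should log the ipvs proxier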
kubeadm init parameters
Parameter | Default / Example | Description |
---|---|---|
--apiserver-advertise-address | address | local address the API server advertises; can usually be omitted |
--apiserver-bind-port | default: 6443 | port the API server binds to |
--image-repository | registry.aliyuncs.com/google_containers | registry to pull the images from |
--ignore-preflight-errors | e.g. =Swap | ignore the swap preflight error |
--kubernetes-version | e.g. v1.16.3 | Kubernetes version to initialize |
--service-cidr | default: 10.96.0.0/12 | virtual network for Services |
--pod-network-cidr | default: 10.244.0.0/16 | Pod (container) network CIDR |
If the swap device has not been disabled, edit the kubelet configuration file /etc/sysconfig/kubelet
and make it ignore the swap-enabled error: KUBELET_EXTRA_ARGS="--fail-swap-on=false"
~]# kubeadm init --kubernetes-version=v1.18.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
# Use the Aliyun registry instead; make sure Environment="HTTPS_PROXY=xx" is commented out in docker.service
# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.16.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
Initialization goes through roughly 15 phases, and each phase's output starts with [phase name].
After it succeeds you will see the following output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.9.30:6443 --token lwnslb.ks2sx9uulokawwzg \
--discovery-token-ca-cert-hash sha256:2960d2259c63f3422dcf8958279b85fb74050a201a879924cd6eaeeb697bc1d6
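The token printed above expires after 24 hours by default; a hedged sketch of generating a fresh join command later on the master:
~]# kubeadm token create --print-join-command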
$HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: xx=
server: https://192.168.9.30:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: xxx=
client-key-data: xxx==
Check the downloaded images
~]# docker images
REPOSITORY TAG SIZE
xx/kube-proxy                v1.16.3    86.1MB  # service proxy
xx/kube-apiserver            v1.16.3    217MB   # API server
xx/kube-controller-manager   v1.16.3    163MB   # controller manager
xx/kube-scheduler            v1.16.3    87.3MB  # scheduler
xx/etcd                      3.3.15-0   247MB   # cluster store
xx/coredns                   1.6.2      44.1MB  # cluster DNS
xx/pause                     3.1        742kB   # pod infrastructure image
If the init fails, something in the earlier steps is wrong; check in this order:
Base environment
Can the hostnames be resolved? Are SELinux and iptables disabled?
Does a swap partition still exist? Check with free -m.
Have the kernel parameters been set, and has IPVS been configured? (not fatal at this stage)
Base software
Is Docker installed and running?
Is kubelet installed and running?
Did kubeadm report any other errors, and were they ignored?
Check whether kubelet is running with systemctl status kubelet.
If kubelet cannot start, read its logs and fix the reported errors.
Once all of that is resolved, re-initialize.
**kubeadm reset** wipes the node (never run this in production: it removes the cluster outright)
Follow the hints printed by kubeadm reset to clear the iptables and LVS rules.
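A hedged sketch of the manual cleanup that kubeadm reset asks for (the ipvsadm step only applies if the cluster was using ipvs mode):
~]# kubeadm reset
~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
~]# ipvsadm --clear            # needs the ipvsadm package
~]# rm -rf $HOME/.kube/config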
Check the node status
~]# kubectl get nodes # list the cluster nodes
NAME STATUS ROLES AGE VERSION
docker-master NotReady master 61m v1.16.3
Base network add-on (flannel), GitHub page
# Apply the flannel manifest; flannel runs on every node (as a DaemonSet)
For Kubernetes v1.7+ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Sometimes you get "The connection to the server raw.githubusercontent.com was refused"; the record may be DNS-poisoned, so add an entry to /etc/hosts (source: https://www.jianshu.com/p/5c1a352ba242)
~]# cat /etc/hosts # add the record, then retry the download
199.232.28.133 raw.githubusercontent.com
~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
# Check the pulled flannel image with docker images
quay.io/coreos/flannel v0.11.0-amd64
# Check the nodes
~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 12m v1.16.3
~]# kubectl get cs
NAME AGE
scheduler <unknown>
controller-manager <unknown>
etcd-0 <unknown>
# Since 1.16 the status shows as <unknown>; see https://segmentfault.com/a/1190000020912684 for the reason
# Temporary workaround
~]# kubectl get cs -o=go-template='{{printf "|NAME|STATUS|MESSAGE|\n"}}{{range .items}}{{$name := .metadata.name}}{{range .conditions}}{{printf "|%s|%s|%s|\n" $name .status .message}}{{end}}{{end}}'
|NAME|STATUS|MESSAGE|
|controller-manager|True|ok|
|scheduler|True|ok|
|etcd-0|True|{"health":"true"}|
~]# kubectl get ns # list the Kubernetes namespaces
NAME STATUS AGE
default Active 71m
kube-node-lease Active 71m
kube-public Active 71m
kube-system Active 71m
~]# kubectl get pods -n kube-system # list the system Pods
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-lskfh 1/1 Running 0 71m
coredns-58cc8c89f4-xx5gj 1/1 Running 0 71m
etcd-docker-master 1/1 Running 0 70m
kube-apiserver-docker-master 1/1 Running 0 70m
kube-controller-manager-docker-master 1/1 Running 0 70m
kube-flannel-ds-amd64-5p7gs 1/1 Running 0 6m59s
kube-proxy-lbvkn 1/1 Running 0 71m
kube-scheduler-docker-master 1/1 Running 0 70m
~]# kubectl get deployment # list the Deployments (controller status)
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 7m20s
~]# kubeadm join 192.168.9.30:6443 --token lwnslb.ks2sx9uulokawwzg \
> --discovery-token-ca-cert-hash sha256:2960d2259c63f3422dcf8958279b85fb74050a201a879924cd6eaeeb697bc1d6
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
~]# docker images # pulled automatically after joining the cluster
REPOSITORY TAG registry.aliyuncs.com/google_containers/kube-proxy v1.16.3
quay.io/coreos/flannel v0.11.0-amd64
registry.aliyuncs.com/google_containers/pause 3.1
~]# kubectl get nodes # ROLES shows <none> for worker nodes; if STATUS stays NotReady even though the node joined, it has not joined the flannel network yet
NAME STATUS ROLES AGE VERSION
master Ready master 30m v1.16.3
node1 Ready <none> 12s v1.16.3
node2 Ready <none> 14s v1.16.3
# CrashLoopBackOff: if you see this status, wait a while; the flannel image may not have finished downloading because of network problems
~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS IP NODE
coredns-58cc8c89f4-dtfqg 1/1 Running 0 10.244.0.3 master
coredns-58cc8c89f4-vj6px 1/1 Running 0 10.244.0.2 master
etcd-master 1/1 Running 0 192.168.9.30 master
kube-apiserver-master 1/1 Running 0 192.168.9.30 master
kube-controller-manager-master 1/1 Running 0 192.168.9.30 master
kube-flannel-ds-amd64-8b2m8 1/1 Running 0 192.168.9.31 node1
kube-flannel-ds-amd64-xgrqw 1/1 Running 0 192.168.9.30 master
kube-flannel-ds-amd64-xh8mw 1/1 Running 0 192.168.9.32 node2
kube-proxy-f8f7w 1/1 Running 0 192.168.9.32 node2
kube-proxy-qs98x 1/1 Running 0 192.168.9.30 master
kube-proxy-v6b7x 1/1 Running 0 192.168.9.31 node1
kube-scheduler-master 1/1 Running 0 192.168.9.30 master
~]# kubectl describe node slave1 # show the node's detailed status
Command | Description |
---|---|
kubectl get pod | list all Pods |
kubectl get pod -o wide | Pod status with more detail |
kubectl get pod --show-labels | show the Pods' labels |
kubectl get deployment | status of the Pod controllers (Deployments) |
kubectl describe node name | detailed status of the named node |
kubectl version | show the Kubernetes version |
kubectl cluster-info | show cluster information |
List the namespaces
~]# kubectl get namespaces
NAME STATUS AGE
default Active 108m # objects created without an explicit namespace land in default
kube-node-lease Active 108m
kube-public Active 108m # public; readable by anyone
kube-system Active 108m # system-level Pods all run in this namespace
Create a namespace
]# kubectl api-resources # list the resource types that can be created
]# kubectl create namespace qa # create a namespace
]# kubectl delete namespace qa # delete a namespace
]# kubectl get namespaces default -o json # namespace details in JSON format
]# kubectl describe namespaces default # describe the namespace
# Run a workload named nginx from the nginx image; --replicas sets the number of replicas
~]# kubectl run nginx --image=nginx:latest --replicas=1
# Since 1.18 kubectl run only creates a Pod, so create the Deployment directly:
~]# kubectl create deployment nginx-deploy --image=nginx
~]# kubectl get pods
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deploy-d4789f999-nmp4m 1/1 Running 0 18m 10.244.2.4 slave2
~]# kubectl get deployment nginx-deploy -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deploy 1/1 1 1 18m nginx nginx app=nginx-deploy
# name, 1 ready, 1 up-to-date, 1 available, age, container name, image nginx, controller's label selector
# Detailed information about the Pod
~]# kubectl describe pod nginx-deploy-d4789f999-nmp4m
# Detailed information about the Deployment (controller)
~]# kubectl describe deployment nginx-deploy
Name: nginx-deploy
Namespace: default
CreationTimestamp: Wed, 08 Apr 2020 11:53:57 +0800
Labels: app=nginx-deploy
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=nginx-deploy
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
Pods have two kinds of clients
# When a Pod is deleted directly, its controller rebuilds a replacement Pod
~]# kubectl delete pod nginx-6db489d4b7-kqd87
pod "nginx-6db489d4b7-kqd87" deleted
~]# kubectl get pods # if the status never becomes Running, the image pull has probably failed
NAME READY STATUS RESTARTS AGE
nginx-6db489d4b7-sghbn 1/1 Running 0 2m34s
# Delete for good, with no recreation (delete the Deployment itself)
~]# kubectl delete deployment iapp
deployment.apps "iapp" deleted
Type | Description |
---|---|
ClusterIP | the Service gets a single service IP reachable only by Pod clients inside the cluster; it cannot be reached from outside the cluster boundary; this is the default type |
NodePort | reachable by clients outside the cluster via a node port, default range 30000-32767 |
LoadBalancer | |
ExternalName |
# Usage: kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [options]
# Expose this Deployment's Pods as a Service (create a fixed endpoint)
~]# kubectl expose deployment nginx --name nginx --port=80 --protocol=TCP --target-port=80
service/nginx exposed
~]# kubectl get services # check the created Service (fixed access endpoint); short form: svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.97.222.101 <none> 80/TCP 19s
# Note: the Service defaults to type ClusterIP, so it is still only reachable inside the cluster
# Architecture at this point: service_ip:service_port proxies the nginx Pods at pod_ip:pod_port
# Delete the Service
~]# kubectl delete svc nginx
Resolve nginx through CoreDNS so the exposed Service can be reached directly by name
~]# kubectl get svc -o wide -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4h57m k8s-app=kube-dns
Create a client Pod
# Create a client Pod to test the exposed Service and CoreDNS
~]# kubectl run client --image=busybox --rm -it
/ # cat /etc/resolv.conf
nameserver 10.96.0.10 # the DNS server is CoreDNS (the kube-dns Service IP)
# search domains; to resolve nginx from a node (outside these domains) use the full name nginx.default.svc.cluster.local
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
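A quick hedged check of the name resolution from inside the busybox client (some busybox images ship a flaky nslookup; wget works as a fallback):
/ # nslookup nginx                              # resolved via the search domain default.svc.cluster.local
/ # nslookup nginx.default.svc.cluster.local    # the fully qualified name works from anywhere in the cluster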
Access nginx
/ # wget nginx # the name resolves to the Service (ClusterIP) address
Connecting to nginx (10.97.222.101:80)
~]# kubectl delete pods nginx-6db489d4b7-sghbn
pod "nginx-6db489d4b7-sghbn" deleted
# Listing the Pods again shows the replacement moved from node1 to node2
# Pods are associated through labels and label selectors rather than by address: however the backends change, anything belonging to the Deployment is picked up, so with the Service endpoint you no longer have to track individual Pods
# Accessing it from the client still works
/ # wget -O - -q http://nginx
<title>Welcome to nginx!</title>
# Whichever node the Pods run on, the Service uses iptables or ipvs rules to schedule all traffic addressed to these Pods (matched by label/selector) to the backend Pods
~]# kubectl describe service nginx
Name: nginx
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx # 2. any Pod carrying this label is immediately added to the pool
Type: ClusterIP
IP: 10.97.222.101 # 3. Service (ClusterIP) address
Port: <unset> 80/TCP # 3. Service port
TargetPort: 80/TCP # 4. target port on the Pod
Endpoints: 10.244.2.3:80 # 1. this address changes immediately if the Pod is deleted
Session Affinity: None
Events: <none>
# Check the labels: even if a labeled Pod is deleted, any newly created Pod carrying the label joins the Service pool immediately
~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-6db489d4b7-qqhqs 1/1 Running 0 14m pod-template-hash=6db489d4b7,run=nginx
kubectl run
# Create five Pods under a controller named iapp
~]# kubectl run iapp --image=ikubernetes/myapp:v1 --replicas=5
~]# kubectl get pods -o wide # make sure all of them are Running
NAME READY STATUS RESTARTS AGE IP NODE
iapp-7665f799d7-5mwrx 1/1 Running 0 71s 10.244.2.9 node2
iapp-7665f799d7-gmvm9 1/1 Running 0 71s 10.244.1.9 node1
iapp-7665f799d7-lr44m 1/1 Running 0 71s 10.244.2.8 node2
iapp-7665f799d7-q4fck 1/1 Running 0 71s 10.244.1.8 node1
iapp-7665f799d7-vhttb 1/1 Running 0 71s 10.244.1.10 node1
kubectl expose
~]# kubectl expose deployment iapp --name=iapp --port=80 --target-port=80
service/iapp exposed
~]# kubectl get svc # make sure the Service endpoint is up
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
iapp ClusterIP 10.102.128.15 <none> 80/TCP 12s
~]# kubectl describe svc iapp
Name: iapp
Namespace: default
Labels: run=iapp
Annotations: <none>
Selector: run=iapp # selects Pods labeled run=iapp
Type: ClusterIP
IP: 10.102.128.15
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.10:80,10.244.1.8:80,10.244.1.9:80 + 2 more...
# wget iapp # still from the busybox client; first check that the name resolves
Connecting to iapp (10.102.128.15:80)
# Requests to the iapp Service are load-balanced round-robin by the iptables/ipvs rules
/ # while true;do wget -O - -q http://iapp/hostname.html; sleep 1; done
iapp-7665f799d7-q4fck
iapp-7665f799d7-5mwrx
iapp-7665f799d7-lr44m
iapp-7665f799d7-gmvm9
kubectl scale
# --current-replicas (how many Pods exist right now) can be omitted; just set the new count
# If the deployment named mysql's current size is 2, scale mysql to 3.
# kubectl scale --current-replicas=2 --replicas=3 deployment/mysql
~]# kubectl scale --replicas=3 deployment iapp
deployment.apps/iapp scaled # list the Pods again and only three remain
Scale back up: kubectl scale --replicas=5 deployment iapp
Upgrading a version; key commands: change the image with kubectl set image, watch the status with kubectl rollout
# 1. Look at one Pod's detailed information
~]# kubectl describe pods iapp-7665f799d7-gmvm9
Name: iapp-7665f799d7-gmvm9
Namespace: default
Priority: 0
Node: node1/192.168.9.31
Start Time: Thu, 28 Nov 2019 16:49:18 +0800
Labels: pod-template-hash=7665f799d7
run=iapp
Annotations: <none>
Status: Running
IP: 10.244.1.9
IPs:
IP: 10.244.1.9
Containers:
iapp:
Container ID: docker://idididididd
Image: ikubernetes/myapp:v1 # 2. the current version is v1
# Syntax: set image <controller-kind> <controller-name> <container-name>=<image>; the second iapp is the container name from Containers above
~]# kubectl set image deployment iapp iapp=ikubernetes/myapp:v2
deployment.apps/iapp image updated
# Watch the status of the rollout in progress
~]# kubectl rollout status deployment iapp
Waiting for deployment "iapp" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "iapp" rollout to finish: 4 of 5 updated replicas are available...
deployment "iapp" successfully rolled out
# Check the Pods again: all the names have been replaced
~]# kubectl describe pods iapp-7bf86c5d78-mj7hq
Node: node2/192.168.9.32
Labels: pod-template-hash=7bf86c5d78
run=iapp
Containers:
iapp:
Container ID: docker://ididididididid
Image: ikubernetes/myapp:v2
Rollback
Roll back by setting the image back: kubectl set image deployment iapp iapp=ikubernetes/myapp:v1
Undo the last rollout
~]# kubectl rollout undo deployment iapp
deployment.apps/iapp rolled back
View the rollout history: kubectl rollout history deployment iapp
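A hedged sketch of rolling back to a specific revision from that history (revision 1 is just an example number):
~]# kubectl rollout history deployment iapp --revision=1   # inspect what revision 1 contained
~]# kubectl rollout undo deployment iapp --to-revision=1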
Change the Service type (endpoint)
~]# kubectl edit svc iapp
service/iapp edited
# Change the type to: type: NodePort
# Check the Service
~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
iapp NodePort 10.102.128.15 <none> 80:30219/TCP 24m
# With type NodePort the Service is reachable from outside the cluster through a port on every node
# http://192.168.9.30:30219/  (any cluster node address, port 30219)
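A hedged non-interactive alternative to kubectl edit for the same change:
~]# kubectl patch svc iapp -p '{"spec":{"type":"NodePort"}}'
~]# kubectl get svc iapp    # PORT(S) now shows something like 80:3xxxx/TCP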
The API Server is central to Kubernetes: every operation you perform must go through it, and all other components, including the Controller Manager and the Scheduler, interact only with the API Server. The API Server is also the only component allowed to operate on the data in etcd; kubelet and kube-proxy talk to the API Server as well, they are its clients, and they must authenticate to it before they can do anything. Because the API Server is etcd's sole client, only it can manage or operate the etcd cluster. etcd is therefore called the cluster store: the state definitions of every resource in the cluster live in etcd, so if etcd is lost, the cluster's entire recorded state is gone.
As operators we should back up the data in etcd regularly and run etcd in a highly available configuration. The API Server itself is the gateway to the whole cluster; we talk to it with the kubectl command-line client, and later a dashboard offers a graphical interface. The Kubernetes API is an HTTP API that uses JSON by default; even when you submit configuration as YAML it is converted to JSON, because JSON is the only accepted serialization format.
The API Server exposes a RESTful API; REST stands for REpresentational State Transfer, an architectural style that lets distributed components call one another.
However many different resource types Kubernetes has, each of them ultimately lives under some API version of the API Server. If everything hung off a single version, a change to, say, Service would force every version of the whole API to change, so Kubernetes splits its API into groups, each group being a set of related types (API Groups). The groups can be listed with kubectl api-versions; v1 is the core group, and multiple versions can coexist. Every resource is therefore a set of attribute assignments under some group, and the same group can carry different attributes in different versions, so a resource defined under v1 will not necessarily work under v1beta1.
Running kubectl get deployments.apps myweb -o json shows that the Deployment's apiVersion is apps/v1.
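A hedged sketch of checking the groups and a resource's group/version (myweb is the example Deployment name from the line above; the exact group list depends on the cluster version):
~]# kubectl api-versions      # v1 is the core group; apps/v1, batch/v1, ... are named groups
~]# kubectl get deployments.apps myweb -o jsonpath='{.apiVersion}'
apps/v1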