Use the following command to display information about the current state of the Kubernetes cluster, including the API server address and cluster status:
[root@k8s-master ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.2.101:6443
CoreDNS is running at https://192.168.2.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Use the following command to display the version of kubectl installed on the system, as well as the version of the Kubernetes cluster it connects to:
[root@k8s-master ~]# kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.4", GitCommit:"fa3d7990104d7c1f16943a67f11b154b71f6a132", GitTreeState:"clean", BuildDate:"2023-07-19T12:20:54Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.4", GitCommit:"fa3d7990104d7c1f16943a67f11b154b71f6a132", GitTreeState:"clean", BuildDate:"2023-07-19T12:14:49Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}
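As the warning above indicates, the structured output formats can be requested explicitly; for example:
kubectl version --output=yaml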
The kubectl get command lists the resources available in the Kubernetes cluster; its most common usages are shown below.
Use the following command to view all resources in the current default namespace:
[root@k8s-master ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   39h
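To list resources in all namespaces rather than only default, the -A flag (short for --all-namespaces) can be added; for example:
kubectl get all -A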
Use the following command to view the deployment resources in the current default namespace:
[root@k8s-master ~]# kubectl get deployment
No resources found in default namespace.
To target a specific namespace, use the -n flag (the short form of --namespace):
[root@k8s-master ~]# kubectl get deployments -n kube-system
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers   1/1     1            1           36h
coredns                   2/2     2            2           39h
To view more details about the listed resources, add the -o wide output option:
[root@k8s-master ~]# kubectl get deployments -n kube-system -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
calico-kube-controllers 1/1 1 1 36h calico-kube-controllers docker.io/calico/kube-controllers:v3.25.0 k8s-app=calico-kube-controllers
coredns 2/2 2 2 39h coredns registry.aliyuncs.com/google_containers/coredns:v1.10.1 k8s-app=kube-dns
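To see the complete definition of a single resource rather than a column summary, the -o yaml (or -o json) option prints its full manifest; for example:
kubectl get deployment coredns -n kube-system -o yaml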
The kubectl create command is used to add new resources to the cluster. It can be used to create resources such as Pods, Services, and Deployments.
The following command creates a new deployment named my-nginx using the nginx image:
[root@k8s-master ~]# kubectl create deployment my-nginx --image=nginx
deployment.apps/my-nginx created
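You can then confirm that the deployment exists and check its rollout with, for example:
kubectl get deployment my-nginx
kubectl rollout status deployment my-nginx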
Another example: the following command creates a new cronjob:
kubectl create cronjob my-cronjob --schedule="*/5 * * * *" --image=busybox -- echo "This is a cron job!"
Command parameters:
--schedule  the job schedule, in Cron syntax
--image     the container image to run
-- [COMMAND] [args...]  the command (and arguments) the container runs, supplied after the -- separator
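Once created, the cronjob and the jobs it spawns can be checked with, for example:
kubectl get cronjobs
kubectl get jobs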
The kubectl edit command lets you edit existing resource objects in the cluster. With kubectl edit you modify a resource's configuration directly, so there is no need to generate a new YAML file by hand. The following command edits the configuration of the deployment named my-nginx:
[root@k8s-master ~]# kubectl edit deployments my-nginx
Edit cancelled, no changes made.
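kubectl edit opens the resource in the editor defined by the KUBE_EDITOR (or EDITOR) environment variable; to use a different editor for a single invocation (assuming nano is installed), for example:
KUBE_EDITOR="nano" kubectl edit deployments my-nginx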
The kubectl delete command removes resources from the Kubernetes cluster, such as pods, deployments, services, and cronjobs. Use the following command to delete the deployment named my-nginx:
[root@k8s-master ~]# kubectl delete deployments.apps my-nginx
deployment.apps "my-nginx" deleted
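kubectl delete can also target other resource types or delete everything defined in a manifest file; for example (my-pod and my-service are illustrative names):
kubectl delete pod my-pod
kubectl delete service my-service
kubectl delete -f deployment.yaml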
The kubectl apply command lets you create or modify resources in the cluster from a YAML manifest file:
kubectl apply -f deployment.yaml
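The contents of deployment.yaml are not shown in this example; a minimal manifest recreating the my-nginx deployment used earlier might look like the following (the replica count and the app label are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Unlike kubectl create, running kubectl apply again with a modified file updates the existing resource instead of failing because it already exists.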
In Kubernetes, the kubectl config command lets you manage the configuration of the kubectl client. It can be used to view or edit the configuration, switch between multiple cluster configurations, and manage user credentials and context settings:
kubectl config set-context --current --namespace=NAMESPACE
kubectl config set-context is a command that modifies a context in the kubectl configuration. A context defines the cluster, user, and namespace that kubectl commands operate against. In this example, the command sets the namespace of the current context to "NAMESPACE".
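Other commonly used config subcommands show the merged configuration, list the available contexts, and switch between them; for example (context-name is a placeholder):
kubectl config view
kubectl config get-contexts
kubectl config use-context <context-name>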
kubectl describe provides a quick way to obtain comprehensive information about a resource, making it easier to understand its current state and spot problems. It shows detailed information about the resource's status, events, and metadata. Use the following command to view the details of the pod named calico-kube-controllers-6c99c8747f-5z2wb:
[root@k8s-master ~]# kubectl describe -n kube-system pod calico-kube-controllers-6c99c8747f-5z2wb
Name: calico-kube-controllers-6c99c8747f-5z2wb
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Service Account: calico-kube-controllers
Node: k8s-master/192.168.2.101
Start Time: Sat, 22 Jul 2023 20:57:05 +0800
Labels: k8s-app=calico-kube-controllers
pod-template-hash=6c99c8747f
Annotations: cni.projectcalico.org/containerID: 442ddbb1e52ea2b507153ef1a4e35f7d93a22096080da733d8edd429c9a95418
cni.projectcalico.org/podIP: 10.244.235.193/32
cni.projectcalico.org/podIPs: 10.244.235.193/32
Status: Running
IP: 10.244.235.193
IPs:
IP: 10.244.235.193
Controlled By: ReplicaSet/calico-kube-controllers-6c99c8747f
Containers:
calico-kube-controllers:
Container ID: containerd://406b794c506a3c8785eb54f003a4e957e6b44b6b7a79700fa600189adc3805b7
Image: docker.io/calico/kube-controllers:v3.25.0
Image ID: docker.io/calico/kube-controllers@sha256:c45af3a9692d87a527451cf544557138fedf86f92b6e39bf2003e2fdb848dce3
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 24 Jul 2023 09:34:23 +0800
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Mon, 24 Jul 2023 09:32:46 +0800
Finished: Mon, 24 Jul 2023 09:32:57 +0800
Ready: True
Restart Count: 6
Liveness: exec [/usr/bin/check-status -l] delay=10s timeout=10s period=10s #success=1 #failure=6
Readiness: exec [/usr/bin/check-status -r] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
ENABLED_CONTROLLERS: node
DATASTORE_TYPE: kubernetes
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x4h8d (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-x4h8d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/control-plane:NoSchedule
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 31m (x2 over 33h) kubelet Readiness probe failed: Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded
Warning Unhealthy 31m (x2 over 33h) kubelet Liveness probe failed: Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded
Warning Unhealthy 30m (x2 over 30m) kubelet Liveness probe failed: Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded; Error reaching apiserver: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded with http status code: 0
Warning Unhealthy 30m (x3 over 30m) kubelet Readiness probe failed: Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded; Error reaching apiserver: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded with http status code: 0
Warning Unhealthy 30m kubelet Readiness probe failed: Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": http2: client connection lost; Error reaching apiserver: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded with http status code: 0
Warning Unhealthy 30m kubelet Liveness probe failed: Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": http2: client connection lost; Error reaching apiserver: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded with http status code: 0
Warning FailedMount 30m kubelet MountVolume.SetUp failed for volume "kube-api-access-x4h8d" : failed to fetch token: Post "https://192.168.2.101:6443/api/v1/namespaces/kube-system/serviceaccounts/calico-kube-controllers/token": http2: client connection lost
Warning Unhealthy 30m kubelet Readiness probe failed: Error reaching apiserver: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host with http status code: 0; Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host
Normal Pulled 30m (x2 over 21h) kubelet Container image "docker.io/calico/kube-controllers:v3.25.0" already present on machine
Normal Killing 30m (x2 over 21h) kubelet Container calico-kube-controllers failed liveness probe, will be restarted
Warning Unhealthy 30m (x6 over 32h) kubelet Readiness probe failed: Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host; Error reaching apiserver: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host with http status code: 0
Warning Unhealthy 30m (x5 over 32h) kubelet Liveness probe failed: Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host; Error reaching apiserver: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host with http status code: 0
Normal Created 30m (x3 over 36h) kubelet Created container calico-kube-controllers
Normal Started 30m (x3 over 36h) kubelet Started container calico-kube-controllers
Warning FailedMount 30m (x5 over 30m) kubelet MountVolume.SetUp failed for volume "kube-api-access-x4h8d" : failed to fetch token: Post "https://192.168.2.101:6443/api/v1/namespaces/kube-system/serviceaccounts/calico-kube-controllers/token": dial tcp 192.168.2.101:6443: connect: network is unreachable
Warning Unhealthy 25m (x2 over 25m) kubelet (combined from similar events): Readiness probe failed: initialized to false
kubectl logs retrieves the logs of a container in a pod and can be used to trace and troubleshoot problems with the container. Run the following command to view the logs of the pod named calico-kube-controllers-6c99c8747f-5z2wb:
[root@k8s-master ~]# kubectl logs -n kube-system calico-kube-controllers-6c99c8747f-5z2wb
2023-07-24 01:34:23.838 [INFO][1] main.go 107: Loaded configuration from environment config=&config.Config{LogLevel:"info", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", DatastoreType:"kubernetes"}
W0724 01:34:23.841873 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2023-07-24 01:34:23.842 [INFO][1] main.go 131: Ensuring Calico datastore is initialized
2023-07-24 01:34:23.853 [INFO][1] main.go 157: Calico datastore is initialized
2023-07-24 01:34:23.854 [INFO][1] main.go 194: Getting initial config snapshot from datastore
2023-07-24 01:34:23.878 [INFO][1] main.go 197: Got initial config snapshot
2023-07-24 01:34:23.879 [INFO][1] watchersyncer.go 89: Start called
2023-07-24 01:34:23.879 [INFO][1] main.go 211: Starting status report routine
2023-07-24 01:34:23.880 [INFO][1] resources.go 350: Main client watcher loop
2023-07-24 01:34:23.880 [INFO][1] main.go 220: Starting Prometheus metrics server on port 9094
2023-07-24 01:34:23.880 [INFO][1] main.go 503: Starting informer informer=&cache.sharedIndexInformer{indexer:(*cache.cache)(0xc000288348), controller:cache.Controller(nil), processor:(*cache.sharedProcessor)(0xc00081a7e0), cacheMutationDetector:cache.dummyMutationDetector{}, listerWatcher:(*cache.ListWatch)(0xc000288330), objectType:(*v1.Pod)(0xc000100c00), resyncCheckPeriod:0, defaultEventHandlerResyncPeriod:0, clock:(*clock.RealClock)(0x3029630), started:false, stopped:false, startedLock:sync.Mutex{state:0, sema:0x0}, blockDeltas:sync.Mutex{state:0, sema:0x0}, watchErrorHandler:(cache.WatchErrorHandler)(nil), transform:(cache.TransformFunc)(nil)}
2023-07-24 01:34:23.880 [INFO][1] main.go 503: Starting informer informer=&cache.sharedIndexInformer{indexer:(*cache.cache)(0xc000288390), controller:cache.Controller(nil), processor:(*cache.sharedProcessor)(0xc00081a850), cacheMutationDetector:cache.dummyMutationDetector{}, listerWatcher:(*cache.ListWatch)(0xc000288378), objectType:(*v1.Node)(0xc000324600), resyncCheckPeriod:0, defaultEventHandlerResyncPeriod:0, clock:(*clock.RealClock)(0x3029630), started:false, stopped:false, startedLock:sync.Mutex{state:0, sema:0x0}, blockDeltas:sync.Mutex{state:0, sema:0x0}, watchErrorHandler:(cache.WatchErrorHandler)(nil), transform:(cache.TransformFunc)(nil)}
2023-07-24 01:34:23.882 [INFO][1] main.go 509: Starting controller ControllerType="Node"
2023-07-24 01:34:23.882 [INFO][1] controller.go 193: Starting Node controller
I0724 01:34:23.882312 1 shared_informer.go:255] Waiting for caches to sync for nodes
2023-07-24 01:34:23.882 [INFO][1] watchersyncer.go 130: Sending status update Status=wait-for-ready
2023-07-24 01:34:23.888 [INFO][1] syncer.go 86: Node controller syncer status updated: wait-for-ready
2023-07-24 01:34:23.888 [INFO][1] watchersyncer.go 149: Starting main event processing loop
2023-07-24 01:34:23.888 [INFO][1] watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/ippools"
2023-07-24 01:34:23.888 [INFO][1] watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/nodes"
2023-07-24 01:34:23.888 [INFO][1] watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/clusterinformations"
2023-07-24 01:34:23.889 [INFO][1] watchercache.go 181: Full resync is required ListRoot="/calico/ipam/v2/assignment/"
2023-07-24 01:34:23.896 [INFO][1] watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/nodes"
2023-07-24 01:34:23.897 [INFO][1] watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/clusterinformations"
2023-07-24 01:34:23.897 [INFO][1] watchersyncer.go 130: Sending status update Status=resync
2023-07-24 01:34:23.897 [INFO][1] syncer.go 86: Node controller syncer status updated: resync
2023-07-24 01:34:23.897 [INFO][1] watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-07-24 01:34:23.897 [INFO][1] watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-07-24 01:34:23.898 [INFO][1] watchercache.go 294: Sending synced update ListRoot="/calico/ipam/v2/assignment/"
2023-07-24 01:34:23.898 [INFO][1] watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-07-24 01:34:23.903 [INFO][1] watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/ippools"
2023-07-24 01:34:23.903 [INFO][1] watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-07-24 01:34:23.903 [INFO][1] watchersyncer.go 221: All watchers have sync'd data - sending data and final sync
2023-07-24 01:34:23.904 [INFO][1] watchersyncer.go 130: Sending status update Status=in-sync
2023-07-24 01:34:23.904 [INFO][1] syncer.go 86: Node controller syncer status updated: in-sync
2023-07-24 01:34:23.912 [INFO][1] hostendpoints.go 173: successfully synced all hostendpoints
I0724 01:34:23.983479 1 shared_informer.go:262] Caches are synced for nodes
I0724 01:34:23.983505 1 shared_informer.go:255] Waiting for caches to sync for pods
I0724 01:34:23.983555 1 shared_informer.go:262] Caches are synced for pods
2023-07-24 01:34:23.983 [INFO][1] ipam.go 253: Will run periodic IPAM sync every 7m30s
2023-07-24 01:34:23.985 [INFO][1] ipam.go 331: Syncer is InSync, kicking sync channel status=in-sync
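To stream new log lines as they are written, or to read the logs of the previous (restarted) container instance, add the -f or --previous flag; for example:
kubectl logs -f -n kube-system calico-kube-controllers-6c99c8747f-5z2wb
kubectl logs --previous -n kube-system calico-kube-controllers-6c99c8747f-5z2wb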
kubectl exec executes a command in a running container of a pod. It is helpful for debugging, troubleshooting, and monitoring the state of an application. Run the following command to open a shell inside the container of the pod calico-kube-controllers-6c99c8747f-5z2wb:
kubectl exec -it -n kube-system calico-kube-controllers-6c99c8747f-5z2wb -- sh
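Instead of an interactive shell, a single command can be run directly (assuming the binary exists in the container image); for example:
kubectl exec -n kube-system calico-kube-controllers-6c99c8747f-5z2wb -- ls /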
kubectl cp lets you copy files and directories between the local filesystem and a container in a pod, or between two containers in the same pod. This is useful for transferring files between the host and a container, or for copying files between containers within a pod:
kubectl cp <local-file-path> <pod-name>:<container-destination-path>
Parameters:
<local-file-path>  the path of the source file on the local filesystem
<pod-name>  the name of the target pod
<container-destination-path>  the destination path inside the container
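For example, the following copies a local file into a pod and then copies it back out (/tmp/example.txt and the paths are illustrative); the -n flag selects the namespace and -c selects a specific container when the pod has more than one:
kubectl cp /tmp/example.txt <pod-name>:/tmp/example.txt
kubectl cp <pod-name>:/tmp/example.txt /tmp/example.txt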