65 What to do when a K8s namespace is stuck in the Terminating state

Table of Contents

        • I. What to do when a K8s namespace is stuck in the Terminating state?
          • 1. Method 1
          • 2. Method 2
          • 3. Method 3
          • 4. Method 4
            • 1. Running the command:
            • 2. Verifying:

I. What to do when a K8s namespace is stuck in the Terminating state?

Our company currently runs Kubernetes v1.18.2. The convention is that namespaces are named after their projects, so a non-conforming namespace had to be renamed. Since it was a small project, I made the change during the change window in the bluntest way possible:

[root@k8s-master1 ~]# kubectl delete namespace gf-prod
[root@k8s-master1 ~]# kubectl get ns
NAME                          STATUS        AGE
aammini                       Active        353d
cattle-impersonation-system   Active        150d
cattle-system                 Active        150d
default                       Active        2y10d
gf-prod                       Terminating   62d    <-- abnormal state
1. Method 1
# First approach: a plain delete
[root@master01 ~]#  kubectl delete namespace <terminating-namespace>
# Second approach: a forced delete
First, try forcing the deletion with --force --grace-period=0:
[root@master01 ~]#  kubectl delete namespace <terminating-namespace> --force --grace-period=0
In some cases, however, the namespace stays stuck in Terminating even though it no longer contains any running resources, and even --force --grace-period=0 cannot remove it.
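Before reaching for the heavier methods below, it helps to see what is actually blocking the deletion. The two commands that follow are a quick diagnostic sketch (they are not part of the original steps): the first prints the namespace's deletion conditions, the second lists any namespaced objects still left inside it.

# Show why the namespace is still terminating (inspect status.conditions)
kubectl get namespace <terminating-namespace> -o jsonpath='{.status.conditions}'
# Enumerate every namespaced resource type and print whatever still exists in the namespace
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <terminating-namespace>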
2. Method 2
# The patch approach: clear the namespace's finalizers
[root@master01 ~]#  kubectl patch ns/<terminating-namespace> -p '{"metadata":{"finalizers":[]}}' --type=merge
The same works for other resource types, for example CRDs:
kubectl patch crd/<crd-name> -p '{"metadata":{"finalizers":[]}}' --type=merge
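If objects inside the namespace are the ones still holding finalizers, they can be cleared in bulk. The loop below is only a sketch and is not from the original article; <crd-plural> stands for the plural name of whichever stuck resource type is still present.

# Clear the finalizers on every object of one stuck resource type in the namespace
for obj in $(kubectl get <crd-plural> -n <terminating-namespace> -o name); do
  kubectl patch "$obj" -n <terminating-namespace> -p '{"metadata":{"finalizers":[]}}' --type=merge
done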
Alternatively, the export-to-JSON approach:
# Export the namespace definition to a JSON file
kubectl get namespace <terminating-namespace> -o json >tmp.json
# Edit this JSON file and remove the "kubernetes" value from the finalizers field; after editing, that part should look as shown below.
# Before editing:
"spec": {
    "finalizers": [
        "kubernetes"
    ]
},
# After editing:

"spec": {
    "finalizers": [
     ]
},
# Start a temporary proxy
[root@master01 ~]# kubectl proxy
Starting to serve on 127.0.0.1:8001
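If you want to keep this shell free for the curl call in the next step, the proxy can also be run in the background (a small convenience, not part of the original steps):

[root@master01 ~]# kubectl proxy --port=8001 &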
# PUT the JSON file to the finalize API to apply the change
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/<terminating-namespace>/finalize
# Important: also delete the contents of the spec and status sections, as well as the trailing "," after the metadata block!

  "spec": {
    
  },
At this point the namespace that was stuck in Terminating should have been deleted. Verify with: kubectl get namespaces
3. Method 3
# 1. Edit etcd directly (back up etcd before deleting anything)
Delete the terminating namespace:

[root@master01 ~]# etcdctl del /registry/namespaces/<terminating-namespace>
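On most clusters etcdctl also needs the v3 API plus the etcd endpoint and client certificates. The fuller invocation below is only a sketch assuming a kubeadm-style layout with the certificates under /etc/kubernetes/pki/etcd; adjust the paths for your own installation.

# Delete the namespace key directly from etcd (make an etcd backup first)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  del /registry/namespaces/<terminating-namespace>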
4. Method 4

(1) I tried the forced delete kubectl delete namespace [namespace] --force --grace-period=0 and deleting through the API, but neither worked well; I will look into it in detail when I have time.

(2) So I went straight to the expert at our company, who helps me out a lot at work. I feel I owe him quite a few favors and will definitely repay them when I get the chance. (That is honestly how I feel, so please go easy on me!)

# The way I deleted it in the end; use it as-is, it has always worked for me
# Replace kubesphere-monitoring-federated with the real namespace
[root@master01 ~]# kubectl get namespace kubesphere-monitoring-federated -o json \
            | tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" \
            | kubectl replace --raw /api/v1/namespaces/kubesphere-monitoring-federated/finalize -f -
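An equivalent variant that avoids the tr/sed text surgery is to empty spec.finalizers with jq instead; this is just a sketch and assumes jq is installed on the host.

kubectl get namespace kubesphere-monitoring-federated -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw /api/v1/namespaces/kubesphere-monitoring-federated/finalize -f -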
1. Running the command:
[root@master01 ~]# kubectl get namespace kubesphere-monitoring-federated -o json \
>             | tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" \
>             | kubectl replace --raw /api/v1/namespaces/kubesphere-monitoring-federated/finalize -f -
{"kind":"Namespace","apiVersion":"v1","metadata":{"name":"kubesphere-monitoring-federated","uid":"d110c54a-14bf-4a6e-9875-01f889650006","resourceVersion":"54597","creationTimestamp":"2023-03-19T14:24:13Z","deletionTimestamp":"2023-03-19T15:37:03Z","deletionGracePeriodSeconds":0,"labels":{"kubernetes.io/metadata.name":"kubesphere-monitoring-federated","kubesphere.io/namespace":"kubesphere-monitoring-federated","kubesphere.io/workspace":"system-workspace"},"ownerReferences":[{"apiVersion":"tenant.kubesphere.io/v1alpha1","kind":"Workspace","name":"system-workspace","uid":"d2599c3e-37dc-48b5-a61d-5d37801a483f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kubectl-create","operation":"Update","apiVersion":"v1","time":"2023-03-19T14:24:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:kubernetes.io/metadata.name":{}}}}},{"manager":"kubectl-label","operation":"Update","apiVersion":"v1","time":"2023-03-19T14:24:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{"f:kubesphere.io/namespace":{},"f:kubesphere.io/workspace":{}}}}},{"manager":"controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-19T14:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"d2599c3e-37dc-48b5-a61d-5d37801a483f\"}":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-19T15:37:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"NamespaceContentRemaining\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceDeletionContentFailure\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceDeletionDiscoveryFailure\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceDeletionGroupVersionParsingFailure\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceFinalizersRemaining\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"}]},"spec":{},"status":{"phase":"Terminating","conditions":[{"type":"NamespaceDeletionDiscoveryFailure","status":"False","lastTransitionTime":"2023-03-19T15:37:09Z","reason":"ResourcesDiscovered","message":"All resources successfully discovered"},{"type":"NamespaceDeletionGroupVersionParsingFailure","status":"False","lastTransitionTime":"2023-03-19T15:37:09Z","reason":"ParsedGroupVersions","message":"All legacy kube types successfully parsed"},{"type":"NamespaceDeletionContentFailure","status":"False","lastTransitionTime":"2023-03-19T15:37:09Z","reason":"ContentDeleted","message":"All content successfully deleted, may be waiting on finalization"},{"type":"NamespaceContentRemaining","status":"False","lastTransitionTime":"2023-03-19T15:37:09Z","reason":"ContentRemoved","message":"All content successfully removed"},{"type":"NamespaceFinalizersRemaining","status":"False","lastTransitionTime":"2023-03-19T15:37:09Z","reason":"ContentHasNoFinalizers","message":"All content-preserving finalizers finished"}]}}
2. Verifying:
[root@master01 ~]# kubectl  get ns
NAME                   STATUS   AGE
default                Active   29h
dev                    Active   44m
kube-flannel           Active   29h
kube-node-lease        Active   29h
kube-public            Active   29h
kube-system            Active   29h
kubernetes-dashboard   Active   29h
