kubectl kill -9

In some situations a pod, or even a whole node, is not available anymore but Kubernetes still thinks it is there. For example, I had a case where a worker node was deleted via the cloud provider API (not via Kubernetes) and Kubernetes did not properly notice the interruption. When I then tried to delete the pods of that node normally, Kubernetes kept waiting for the kubelet to confirm the deletion, which never happened because the kubelet did not exist anymore. Obviously this kind of situation should never happen in the first place, but for some reason it did. And whether it happens due to a bug or due to some obscure scenario where this is the expected behavior… it will probably happen again…
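
If you want to check whether you are in the same spot, the symptom is usually a node that no longer exists but is still listed (often as NotReady), and pods on it that get stuck in Terminating when you delete them normally. Something like this should show it (NODE_NAME is just a placeholder for the dead node):

# the vanished node is typically still listed, usually as NotReady
kubectl get nodes

# pods scheduled on it tend to hang in the Terminating state after a normal delete
kubectl get pods --all-namespaces -o wide | grep NODE_NAME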

In this situation it is necessary to tell Kubernetes “This pod does not exist anymore! It is gone! There will never be a confirmation, it is just gone! Please assume it was properly deleted (and fix the situation by creating it again on a working node)”. So how do we do that? It is surprisingly easy:

kubectl delete pod --grace-period=0 --force --namespace NAMESPACE NAME
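
For example, with a (made-up) pod web-7d4b9 stuck in Terminating in the namespace production, the call would look like this:

kubectl delete pod --grace-period=0 --force --namespace production web-7d4b9

Depending on the kubectl version you will also get a warning that the deletion is immediate and does not wait for any confirmation, which is exactly the point here.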

But this command can be a bit dangerous. In the end it tells Kubernetes to drop the pod object from etcd entirely, which means Kubernetes has no knowledge of it at all afterwards. I mean… that's what I wanted, right? Yes, in this situation it was exactly what I needed, but bad things can happen when the pod returns unexpectedly. For example, if the node with the pod comes back online and the pod's containers are still running on it, they will just keep running there forever: Kubernetes no longer knows about them, so it neither deletes them nor shows them in the output of kubectl get pods or anywhere else. In the best case the orphaned containers just consume a few resources, but in the worst case they can interfere with the rest of the system.
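
If such a node does come back at some point, it is worth logging into it and asking the container runtime directly what is still running, since Kubernetes will not show these orphaned containers anymore. A rough sketch, assuming SSH access to the node and crictl (or plain Docker) available there:

# on the node itself: list the containers the runtime is still running
crictl ps

# or, on a Docker-based node:
docker ps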

So if you delete a pod this way, you better be sure that it’s really gone and will not return!

