I have a bunch of pods in Kubernetes which have completed (successfully or unsuccessfully) and I'd like to clean up the output of kubectl get pods. Here's what I see when I run kubectl get pods:
NAME                                           READY   STATUS             RESTARTS   AGE
intent-insights-aws-org-73-ingest-391c9384     0/1     ImagePullBackOff   0          8d
intent-postgres-f6dfcddcc-5qwl7                1/1     Running            0          23h
redis-scheduler-dev-master-0                   1/1     Running            0          10h
redis-scheduler-dev-metrics-85b45bbcc7-ch24g   1/1     Running            0          6d
redis-scheduler-dev-slave-74c7cbb557-dmvfg     1/1     Running            0          10h
redis-scheduler-dev-slave-74c7cbb557-jhqwx     1/1     Running            0          5d
scheduler-5f48b845b6-d5p4s                     2/2     Running            0          36m
snapshot-169-5af87b54                          0/1     Completed          0          20m
snapshot-169-8705f77c                          0/1     Completed          0          1h
snapshot-169-be6f4774                          0/1     Completed          0          1h
snapshot-169-ce9a8946                          0/1     Completed          0          1h
snapshot-169-d3099b06                          0/1     ImagePullBackOff   0          24m
snapshot-204-50714c88                          0/1     Completed          0          21m
snapshot-204-7c86df5a                          0/1     Completed          0          1h
snapshot-204-87f35e36                          0/1     ImagePullBackOff   0          26m
snapshot-204-b3a4c292                          0/1     Completed          0          1h
snapshot-204-c3d90db6                          0/1     Completed          0          1h
snapshot-245-3c9a7226                          0/1     ImagePullBackOff   0          28m
snapshot-245-45a907a0                          0/1     Completed          0          21m
snapshot-245-71911b06                          0/1     Completed          0          1h
snapshot-245-a8f5dd5e                          0/1     Completed          0          1h
snapshot-245-b9132236                          0/1     Completed          0          1h
snapshot-76-1e515338                           0/1     Completed          0          22m
snapshot-76-4a7d9a30                           0/1     Completed          0          1h
snapshot-76-9e168c9e                           0/1     Completed          0          1h
snapshot-76-ae510372                           0/1     Completed          0          1h
snapshot-76-f166eb18                           0/1     ImagePullBackOff   0          30m
train-169-65f88cec                             0/1     Error              0          20m
train-169-9c92f72a                             0/1     Error              0          1h
train-169-c935fc84                             0/1     Error              0          1h
train-169-d9593f80                             0/1     Error              0          1h
train-204-70729e42                             0/1     Error              0          20m
train-204-9203be3e                             0/1     Error              0          1h
train-204-d3f2337c                             0/1     Error              0          1h
train-204-e41a3e88                             0/1     Error              0          1h
train-245-7b65d1f2                             0/1     Error              0          19m
train-245-a7510d5a                             0/1     Error              0          1h
train-245-debf763e                             0/1     Error              0          1h
train-245-eec1908e                             0/1     Error              0          1h
train-76-86381784                              0/1     Completed          0          19m
train-76-b1fdc202                              0/1     Error              0          1h
train-76-e972af06                              0/1     Error              0          1h
train-76-f993c8d8                              0/1     Completed          0          1h
webserver-7fc9c69f4d-mnrjj                     2/2     Running            0          36m
worker-6997bf76bd-kvjx4                        2/2     Running            0          25m
worker-6997bf76bd-prxbg                        2/2     Running            0          36m
and I'd like to get rid of pods like train-204-d3f2337c. How can I do that?
Best Answer
You can do this a bit more easily now.
You can list all completed pods with:
kubectl get pod --field-selector=status.phase==Succeeded
Delete all completed pods with:
kubectl delete pod --field-selector=status.phase==Succeeded
And delete all errored pods with:
kubectl delete pod --field-selector=status.phase==Failed
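If you want to double-check what a selector matches before deleting anything, a dry run works too (a minimal sketch; --dry-run=client needs a reasonably recent kubectl, older versions used --dry-run=true):
# preview which pods the selector matches (nothing gets deleted)
kubectl get pods --field-selector=status.phase==Succeeded
# show what the delete would remove, without actually removing it
kubectl delete pods --field-selector=status.phase==Succeeded --dry-run=client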
If these pods were created by a CronJob, you can use spec.failedJobsHistoryLimit and spec.successfulJobsHistoryLimit.
Example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cron-job
spec:
  schedule: "*/10 * * * *"
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        ...
You can do it in two ways.
$ kubectl delete pod $(kubectl get pods | grep Completed | awk '{print $1}')
or
$ kubectl get pods | grep Completed | awk '{print $1}' | xargs kubectl delete pod
Both solutions will do the job.
If you would like to delete all pods that are not Running, it can be done with one command. First, list them:
kubectl get pods --field-selector=status.phase!=Running
And the command updated to delete those pods:
kubectl delete pods --field-selector=status.phase!=Running
As previous answers mentioned, you can use the command:
kubectl delete pod --field-selector=status.phase=={{phase}}
to delete pods in a certain "phase". What's still missing is a quick summary of which phases exist, so the valid values for a "pod phase" are:
Pending, Running, Succeeded, Failed, Unknown
And in this specific case, to delete the "error" pods:
kubectl delete pod --field-selector=status.phase==Failed
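If you're not sure which phase a given pod is actually in, you can query it directly before deleting (a small sketch, using one of the pod names from the question):
kubectl get pod train-204-d3f2337c -o jsonpath='{.status.phase}'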
Here's a one-liner which will delete all pods which aren't in the Running or Pending state (note that if a pod name has Running or Pending in it, it won't ever get deleted with this one-liner):
kubectl get pods --no-headers=true |grep -v "Running" | grep -v "Pending" | sed -E 's/([a-z0-9-]+).*/\1/g' | xargs kubectl delete pod
Here's an explanation:
- get all pods without any of the headers
- filter out pods which are Running
- filter out pods which are Pending
- pull out the name of the pod using a sed regex
- use xargs to delete each of the pods by name
Note, this doesn't account for all pod states. For example, if a pod is in the ContainerCreating state, this one-liner will delete that pod too.
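If that caveat worries you, a rough field-selector equivalent avoids matching on pod names altogether (a sketch; it filters on the pod phase rather than the STATUS column, so for example ContainerCreating pods, whose phase is Pending, are left alone):
kubectl delete pods --field-selector=status.phase!=Running,status.phase!=Pending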
Here you go:
kubectl get pods --all-namespaces |grep -i completed|awk '{print "kubectl delete pod "$2" -n "$1}'|bash
You can replace completed with CrashLoopBackOff or any other state...
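If you'd rather not pipe generated commands straight into bash, roughly the same thing can be done with xargs (a sketch; -L1 runs one delete per matching pod, passing the namespace along with -n):
kubectl get pods --all-namespaces --no-headers | grep -i completed | awk '{print $2 " -n " $1}' | xargs -L1 kubectl delete pod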
I think pjincz handled your question well regarding deleting the completed pods manually.
However, I popped in here to introduce a newer feature of Kubernetes, which may remove finished pods automatically on your behalf. You just need to define a time to live (ttlSecondsAfterFinished) to auto-clean up finished Jobs, like below:
apiVersion: batch/v1
kind: Job
metadata:
  name: remove-after-ttl
spec:
  ttlSecondsAfterFinished: 86400
  template:
    ...
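For reference, a fuller sketch of such a manifest might look like this (the busybox container is just a placeholder for whatever your Job actually runs):
apiVersion: batch/v1
kind: Job
metadata:
  name: remove-after-ttl
spec:
  ttlSecondsAfterFinished: 86400   # delete the Job and its pods 24h after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox           # placeholder workload
          command: ["sh", "-c", "echo done"]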
Here is a single command to delete all failed pods (Error, terminated, etc.) across all namespaces:
kubectl delete pods --field-selector status.phase=Failed -A --ignore-not-found=true
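Note that status.phase=Failed doesn't cover Completed pods, since those sit in the Succeeded phase; if you want those cleaned up as well, a second pass with the other phase should do it (a sketch of the companion command):
kubectl delete pods --field-selector status.phase=Succeeded -A --ignore-not-found=true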
If you are using preemptible GKE nodes, you often see those pods hanging around.
Here is an automated solution I set up for cleanup: https://stackoverflow.com/a/72872547/4185100
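One common way to automate this kind of cleanup in-cluster (just a sketch of the general idea, not necessarily what the linked answer does) is a CronJob that runs the delete command on a schedule, using a ServiceAccount that is only allowed to list and delete pods:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-cleaner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-cleaner
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-cleaner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-cleaner
subjects:
  - kind: ServiceAccount
    name: pod-cleaner
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pod-cleaner
spec:
  schedule: "0 * * * *"            # hourly
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl   # any image with kubectl in it works
              command:
                - sh
                - -c
                - kubectl delete pods --field-selector=status.phase=Failed --ignore-not-found=true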