#### Deleting stuck Kubernetes workloads in Rancher v2

##### How to handle workloads that won't delete and are stuck in "waiting on foregroundDeletion."
###### July 4, 2018
rancher kubernetes

I had a large number of workloads that I had deleted with `kubectl delete`. They all became stuck in a state of “Waiting on foregroundDeletion.”

I believe this happened because I had scaled them to 0 replicas a day earlier while testing a different version of the application running inside. Workloads with no replicas would not delete. If I scaled a workload to 1 replica and immediately deleted it, the deletion passed through foregroundDeletion and completed.
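Foreground deletion works through a finalizer: Kubernetes sets `metadata.deletionTimestamp` on the object and keeps it around until the `foregroundDeletion` entry is removed from `metadata.finalizers`, so printing those two fields confirms a workload is stuck. A sketch (the deployment name is hypothetical, and the stub `kubectl` function only simulates the cluster's response so the snippet can be dry-run locally; drop the stub to run the real command):

```shell
# Stub standing in for a real cluster; remove this to use the actual kubectl.
kubectl() {
  # Simulated output for a deployment stuck in foreground deletion:
  # a deletionTimestamp plus the foregroundDeletion finalizer.
  printf '2018-07-04T00:00:00Z ["foregroundDeletion"]\n'
}

# The real command: print deletionTimestamp and finalizers for a stuck deploy.
kubectl -n monitoring get deploy/monitor-1a-0 \
  -o jsonpath='{.metadata.deletionTimestamp} {.metadata.finalizers}'
```

If `deletionTimestamp` is set but the object still exists, the finalizer list is what is holding the deletion open.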

I was able to finalize the deletion of the stuck workloads by patching each deployment to remove the `finalizers` key, which lets the deletion complete immediately instead of waiting on dependent objects (here `monitor-1a` matches the stuck workloads):

```shell
kubectl -n monitoring get deploy | grep monitor-1a | awk '{ print $1 }' \
  | while read -r x; do
      kubectl -n monitoring patch "deploy/$x" -p '{"metadata":{"finalizers":null}}'
    done
```


For the remaining workloads that still had 0 replicas, I scripted the scale-and-delete, and they all deleted fine (here `monitor-1c` matches the workloads that were not stuck but still had 0 replicas):

```shell
kubectl -n monitoring get deploy | grep monitor-1c | awk '{ print $1 }' \
  | while read -r x; do
      kubectl -n monitoring scale "deploy/$x" --replicas=1
      sleep 1
      kubectl -n monitoring delete "deploy/$x"
    done
```
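The same pass can skip the `grep`/`awk` column parsing by asking kubectl for resource names directly with `-o name`. A sketch of that variant (the stub `kubectl` function returns fixture deployment names so the loop can be dry-run without a cluster, and the `sleep` is omitted for the dry run; remove the stub and restore the pause for real use):

```shell
# Stub standing in for a real cluster; remove this to use the actual kubectl.
kubectl() {
  if [ "$3" = "get" ]; then
    # Fixture names standing in for real deployments in the namespace.
    printf 'deployment/monitor-1c-0\ndeployment/monitor-1c-1\n'
  else
    # Echo instead of mutating anything, so the loop is a dry run.
    echo "kubectl $*"
  fi
}

# Scale each matching deployment to 1 replica, then delete it.
kubectl -n monitoring get deploy -o name \
  | grep monitor-1c \
  | while read -r d; do
      kubectl -n monitoring scale "$d" --replicas=1
      kubectl -n monitoring delete "$d"
    done
```

`-o name` emits `deployment/<name>` strings that can be passed straight back to `scale` and `delete`, which avoids surprises if the table layout of `get deploy` ever changes.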