Rolling-Replacing AWS Kubernetes Nodes Without App Errors or Downtime

We do a few things to keep the apps on our clusters from crashing during rolling instance replacements, which keeps the overall noise and deployment errors low. I just wanted to write them up, since some of them are not very intuitive and the combination of all of them makes for really safe cluster updates.

  • Set up a PodDisruptionBudget for every deployment so it stays available no matter which combination of nodes we take down (minimal example after this list)
  • When the AWS AutoScalingGroup wants to update, a lifecycle hook sends a notification to SQS before it takes an instance down; we drain the node and then signal the hook to continue (a separate little app we built, but very simple overall; hook wiring sketched below)
  • When a new instance boots, we taint it so no application pod lands on a node that is not fully set up yet (infrastructure pods with matching tolerations, like the CNI, also come up quicker since the node only has a few pods on it to begin with; taint/toleration sketch below)
  • The AWS RollingUpdate waits for the “all good” signal that we send once the node is finally ready, and only then do we untaint it (a nice side effect is that nodes which never become ready stall the RollingUpdate and cause a rollback; UpdatePolicy sketch below)
  • We determine readiness with custom plugins in node-problem-detector, which has a few issues but overall works alright (plugin config sketch below)
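
To make those bullets a bit more concrete, here are some sketches; none of this is our exact config, just the shape of it. First, a minimal PodDisruptionBudget, assuming a hypothetical deployment labeled app: my-app (the name and minAvailable value are placeholders). While evicting a pod would drop the app below minAvailable, the eviction API refuses, so kubectl drain on a node simply waits until the pods have come up somewhere else:

```yaml
# Hypothetical PDB: never evict the last available replica of "my-app".
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
```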
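
For the termination hook, this is roughly the CloudFormation wiring, with the resource names, role, and timeout as assumptions: the hook parks a terminating instance in Terminating:Wait and pushes a notification onto an SQS queue. Our little drainer app reads the message, runs kubectl drain against the matching node (the PDBs above make that safe), and then calls CompleteLifecycleAction with LifecycleActionResult=CONTINUE so the AutoScalingGroup can finish the termination.

```yaml
# Hypothetical lifecycle hook: hold terminating instances until the drainer says CONTINUE.
NodeDrainHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref NodeGroup
    LifecycleTransition: "autoscaling:EC2_INSTANCE_TERMINATING"
    NotificationTargetARN: !GetAtt DrainQueue.Arn   # SQS queue the drainer polls
    RoleARN: !GetAtt DrainHookRole.Arn              # lets autoscaling publish to that queue
    HeartbeatTimeout: 900                           # seconds before DefaultResult kicks in
    DefaultResult: CONTINUE                         # fail open if the drainer is down
```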
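
For the boot-time taint, one way to do it is to have the kubelet register the node with the taint already in place (newer kubelets support this in the KubeletConfiguration, older ones via the --register-with-taints flag) and to give infrastructure DaemonSets like the CNI a matching toleration. The taint key here is made up:

```yaml
# Excerpt from a KubeletConfiguration: register the node pre-tainted.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
registerWithTaints:
  - key: "example.com/node-initializing"
    value: "true"
    effect: "NoSchedule"
---
# Matching toleration on infrastructure DaemonSets (e.g. the CNI)
# so they can start while the taint is still present.
tolerations:
  - key: "example.com/node-initializing"
    operator: "Exists"
    effect: "NoSchedule"
```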
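
On the rolling-update side, again a CloudFormation sketch with placeholder sizes: WaitOnResourceSignals makes the AutoScalingGroup wait for a success signal per replaced instance before moving on. We send that signal (e.g. with cfn-signal --success true --stack <stack> --resource NodeGroup) once the node is ready, and the same step removes the boot taint with kubectl taint nodes <node> example.com/node-initializing-. A node that never signals within PauseTime fails the update and CloudFormation rolls the group back.

```yaml
# Hypothetical ASG excerpt: replace one node at a time and wait for our "all good" signal.
NodeGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MaxBatchSize: 1                  # replace one instance at a time
      MinInstancesInService: 3         # placeholder
      PauseTime: PT15M                 # how long to wait for the signal
      WaitOnResourceSignals: true      # unsignalled nodes fail the update -> rollback
  Properties:
    MinSize: "3"
    MaxSize: "4"
    # launch template, subnets, etc. omitted
```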
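
Finally, the readiness checks: node-problem-detector's custom plugin monitor takes a JSON config pointing at check scripts that exit 0 for OK and non-zero for a problem, and reports the result as a condition on the Node. The condition names, script path, and intervals below are invented for illustration, not our actual checks; the helper that signals and untaints can then simply wait for the problem condition to be False.

```json
{
  "plugin": "custom",
  "pluginConfig": {
    "invoke_interval": "30s",
    "timeout": "10s"
  },
  "source": "node-setup-plugin-monitor",
  "conditions": [
    {
      "type": "NodeSetupProblem",
      "reason": "NodeSetupComplete",
      "message": "all boot-time checks passed"
    }
  ],
  "rules": [
    {
      "type": "permanent",
      "condition": "NodeSetupProblem",
      "reason": "NodeSetupIncomplete",
      "path": "/etc/node-problem-detector/plugin/check-node-setup.sh",
      "timeout": "5s"
    }
  ]
}
```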
