Speeding up kubectl with --raw

time kubectl get pods >/dev/null # 12s
time kubectl get --raw "/api/v1/pods?resourceVersion=0" >/dev/null # 2s

Just using --raw is already faster, but telling the API that you want to read from the cache with resourceVersion=0 speeds it up even more and takes load off the API server. (Be careful: reading from the cache and limit= don't mix; the API server ignores limit when resourceVersion=0 is set.)

(Using --raw is similar to using curl directly, but it has the advantage of looking up the API server address and handling authentication for you.)
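
The raw response is plain JSON, so it pipes straight into jq, for example to count pods:

kubectl get --raw "/api/v1/pods?resourceVersion=0" | jq '.items | length'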

OPA Gatekeeper Rego for Istio Port Name convention in Kubernetes

We check that all our Service objects match Istio's port-naming convention (named http or prefixed with http-, for example). For this we developed a policy that rejects any port that doesn't match and allows opting out via namespace labels (see the Constraint sketch after the tests).

package k8svalidistioserviceportname

violation[{"msg": msg}] {
  valid := "^(grpc|http|http2|https|mongo|mysql|redis|tcp|tls|udp)($|-)"
  service := input.review.object
  port := service.spec.ports[_]
  not valid_port(port, valid)

  msg := sprintf(
    "%v %v %v: port name must match %v to be routable by Istio",
    [service.kind, service.metadata.namespace, service.metadata.name, valid]
  )
}

valid_port(port, valid) {
  port.name
  re_match(valid, port.name)
}
The unit tests live in a separate file in the same package (runnable with opa test):

package k8svalidistioserviceportname

test_ignores_exact_match {
  count(violation) == 0 with input as {"review":{"object":{"kind":"Service","metadata":{"name":"truth-service","namespace":"mesh-enabled"},"spec":{"ports":[{"name":"https"}]}}}}
}

test_ignores_prefix_match {
  count(violation) == 0 with input as {"review":{"object":{"kind":"Service","metadata":{"name":"truth-service","namespace":"mesh-enabled"},"spec":{"ports":[{"name":"https-foobar"}]}}}}
}

test_blocks_bad_match {
  count(violation) == 1 with input as {"review":{"object":{"kind":"Service","metadata":{"name":"truth-service","namespace":"mesh-enabled"},"spec":{"ports":[{"name":"httpsfoobar"}]}}}}
}

test_blocks_empty {
  count(violation) == 1 with input as {"review":{"object":{"kind":"Service","metadata":{"name":"truth-service","namespace":"mesh-enabled"},"spec":{"ports":[{}]}}}}
}

test_blocks_multiple_bad {
  count(violation) == 1 with input as {"review":{"object":{"kind":"Service","metadata":{"name":"truth-service","namespace":"mesh-enabled"},"spec":{"ports":[{}, {}]}}}}
}

Simple Kubernetes Leader Election via Entrypoint script

Leader election in Kubernetes is often done via sidecars plus Endpoints or Leases, which is a lot of complexity compared to ConfigMap-based locking (as used by operator-sdk). This approach also avoids having the leader move around during execution.

kube-leader provides a downloadable binary that implements leader election via a Docker ENTRYPOINT. Add it to your Dockerfile, set up the required Kubernetes env vars/permissions, and you are done.
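
A hypothetical Dockerfile sketch of the idea; binary locations and the exact invocation are assumptions, check the kube-leader README for the real ones (the pod also needs RBAC to write the lock ConfigMap, and its name/namespace via the downward API):

FROM alpine:3.12
# paths and invocation are illustrative
COPY kube-leader /usr/local/bin/kube-leader
COPY my-app /usr/local/bin/my-app
# the entrypoint blocks until this pod holds the lock, then starts the wrapped app
ENTRYPOINT ["/usr/local/bin/kube-leader"]
CMD ["/usr/local/bin/my-app"]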

AWS Rolling Replace Kubernetes Nodes without App errors/downtimes

We do a few things to keep the apps on our clusters from crashing during rolling instance replacements, so we can keep the overall noise and deployment errors low. I just wanted to write them up, since some of them are not very intuitive and the combination of all of them makes for really safe cluster updates.

  • Set up a PodDisruptionBudget for every Deployment so it stays available no matter which combination of nodes we take down (a sketch follows this list)
  • When the AWS AutoScalingGroup wants to update, it sends an autoscaling lifecycle hook to SQS before taking an instance down; we drain the instance and then signal the hook to continue (a separate little app we built, but very simple overall)
  • When a new instance boots, we taint it so no pod lands on a broken node (infrastructure pods with tolerations, like the CNI, also come up quicker since the node only has a few pods on it to begin with)
  • The AWS RollingUpdate waits for the "all good" signal that we send when the node is finally ready, and only then do we untaint it (a nice side effect: a node that never becomes ready stops the RollingUpdate and causes a rollback); the taint/untaint commands are sketched after this list
  • We determine readiness with custom plugins in node-problem-detector, which has a few issues but overall works alright
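
A minimal PodDisruptionBudget sketch for the first bullet; the name, label, and minAvailable value are illustrative:

apiVersion: policy/v1   # policy/v1beta1 on clusters older than 1.21
kind: PodDisruptionBudget
metadata:
  name: my-app
spec:
  minAvailable: 1       # never drain the last running replica
  selector:
    matchLabels:
      app: my-app       # illustrative label, must match the Deployment's pods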
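
The taint dance from the third and fourth bullets boils down to something like this; the taint key and $NODE_NAME are made up for illustration:

# on boot (e.g. from user-data): keep regular pods off until the node is verified
kubectl taint nodes "$NODE_NAME" startup-validation=pending:NoSchedule
# ...checks pass, signal the ASG lifecycle hook, then remove the taint
kubectl taint nodes "$NODE_NAME" startup-validation:NoSchedule-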