Testing Rego with enforced code coverage

A Ruby rake task we use to test our Rego policies. Policies need to live in the policies/ folder, and any line that is not exercised by tests fails the task.

desc "Test policies"
task test: ["update:opa"] do
  output = `opa test --coverage --verbose policies/* 2>&1`
  abort output unless $?.success?

  coverage = JSON.parse(output).fetch("files")
  errors = policy_files.flat_map do |policy|
    return [policy] unless result = coverage[policy] # untested

    (result["not_covered"] || []).map do |line|
      start = line.dig("start", "row")
      finish = line.dig("end", "row")
      "#{policy}:#{start}#{"-#{finish}" if start != finish}"
    end
  end
  abort "Missing coverage:\n#{errors.join("\n")}" if errors.any?
end
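
For reference, the parsed coverage report has roughly this shape (abridged to the keys the task reads; the file name is just a placeholder):

# abridged result of JSON.parse(output), showing one file with one uncovered range
{
  "files" => {
    "policies/example.rego" => {
      "not_covered" => [
        { "start" => { "row" => 12 }, "end" => { "row" => 14 } }
      ]
    }
  }
}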

Simple Kubernetes Leader Election via Entrypoint script

Leader election in Kubernetes is often done via sidecars plus Endpoints or Leases, which is a lot of complexity compared to ConfigMap-based locking (as used by operator-sdk). ConfigMap locking also avoids having the leader move around during execution.

kube-leader provides a downloadable binary that implements leader election via a Docker ENTRYPOINT. Add it to your Dockerfile, add the Kubernetes env vars and permissions, and you are done.

Verify Pagerduty reaches On-Call by Cron

We had a few incidents where on-call devs missed their calls because of various spam-blocking setups or “do not disturb” settings.
We now run a small service that test-notifies everyone once a month to make sure notifications go through. Notifications go out shortly before their “do not disturb” window ends, so we do not wake anyone in the middle of the night but still get a realistic situation.
Our setup has more logging, stats, etc., but it goes something like this:

# configure user schedule
require 'yaml'
users = YAML.load <<~YAML
- name: "John Doe"
  id: ABCD
#  cron: "* * * * * America/Los_Angeles" # every minute ... for local testing
  cron: "55 6 * * 2#1 America/Los_Angeles" # every first Tuesday of the month at 6:55am
# ... more users here
YAML

# code to notify users
require 'json'
require 'faraday'
def create_test_incident(user)
  connection = Faraday.new
  response = nil
  2.times do
    response = connection.post do |req|
      req.url "https://api.pagerduty.com/incidents"
      req.headers['Content-Type'] = 'application/json'
      req.headers['Accept'] = 'application/vnd.pagerduty+json;version=2'
      req.headers['From'] = 'realusers@email.com' # incident owner 
      req.headers['Authorization'] = "Token token=#{ENV.fetch("PAGERDUTY_TOKEN")}"
      req.body = {
        incident: {
          type: "incident",
          title: "Pagerduty Tester: Incident for #{user.fetch("name")}, press resolve",
          service: {
            id: ENV.fetch("SERVICE_ID"),
            type: "service_reference"
          },
          assignments: [{
            assignee: {
              id: user.fetch("id"),
              type: "user_reference"
            }
          }]
        }
      }.to_json
    end
    if response.status == 429 # pagerduty rate-limits to 6 incidents/min/service
      sleep 60
      next
    end
    raise "Request failed #{response.status} -- #{response.body}" if response.status >= 300
  end
  JSON.parse(response.body).fetch("incident").fetch("id")
end

# run on a schedule (no threading / forking)
require 'serial_scheduler'
require 'fugit'
scheduler = SerialScheduler.new
users.each do |user|
  scheduler.add("Notify #{user.fetch("name")}", cron: user.fetch("cron"), timeout: 10) do
    user_id = user.fetch("id")
    incident_id = create_test_incident(user)
    puts "Created incident for #{user_id} https://#{ENV.fetch('SUBDOMAIN')}.pagerduty.com/incidents/#{incident_id}"
  rescue StandardError => e
    puts "Creating incident for #{user_id} failed #{e}"
  end
end
scheduler.run

AWS Rolling-Replace Kubernetes Nodes Without App Errors or Downtime

We do a few things to keep the apps on our clusters from crashing during rolling instance replacements, so we can keep the overall noise and deployment errors low. I wanted to write them up since some of them are not very intuitive, and the combination of all of them makes for really safe cluster updates.

  • Set up a PodDisruptionBudget for every deployment so it stays available no matter which combination of nodes we take down (see the sketch after this list)
  • When the AWS AutoScalingGroup wants to update, before it takes down an instance, an autoscaling lifecycle hook sends a message to SQS; we drain the instance and then signal the hook to continue (a separate little app we built, but very simple overall)
  • When a new instance boots, we taint it so no pod lands on a potentially broken node (infrastructure pods with tolerations, like the CNI, also come up quicker since the node only has a few pods on it to begin with)
  • The AWS RollingUpdate waits for the “all good” signal that we send once the node is finally ready, and only then do we untaint it (a nice side effect is that nodes that never become ready stop the RollingUpdate and cause a rollback)
  • We determine readiness with custom plugins for node-problem-detector, which has a few issues but overall works alright
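
For the first point, a minimal PodDisruptionBudget sketch could look like this (placeholder names and numbers, not our actual manifest; pick minAvailable to fit each deployment's replica count):

# keep at least 2 pods of this deployment running during voluntary disruptions such as a node drain
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app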