Christmas Breaks

Deploying early & often is addictive. Frequent deploys make it easier to identify regressions, tighten feedback loops, and get fixes to customers sooner. So I’m happy that the Cash App team has tests & tools for frequent deploys.

All this deploying has a catch: we don’t notice slow memory leaks! Consider:

  fun runHourlyReports() {
    for (report in reports) {
      val executor = Executors.newSingleThreadExecutor()
      if (!report.isEnabled()) continue // leak! skips executor.shutdown() below
      executor.submit { report.run() }
      executor.shutdown()
    }
  }

We run this function once an hour, and each run leaks a thread for every disabled report. If we deploy Friday at 1pm and not again until Monday at 1pm, then with a single disabled report the long-running process will have 72 unnecessary threads.
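The fix is small. Here’s a leak-free sketch — Report and its run() method are hypothetical stand-ins, since the post doesn’t show them — that checks isEnabled() before creating the executor and always shuts the executor down:

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Hypothetical stand-in for the real report type.
class Report(val name: String, private val enabled: Boolean) {
  fun isEnabled() = enabled
  fun run() = println("ran $name")
}

fun runHourlyReports(reports: List<Report>) {
  for (report in reports) {
    // Check *before* creating the executor, so disabled reports allocate nothing.
    if (!report.isEnabled()) continue

    val executor = Executors.newSingleThreadExecutor()
    try {
      executor.submit { report.run() }.get() // wait for the report to finish
    } finally {
      executor.shutdown() // release the worker thread even if run() throws
      executor.awaitTermination(10, TimeUnit.SECONDS)
    }
  }
}
```

Moving the isEnabled() check first is the cheap part; the try/finally is what guarantees the worker thread is released on every path.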

72 leaked threads is bad, but without careful monitoring we might miss it! That much wasted memory — each idle thread holds on the order of a megabyte of stack — isn’t enough to crowd out other features.
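Catching a slow leak like this is mostly a matter of watching the right gauge. A minimal sketch using the JVM’s built-in ThreadMXBean — the 20-thread slack is an arbitrary illustration, and a real service would export the count to its metrics system instead:

```kotlin
import java.lang.management.ManagementFactory

val threadMXBean = ManagementFactory.getThreadMXBean()

// Compare the live thread count against a baseline taken at startup.
// A count that only ever climbs is the signature of a slow leak.
fun leakSuspected(baseline: Int, slack: Int = 20): Boolean =
    threadMXBean.threadCount > baseline + slack
```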

Changing the Cadence

We might not do any deploys between December 23 and January 3. Lots of people are out of office, and there’s no trigger to deploy if code hasn’t changed. By January 3, the process will have 264 leaked threads (11 days × 24 hourly runs)! That’ll degrade performance because less memory is available for useful work.

If your service is behaving poorly, check how long it’s been running! A restart might be an easy fix.
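On the JVM that check is one line — RuntimeMXBean.uptime is the number of milliseconds since the process started:

```kotlin
import java.lang.management.ManagementFactory

// Milliseconds since this JVM started, converted to hours.
fun uptimeHours(): Long =
    ManagementFactory.getRuntimeMXBean().uptime / (1000L * 60 * 60)
```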

It’d be nice to not have slow leaks. But there’s a simpler mitigation: don’t let processes run so long. Ideally our platform cycles processes in and out whether or not we’re deploying. Functions-as-a-service does this, and it’s one of the many reasons why that model is the future.
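Without such a platform, one hedge is to cap the process’s lifetime yourself and let the supervisor (systemd, Kubernetes, and friends restart exited processes) bring up a fresh one. A sketch, with an arbitrary 24-hour default:

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.ScheduledFuture
import java.util.concurrent.TimeUnit
import kotlin.system.exitProcess

// Schedule a clean exit after maxLifetimeHours; the process supervisor
// then restarts us with a fresh heap and zero leaked threads.
fun scheduleRecycle(maxLifetimeHours: Long = 24): ScheduledFuture<*> {
  val timer = Executors.newSingleThreadScheduledExecutor { runnable ->
    Thread(runnable).apply { isDaemon = true } // don't keep the JVM alive
  }
  return timer.schedule({
    exitProcess(0) // exit code 0: a planned restart, not a crash
  }, maxLifetimeHours, TimeUnit.HOURS)
}
```

Pair this with a rolling restart policy so every instance doesn’t recycle at the same moment.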