Isn't it wasteful to build after every commit?

Continuous delivery provides some very cheap insurance.

I recently had lunch with a CTO friend of mine. He told me that at his company, they’re not currently doing continuous delivery. When I asked why, he told me that it seemed unnecessary. That building the entire system after every commit or merge just seemed like a waste of resources when they weren’t prepared to deploy to production yet.

I expect this is a common sentiment, so I thought it would make sense to talk about it.

What are the benefits of continuous delivery (creating a deployable artifact for every change) when you aren’t using continuous deployment (deploying every change immediately to production)?

Even without deploying the software anywhere, continuous delivery buys us assurance that the build process is always in a working state, and that we can deploy at any moment.

Without continuous delivery, we may end up with a broken deployment and not realize it until much later. There are any number of reasons a deployment can break:

  • Code changes (that don’t break automated tests)
  • Configuration changes (YAML typo? Missing environment variable?)
  • Infrastructure changes (docker registry access changes? Password changes? DNS changes?)
  • Third-party dependency updates
  • AWS outage
  • Phase of the moon

If you aren’t validating that every change builds, you’re increasing the chances that each of these may occur when you do get around to building. And then it’s much harder to untangle the mess to determine which change(s) may be contributing to the build failure. In fact, this gets at the heart of the scientific method: Modify a single independent variable at a time.

And the cost is usually pretty low. Assuming your software can be built in 10 minutes or less, for example, the cost per build on GitHub Actions is less than 10 cents USD ($0.008 per build minute). That’s some pretty cheap insurance!
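If you're starting from nothing, a minimal GitHub Actions workflow that builds and archives an artifact on every push might look something like the sketch below. The build command and artifact path are placeholders; substitute whatever your project actually uses.

```yaml
# Illustrative sketch only: job name, build command, and artifact
# path are assumptions, not from any particular project.
name: build
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Replace with your project's real build step (hypothetical here)
      - run: make build
      # Keep the deployable artifact so you can ship it whenever you choose
      - uses: actions/upload-artifact@v4
        with:
          name: app
          path: dist/
```

Even this much gets you the insurance described above: every merge proves the build still works, and the uploaded artifact is ready to deploy the moment you decide to.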
