Science is never settled

December 27, 2021

We hear a lot of things in popular and social media. And if there’s one thing we hear all the time, it’s claims of science proving things, especially in headlines:

  • “Science proves mask mandates save lives”
  • “Science proves Roe v. Wade wrong. Humanity doesn’t start at 20 weeks”
  • “Feeling lonely? Go for a solitary walk in the woods, science says”
  • “This is How the Keto Diet Affects Aging, Science Says”

If you’re at all a skeptic, every one of these headlines probably has you thinking “but the science is never settled!”

All of this is equally true when it comes to software. We don't actually have a lot of scientific research available to us about what works and what doesn't in software engineering. But we do have more than zero.

So when we read a claim like "Code review is shown to improve software quality," we are right to be skeptical and point out that "the science is never settled."

But it’s also vital to understand what “never settled” actually means.

“Never settled” doesn’t mean “we have no information about this thing.” It means “There’s still room for better, more precise data.”

Let's take the code review claim as an example.

While it’s within the realm of theoretical possibility that some new study will come out and demonstrate that not reviewing code is the best way to improve quality, that’s unlikely.

What's far, far more likely is that we'll gain more insight into why code review was helpful in the cases studied. Maybe we'll determine that code review of a certain type is more effective than others. Or that code review for certain types of software, in certain problem domains, or coupled with certain team structures, is more or less effective.

How should this affect you? How does it affect me?

I take it seriously when I see a legitimate study on the topic of software development, and I take its conclusions seriously. But I also recognize that until a study has been repeated many times, under many different scenarios (which will probably take decades, if not longer), it is probably pointing me in the right direction but is not an ideal how-to guide.

Consider a few conclusions of this kind:

  • Teams that practice continuous delivery have consistently better business outcomes.
  • Teams that don’t use GitFlow demonstrate higher performance.
  • Teams that peer review each other's code produce fewer defects.
  • Teams that do TDD do not produce fewer defects.
  • Teams that have developers write their own tests have better business outcomes than teams with strictly dedicated testers.

Each of these conclusions is supported by one or more scientific studies on the topic of software delivery. And each is an interesting pointer. But none of them are particularly useful as a how-to guide, especially when you consider the unique situation each team and software project faces.

Is it generally good advice to do continuous delivery? Probably. Is your team, project, code base, or other circumstance an exception? We don’t really know yet. But until we’re faced with evidence to the contrary, we can assume that continuous delivery will help improve business outcomes on most or all software projects.

In summary, the science is never settled. But we can, and should, use it as a guideline anyway. The odds are in your favor if you follow the science, even though it’s not yet settled.

