I originally wrote this post for the OpsLevel blog. Published here with permission.
The term “DevOps” entered the IT industry in 2009 with the first [DevOpsDays](https://legacy.devopsdays.org/events/2009-ghent/) event held in Ghent, Belgium.
But the world is constantly changing. Since 2009, the IT space has shifted dramatically. Containers, microservices, and “serverless” computing have all taken the world by storm in the last decade. The term “DevOps” has also undergone a sort of transformation, though OpsLevel is bringing it back to its roots of Service Ownership.
Is DevOps still relevant? Or are we perhaps now in some sort of post-DevOps landscape?
DevOps, as it was originally intended, is absolutely still relevant! In fact, it was relevant before the term “DevOps” was coined. And it will remain relevant far into the future. For as long as people are doing development and operations, these principles will apply. Of course, it’s anyone’s guess whether we’ll continue to use that word in the future.
Let’s examine a few reasons that DevOps is relevant and timeless.
“DevOps” is a fancy name for “Cooperation”
Since its inception, the underlying goal of DevOps has always been to tear down the proverbial silos around development and operations. It’s about aligning the goals of development with the goals of operations. It’s about finding creative and effective ways to work together.
Put another way, DevOps is about cooperation.
Cooperation was the foundation of human civilization for millennia before the term “DevOps” was coined. And it will remain so for as long as humans exist.
The unique thing DevOps has offered us here isn’t the notion that cooperation is suddenly new or good. Rather, it’s a simple lens for identifying the areas where we sometimes fail to cooperate.
DevOps helps focus on quality
If the “how” of DevOps includes “cooperation”, then the “what” must include things like better-quality software and systems. It may feel good to cooperate all day long, hold hands, and make dandelion tiaras. But unless we’re producing something of value through our cooperation, no business is going to get behind the concept.
The DevOps lens helps us focus on improving the quality of our software and systems in a number of ways. From automated verification to release automation and system monitoring, DevOps offers us the concept of measuring and improving quality.
DevOps is about faster delivery
The other general category of value improvement that DevOps aims for is faster delivery. Practices like continuous integration, continuous deployment, test automation, and monitoring all help the disparate parts of a software or systems delivery chain cooperate so that we deliver value faster. And faster delivery of business value will not be going out of style any time soon!
We can likely agree the goals of DevOps are timeless. But what about the specific practices? A lot has changed in the technology landscape since 2009. What are the DevOps practices of 2021? How do they contribute to the goals of higher quality and faster delivery through cooperation?
Let’s look at some modern DevOps practices and how they may have evolved over the last decade or so.
Continuous Deployment (CD)
Where else to start, other than continuous deployment? It was, after all, the concept of 10+ deploys per day that kicked off the whole DevOps movement.
In 2009, we thought the idea of multiple deployments per day was novel. Many (myself included) first learned of the concept with giddy skepticism: “10 deploys per day sounds amazing! But it’s impossible on this project.”
Deployments are messy things, so the thinking went. They sometimes require hours of preparation, to say nothing of the long testing cycles we need to go through.
A decade ago, we were asking, “Is it possible for us to do continuous deployment?” Now most of us are asking, “Is it a good idea for us to do continuous deployment?” The barrier to entry, so to speak, is no longer technological; it’s a question of will and business strategy. And while this debate may last another decade, momentum clearly favors treating continuous deployment, or its close cousin continuous delivery, as appropriate for virtually all software projects.
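The core mechanic behind continuous deployment can be sketched very simply: every change flows through the same ordered, automated stages, and a failure at any stage stops the release. The stage functions below are hypothetical stand-ins for real build, test, and release commands, not any particular tool’s API.

```python
# Minimal sketch of a continuous-deployment gate: every change runs
# through the same automated stages, in order, and deployment happens
# only if all of them pass. Each stage here is a stand-in for a real
# build/test/release command.

def build() -> bool:
    print("building...")
    return True  # stand-in for a real compile/package step

def run_tests() -> bool:
    print("running tests...")
    return True  # stand-in for a real automated test suite

def deploy() -> bool:
    print("deploying to production...")
    return True  # stand-in for a real release step

def pipeline() -> bool:
    """Run each stage in order; stop at the first failure."""
    for stage in (build, run_tests, deploy):
        if not stage():
            print(f"pipeline stopped at: {stage.__name__}")
            return False
    return True

if __name__ == "__main__":
    pipeline()
```

The point of the sketch is the shape, not the stages themselves: because the whole path to production is automated and identical for every change, deploying ten times a day costs no more than deploying once.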
Continuous Integration (CI)
“Continuous integration” is one of the few terms that’s likely even more misunderstood than “DevOps”. Many people think continuous integration means the automated tests that run when you open a pull request and reward you with a green check mark, indicating that you haven’t broken anything vital.
Despite popular belief, continuous integration is actually about (drumroll, please) … integrating changes, continuously!
Martin Fowler says it about as clearly as possible (emphasis added):
Continuous Integration is a software development practice where members of a team *integrate their work frequently*, usually each person integrates at least daily…
There’s no mention here of automated tests, or even pull requests.
Let’s also look at the explanation of continuous integration offered in Kent Beck’s 2004 book, Extreme Programming Explained (2nd edition):
Continuous Integration — The latest code is built every night. The nightly builds provide us with insights about cross-component integration problems. Once per week we do an integration build where we ensure integrity across all components.
Notice that the practice of continuously integrating our code changes has nothing to do with the method of testing. Computers and compilers are much faster today than they were 20 years ago, so we can run our tests far more frequently. In another decade, perhaps we’ll be doing real-time integration as we type, right in our IDEs. The tools continue to evolve to make us more productive, but the goal of continuous integration is constant.
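To make the distinction concrete, here’s a sketch, using a throwaway git repository, of what “integrating continuously” actually means: work is merged back into the shared mainline at least daily, regardless of what test tooling (if any) runs afterward. It assumes the `git` CLI is installed; the file names and branch names are illustrative.

```python
# Simulate one day of continuous integration in a temporary repository:
# start a feature branch, do some work, and merge it back into main the
# same day instead of letting the branch drift for weeks.
import subprocess
import tempfile
from pathlib import Path

def git(repo: Path, *args: str) -> str:
    """Run a git command in the given repository and return its stdout."""
    return subprocess.run(
        ["git", "-C", str(repo), *args],
        check=True, capture_output=True, text=True,
    ).stdout

repo = Path(tempfile.mkdtemp())
git(repo, "init", "-b", "main")
git(repo, "config", "user.email", "dev@example.com")
git(repo, "config", "user.name", "Dev")

# Day 1: a developer commits a baseline, then starts a feature branch...
(repo / "app.txt").write_text("v1\n")
git(repo, "add", "app.txt")
git(repo, "commit", "-m", "initial version")
git(repo, "checkout", "-b", "feature")
(repo / "feature.txt").write_text("partial work\n")
git(repo, "add", "feature.txt")
git(repo, "commit", "-m", "day 1 of the feature")

# ...and, crucially, merges it back into main the same day. That merge,
# not any green check mark, is the "integration" in CI.
git(repo, "checkout", "main")
git(repo, "merge", "feature")
print(git(repo, "log", "--oneline"))
```

Automated tests make this habit safer and cheaper, but the habit itself, merging small changes back into the mainline daily, is the practice Fowler and Beck are describing.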
Day 2 operations
In the “good old days,” it was up to the operations team to handle the monitoring and maintenance of production systems and respond to alerts. This might have been good news if you were a developer, as you never had to bother with on-call rotations. But if you were in operations, it was frustrating because you might be debugging crashes caused by developers.
A common DevOps approach these days is to enable developers to manage their own services, summarized in the mantra “build it, ship it, own it.” We give developers the tools to update, roll back, monitor, and respond to alerts for their own services. In what’s sometimes called the “platform-as-a-service” approach, the operations team provides a platform on which developers deploy, monitor, and manage their own services. The operations team in this model is still responsible for operating the platform itself: the physical servers, network, storage, and so on. From the application layer up, the development team assumes responsibility for the reliability, performance, and maintenance of the service.
This model fits cleanly into the “cloud-native” approach but isn’t limited to it. A platform-as-a-service approach could just as well be employed on bare metal servers.
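One small, concrete piece of “own it” is the service exposing its own health signal for the platform to poll and alert on. Here’s a minimal sketch using only the Python standard library; the `/health` path, the response shape, and the port selection are illustrative assumptions, not a standard.

```python
# A development team owning its service's monitoring surface: the
# service exposes a /health endpoint, and the platform (run by the
# operations team) polls it to drive alerting.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

if __name__ == "__main__":
    # Port 0 asks the OS for any free port, so the demo runs anywhere.
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    with urlopen(f"http://127.0.0.1:{port}/health") as resp:
        print(resp.status, resp.read().decode())
    server.shutdown()
```

The division of labor is the interesting part: the platform decides how often to poll and when to page, while the team that wrote the service decides what “healthy” means.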
Day 2 operations have changed drastically over the last few decades, and they continue to evolve. Serverless architecture may be the next big wave, but it won’t be the last. Once again, the tools evolve, but the concepts do not. DevOps helps us no matter what.
This is not a thorough survey or history of DevOps. I could write many more pages on the continuing relevance of DevOps principles now, and into the future. But in broad strokes, we can see that DevOps is here to stay, no matter what name we use. The only question is this: in what new ways will we apply these principles as time marches on?