My Most Controversial Opinions

January 16, 2020

Edited by Taavi Kivisik

Happy New Year to everyone!

I was excited to kickstart the new year with a new position at Lana, a Spanish FinTech startup. As part of my first week on the job, I met a candidate for another position there, and we started talking about controversial opinions in IT. Unfortunately, we found nothing to disagree about. But the conversation inspired me to catalog a number of my own opinions about software development, which may be more controversial.

Feel free to agree or disagree with anything I say here. And I’d love to hear your own controversial opinions–especially if you disagree with me. And if you can convince me that you’re right, I’ll buy you a beer! 😀

1. ORMs (Object-Relational Mappers) are terrible

Early in my programming career, I was convinced by a more experienced peer that an object-relational mapper (ORM) makes application development easier. At the time we were using Perl’s Catalyst framework, along with the DBIx::Class ORM. After what I remember was a few weeks of evaluation, he had me convinced. The active record pattern seemed nice, and not having to manually craft SQL queries was also a bonus.

Or so I thought.

Within a year, it had become common practice to write an SQL query by hand, then spend hours, sometimes days, figuring out how to craft the proper ORM query to produce the same result.

In addition, a majority of our performance problems came from the “easy” queries the ORM generated for us. They were utterly terrible.
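The classic failure mode in this category is the “N+1” query pattern: a convenient accessor that quietly issues one query per row. Here’s a rough Go sketch of the problem (my own toy users/orders schema, with Postgres-style placeholders, not our old Perl code), next to the hand-written alternative:

```go
// Package nplusone sketches why an ORM's "easy" queries get slow.
package nplusone

import "database/sql"

// Naive mimics what a lazy-loading ORM does behind your back: one
// query for the users, then one more query per user for that user's
// orders. For 100 users, that is 101 round trips to the database.
func Naive(db *sql.DB) (map[int][]string, error) {
	rows, err := db.Query(`SELECT id FROM users`)
	if err != nil {
		return nil, err
	}
	var ids []int
	for rows.Next() {
		var id int
		if err := rows.Scan(&id); err != nil {
			rows.Close()
			return nil, err
		}
		ids = append(ids, id)
	}
	rows.Close()

	out := make(map[int][]string)
	for _, id := range ids { // N extra queries: the "N+1" problem
		orders, err := db.Query(`SELECT item FROM orders WHERE user_id = $1`, id)
		if err != nil {
			return nil, err
		}
		for orders.Next() {
			var item string
			if err := orders.Scan(&item); err != nil {
				orders.Close()
				return nil, err
			}
			out[id] = append(out[id], item)
		}
		orders.Close()
	}
	return out, nil
}

// Joined does the same work in a single round trip, with a JOIN you
// can read, explain, and put an index under.
func Joined(db *sql.DB) (map[int][]string, error) {
	rows, err := db.Query(`
		SELECT u.id, o.item
		  FROM users u
		  JOIN orders o ON o.user_id = u.id`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	out := make(map[int][]string)
	for rows.Next() {
		var id int
		var item string
		if err := rows.Scan(&id, &item); err != nil {
			return nil, err
		}
		out[id] = append(out[id], item)
	}
	return out, rows.Err()
}
```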

Others, in particular Jeff Atwood, and before him Ted Neward, have famously said that ORMs are the Vietnam of Computer Science. I completely agree with their harsh critique, and encourage you to read their articles if you have time (Neward’s in particular is a long read).

My TL;DR version goes like this:

ORMs make easy things easy, and difficult things impossible.

I cannot imagine any scenario in which an ORM is the right tool for the job. None. Zero.

There are times when an SQL query builder is the right tool. In such a case, use just a query builder, and not an entire ORM library.
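As a minimal sketch of the difference, here’s Go’s squirrel query builder (my choice for the example; any query builder makes the same point). You keep the composability that makes ORMs attractive, but the SQL that runs is never a mystery:

```go
package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	// Start with a base query and compose conditions onto it, e.g.
	// from optional request filters. This is the legitimately hard
	// part of hand-writing SQL that a builder solves.
	q := sq.Select("id", "email").From("users")

	onlyActive := true // imagine this comes from a request parameter
	if onlyActive {
		q = q.Where(sq.Eq{"status": "active"})
	}

	query, args, err := q.OrderBy("created_at DESC").Limit(10).ToSql()
	if err != nil {
		panic(err)
	}
	fmt.Println(query) // SELECT id, email FROM users WHERE status = ? ORDER BY created_at DESC LIMIT 10
	fmt.Println(args)  // [active]
}
```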

Further, if you really need objects, consider a document store instead of a relational database. CouchDB is a good choice for many such workloads.

2. MVC isn’t that great either

My feelings about the Model-View-Controller (MVC) pattern aren’t nearly as strong as my opinion about ORMs. MVC was made popular in web development by Ruby on Rails, and has been copied many times since. Catalyst also uses MVC, and is where I first learned about it.

Disregarding the fact that modern MVC, particularly in web development, bears practically no resemblance to the historical meaning of the term, the common pattern is usually silly.

I’m completely in favor of proper separation of concerns, which is ostensibly what MVC aims to provide. But in reality, it’s never quite that simple. Part of this failing is that modern applications attempt to cram all business logic into a paradigm originally designed to handle only a UI. But I digress.

Every MVC application I’ve ever seen outside of a classroom or tutorial has been a mess. Typically, there’s a very thin View layer. Most applications just use a simple HTML template generator, or if they’re using client-side rendering, the view is even thinner: a REST API of sorts.

This leaves everything else to be split between the Model and the Controller. Some will (rightly, in my opinion) argue that all database-specific code belongs in the Model. And while it goes squarely against the original intent of MVC, every application I’ve ever seen has the controller prepare data to be passed on to the view.

This leaves a huge question: where does business logic belong? I’ve seen business logic in controllers, in models, and also in some fourth, unnamed segment of business logic and “other stuff”. (Perhaps we should more accurately call this pattern MVCx?) And what’s worse: I often see all three patterns in the same project.
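A tiny, entirely hypothetical Go example of what I mean (the names and the 21% tax rate are mine, purely for illustration). The interesting function below touches no HTTP and no database, so MVC has no name for the layer it lives in:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// totalWithTax is business logic. It is neither Model (no database),
// View (no rendering), nor Controller (no request handling). This is
// the unnamed "x" in MVCx.
func totalWithTax(subtotal, taxRate float64) float64 {
	return subtotal * (1 + taxRate)
}

// invoiceHandler is the "controller": it parses the request, calls
// the logic, and prepares data for the "view" (here, bare JSON).
// Notice how little of the interesting work is MVC-shaped.
func invoiceHandler(w http.ResponseWriter, r *http.Request) {
	var req struct {
		Subtotal float64 `json:"subtotal"`
	}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	resp := map[string]float64{"total": totalWithTax(req.Subtotal, 0.21)}
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/invoice", invoiceHandler)
	http.ListenAndServe(":8080", nil)
}
```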

In summary: MVC is nearly worthless for modern web applications. The MVC-style code organization doesn’t map to the real world, leaving an ambiguous mess for everyone to re-invent, all the while pretending they’re following some best-practice or standard.

3. OOP is cancer

If I haven’t pissed you off enough yet by dissing ORMs and MVC, let me mount a broader attack against another three-letter acronym.

Many authors have thought a lot harder about this than I have, and have done a great job of articulating why object-oriented programming (OOP) is misguided.

Let me summarize what I think are the main reasons OOP is bad:

  1. Inheritance doesn’t map to the real world

    Very few real-world relationships fit into the neat hierarchies that OOP assumes are fundamental to reality. A dog may be a more specific instance of an animal, but that relationship becomes irrelevant the second you set foot outside of your OOP classroom. What matters when programming about dogs is the characteristics of a dog (its data) and what it can do (its methods). The inheritance model is an unnecessary complication. (I sketch the alternative in code after this list.)

  2. OOP leads to “worst practices”

    A lot has been written about the so-called “best practices”, which most of us take for granted these days. This includes the many books about various Design Patterns, and even the SOLID principles. Much of this is based on a false belief. The fallacy isn’t, per se, that these practices are bad. The fallacy is that they’re necessary. The majority of these practices exist for the sole purpose of fixing OOP.

    When you leave OOP behind, the need for many of these practices simply vanishes. Many Go developers recognize this, especially when they try to help new Gophers (yes, that’s the proper name for one who codes in Go) who are constantly asking how to write factory methods in Go, or how to implement inheritance. These are bad habits, forced upon us by OOP.

  3. Strict OOP languages impose a reality-distortion field on your brain

    Now, not all languages are strictly OOP. JavaScript, for example, lets you choose your paradigm, using objects for some things, procedural code for others, and even functional code in some areas.

    Other languages, such as Java, take the ridiculous position that literally everything in the universe is an object. This means that for any action to be performed, it must be done by an actor. That might seem reasonable at first. You want to add tax? You need a TaxAdder. Where does that come from? The TaxAdderFactory, of course. We can’t just have an action called AddTax. No, no, no! We worship objects, so an object must do this action for us!

    Steve Yegge does a brilliantly humorous job of exposing this nonsense in his post Execution in the Kingdom of Nouns.
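To make points 1 and 3 concrete, here’s a toy Go sketch (my own example, not from any real codebase) of what all that ceremony boils down to:

```go
package main

import "fmt"

// A dog is its data plus what it can do. No Animal base class, no
// hierarchy; Go doesn't even offer inheritance, and it isn't missed.
type Dog struct {
	Name string
	Legs int
}

func (d Dog) Speak() string { return d.Name + " says woof" }

// And adding tax is just a function. No TaxAdder, no TaxAdderFactory,
// no dependency-injected AbstractTaxAdderProvider.
func AddTax(amount, rate float64) float64 {
	return amount * (1 + rate)
}

func main() {
	d := Dog{Name: "Rex", Legs: 4}
	fmt.Println(d.Speak())                 // Rex says woof
	fmt.Printf("%.2f\n", AddTax(100, 0.21)) // 121.00
}
```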

Even the inventor of the term “object-oriented” distances himself from what it has become:

“I made up the term ‘object-oriented’, and I can tell you I didn’t have C++ in mind.”
— Alan Kay, OOPSLA ‘97

4. GitFlow is anti-agile

GitFlow is a git branching model which has become popular in many organizations. I honestly don’t know why.

It encourages silos and hand-offs, increases feedback times, and generally adds complexity where it’s not needed. In short, it encourages waterfall development practices. And that’s to say nothing of GitFlow’s technical deficiency: by requiring every piece of code to be merged into at least three branches (more for “hot fixes”), it renders git history practically unusable. It’s very cumbersome to trace a change back to where it originated.

That may well be by design: GitFlow was likely created to streamline existing waterfall methodologies.

But any team striving for agility should avoid GitFlow like the plague. Trunk-based development is much simpler, addresses every use case of GitFlow, and tears down silos rather than erecting them.

I will probably write a dedicated post about this eventually, because it’s one of the easiest problems many teams could fix to improve productivity. Until then, others have written well on the topic.

5. Scrum is anti-agile

I expect to get a lot of flak for this one. And I may be exaggerating, but only a bit.

Scrum is often considered almost a synonym for “Agile”. I think this is a big mistake. Even Scrum.org doesn’t equate them in their What is Scrum? explanation. It says:

“Scrum is a framework within which people can address complex adaptive problems, while productively and creatively delivering products of the highest possible value.”

It later mentions “Scrum and agile software development techniques” as two distinct things.

So if I don’t think Scrum is agile, how do I view Scrum?

Well, I think Scrum.org’s description is more or less accurate (although biased). Scrum is a framework for addressing complexity, primarily in product development. And while many of the ideals that Scrum aims to achieve are compatible with, and in some cases directly taken from, the Agile manifesto, there’s absolutely no guarantee that implementing Scrum will make your team agile.

In fact, based on my observations and experience, there’s a negative correlation between Scrum and agility. That is to say, adopting Scrum often makes a team less agile.

This is in part because Scrum is so precisely described, with story points, fixed-length sprints, and the various ceremonies. It’s also in part because many teams, perhaps most, don’t realize there’s more to the picture. They feel that once they’ve “achieved Scrum”, they have “achieved Agile”. This is practically never the case.

The first value in the agile manifesto is “Individuals and interactions over processes and tools”. And while Scrum pays lip service to this, in practice it puts tools and processes in the driver’s seat (Scrum being that tool/process). I would go so far as to say that any prescriptive “agile framework” has the same problem. In that sense, agile frameworks are, inherently, anti-agile.

I could go on and on about this, but one last point. Again from the agile manifesto, the final value is “Responding to change over following a plan”. This is manifest in Scrum in the ceremony of the retrospective. But if you take this value and the Scrum retrospective to heart, your team will constantly change how it works, and after a couple of cycles you’ll no longer be following strict Scrum.

I know many Scrum practitioners who truly honor the spirit of the agile manifesto, and they do it with Scrum as a tool. I don’t mean to devalue their work in any way. In fact, there are times when even I would choose to implement Scrum in a team. My point is simply that Scrum doesn’t make a team agile, that it can get in the way of agility if you’re not careful, and that Scrum alone is never enough if agility is the goal.

6. Dedicated QA is anti-agile

One of the big changes I made when I worked at Bugaboo was to fire our quality assurance (QA) team.

Okay, that’s a bit over-dramatic. I didn’t actually fire anyone. And I don’t think QA is a bad idea. What I oppose is QA being a separate phase of development, whether on a separate team, or handled by a separate individual in a larger team.

At Bugaboo, we followed a cycle, which by all indications is common in most projects with dedicated QA:

  1. A developer would “finish” some feature and hand it to QA.
  2. A few days later, QA would give it back to the developer with a list of problems.
  3. Rinse, repeat.

This creates two serious problems:

  1. It creates an inherently adversarial relationship between developers and QA engineers. The success of each party is expressly at odds with the success of the other. Even with emotionally mature people, this is not a healthy dynamic. And with less mature people, who are far more common, it often leads to hurt feelings, if not outright blaming, name-calling, or worse.
  2. It increases the lead time of every story. Any student of the Toyota Production System, or similar schools, will recognize that lead time is a function of (among other things) waiting time. Any time a piece of work, such as a user story, crosses work-center boundaries, there’s a potential for additional waiting time. In simpler terms, every time a user story changes hands from dev to QA or QA to dev, there is a delay (unless the QA and dev are otherwise completely idle). Hand-offs are inherently inefficient.
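To put rough numbers on the waiting-time point (my own, for illustration, not measurements from Bugaboo): suppose each hand-off parks a story in the other party’s queue for one day on average. A story that needs two rounds of QA feedback crosses the dev/QA boundary four times, which means four days of pure waiting wrapped around perhaps two days of actual work. The lead time triples before anyone has done anything slowly.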

At Bugaboo, it was common that our devs would be busy for the first half of a 2-week sprint while the QA engineers were idle, then for the second half the QA engineers had too much work to do, while the devs were waiting idly for QA feedback.

So I disbanded the QA team, and put our QA engineers into advisory roles. QA was still a part of the process, but it was carried out primarily by the devs who did the work, with the QA engineers available for consultation. The primary responsibility of our QA engineers shifted from doing testing, to supporting testing, and maintaining the automated test tools.

I never again heard of a dev/QA fight (they were previously common), and while some of the devs did complain a bit, especially at first, about their larger workload, in the end I believe everyone considered it a net gain. Even the dev who complained most vocally about having to do more work said, in the end, that he was glad for the change.

I also touched briefly on this topic in a previous post.

7. Every developer should use TDD

I first seriously tried test-driven development (TDD) in 2009. I quickly gave up. It was slow and cumbersome. Although I appreciated unit tests, I found it much easier to write my tests after the fact.

Bit by bit, I found that it was more efficient to write my tests after writing only a small amount of code rather than, say, after completing a feature. Then somewhere along the line, I don’t remember when precisely, I found myself doing TDD.

Now I use TDD on virtually all code I write. There are rare exceptions, but even then, in most cases, by the end, I wish I had done TDD.

Learning the basics of writing good unit tests can take perhaps a few months. But once that investment is made, I have found that TDD makes one a faster programmer, even counting the time spent writing tests. The common assumption that TDD makes coding slower but pays off in the end is actually wrong: TDD makes coding faster and pays off in the end. I recommend Martin Fowler’s post Is High Quality Software Worth the Cost?, which expands on this topic.
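If you’ve never seen the rhythm, here’s a toy Go illustration (the function and numbers are mine, purely for demonstration). The test file is written first, and fails; then just enough code is written to turn it green:

```go
// tax_test.go: written first. It won't even compile until AddTax
// exists, and it fails until AddTax is correct. That's the "red".
package tax

import "testing"

func TestAddTax(t *testing.T) {
	cases := []struct {
		name  string
		cents int64
		rate  float64
		want  int64
	}{
		{"standard rate", 10000, 0.21, 12100},
		{"zero amount", 0, 0.21, 0},
		{"zero rate", 10000, 0, 10000},
	}
	for _, c := range cases {
		if got := AddTax(c.cents, c.rate); got != c.want {
			t.Errorf("%s: AddTax(%d, %v) = %d, want %d",
				c.name, c.cents, c.rate, got, c.want)
		}
	}
}
```

```go
// tax.go: written second, with just enough code to go "green".
package tax

import "math"

// AddTax applies rate to an amount in integer cents, rounding
// fractional cents to the nearest whole cent.
func AddTax(cents int64, rate float64) int64 {
	return cents + int64(math.Round(float64(cents)*rate))
}
```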

8. Every team should practice Continuous Deployment

I’ve already written a bit on this topic before, but let me be extra clear here:

It is my firm belief that every software development team should practice Continuous Deployment, not merely Continuous Delivery.

That is to say, every time new code is merged into master, it should be immediately and automatically deployed into production such that live customers can use the code.

The only exceptions to this are when government regulations require special audits of the code, or when distributing packaged software (such as a mobile app) that would unduly burden customers if they were to receive updates multiple times per day.

Releasing to staging and then doing further testing is clinging to waterfall ideals. Move all of your testing to before that Merge button is pressed.

The reasons for this are numerous. Here are just a few I think are most important:

  • Automated deployments mean consistent deployments. No more “did you remember to run the migration?” conversations.
  • Regular deployments to production increase confidence in the process. You should never be afraid to deploy on a Friday afternoon.
  • Knowing that hitting the Merge button will expose your code to customers immediately makes you a much more conservative coder and tester. Without the safety net of “someone will find any bugs before release”, you actually produce better code (I know I do!).