Reader responses to ORM series
I’ve received several reader comments in response to last week’s emails about databases and ORMs. I love that. Keep the comments coming! Two comments in particular, both about PostgreSQL, I thought were worth digesting and sharing: PostgreSQL comes with a rich set of JSON functions, which can make it work as a quite capable JSON document store. This can be valuable if you already have PostgreSQL installed, and don’t want to complicate your infrastructure by adding MongoDB or similar.
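To make the “document store” idea concrete, here is a minimal sketch of storing and querying JSON inside a relational table. The post describes PostgreSQL’s JSON functions; to keep this runnable without a server, I use Python’s stdlib sqlite3, whose JSON1 functions are roughly analogous to PostgreSQL’s `->>`/`jsonb` operators. The table and column names are invented for illustration.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute(
    "INSERT INTO docs (body) VALUES (?)",
    (json.dumps({"name": "widget", "price": 19.99}),),
)

# Query a field inside the stored JSON document, database-side,
# much as PostgreSQL's json functions and ->> operator would.
row = conn.execute(
    "SELECT json_extract(body, '$.name') FROM docs"
).fetchone()
print(row[0])  # widget
```

In PostgreSQL the equivalent query would use something like `body->>'name'`, with `jsonb` columns and GIN indexes available when you need real document-store performance.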
ORMs complicate your application
I remember the first time I was introduced to an Object-Relational Mapper (ORM). It was Perl’s DBIx::Class, which was the default database access layer in the Catalyst MVC framework. At first it seemed cool. It was sexy! I could magically convert my boring relational data into native Perl objects! But within a year or so, I was bumping into some problems. A typical conversation between me and a work colleague would go something like this:
Can we build our own SQL queries?
Whenever I talk to someone about giving up their beloved ORM, one objection inevitably comes up: I need my ORM to generate SQL for me. If I dig deeper, I usually get one of these underlying reasons: the desire for SQL-agnosticism, so that the same code can work with MySQL, PostgreSQL, SQLite, or any other SQL flavor; or unfamiliarity with the nuances of SQL queries, joins, and so on. These are both arguably valid concerns.
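For the second concern, it’s worth seeing how small the gap actually is. Here is a hypothetical sketch, using Python’s stdlib sqlite3, of hand-writing the kind of parameterized join an ORM would otherwise generate; the schema and names are invented. Parameter placeholders handle the dangerous part (escaping) for you.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Alice');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 17.5);
""")

# An explicit, hand-written join: no query builder, no hidden
# translation layer between you and the database.
total = conn.execute(
    """
    SELECT SUM(o.total)
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.name = ?
    """,
    ("Alice",),
).fetchone()[0]
print(total)  # 42.5
```

The SQL-agnosticism concern is harder to dismiss, though in practice most applications never actually switch database engines.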
You're ORMing wrong
Object-Relational Mapping (ORM) is pretty ubiquitous these days. Many people take for granted that you’ll be using an ORM in your app. The question is usually “Which ORM should we use?” not “Should we use an ORM?” This mindset leads to many misuses and abuses of ORMs. Today I’ll talk about one of them. Imagine you’re selling widgets in a web store. You store the widget data in a table in your relational database.
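The excerpt sets up widget data in a relational table. One common ORM misuse (my illustration here, not necessarily the specific one the post goes on to discuss) is loading every row into objects to compute something the database could compute itself. Sketched with stdlib sqlite3 and invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT, price REAL);
    INSERT INTO widgets VALUES (1, 'sprocket', 9.99), (2, 'gear', 4.50);
""")

# ORM-style antipattern (pseudocode): hydrate every row into an
# object, then aggregate in application memory:
#   total = sum(w.price for w in session.query(Widget).all())

# Letting the database do the aggregation avoids transferring and
# hydrating rows you never needed as objects:
count, avg_price = conn.execute(
    "SELECT COUNT(*), AVG(price) FROM widgets"
).fetchone()
print(count, avg_price)
```

With two rows the difference is invisible; with two million, it’s the difference between milliseconds and an out-of-memory error.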
The 3 Rules of TDD (Plus bonus rule)
“Uncle” Bob Martin authored the Three Rules of TDD, which are enough to get started with TDD: You are not allowed to write any production code unless it is to make a failing unit test pass. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
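The three rules in miniature, as a hedged sketch using Python’s stdlib unittest (the function and test names are invented). First you write just enough test to fail — and before `add` exists, even referencing it fails, which under rule 2 counts. Then, under rules 1 and 3, you write only enough production code to make that one test pass:

```python
import unittest

# Production code: just enough to make the one failing test pass (rule 3).
def add(a, b):
    return a + b

# The test that was written first, and initially failed (rules 1 and 2):
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Run the suite programmatically so the example is self-contained.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
)
print("passing:", result.wasSuccessful())
```

The cycle then repeats: next failing test, next sliver of production code.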
Test-Driven vs Test-First
I have observed a lot of confusion out there about the term Test-Driven Development (or TDD). Let me demonstrate with a paraphrased comment from a former colleague: I tried Test-Driven Development once, and it was a disaster, and a huge waste of time. I spent a week writing all the test cases, then when I got to writing the code, I realized most of my test cases weren’t right, and had to be rewritten anyway.
The one valid use for a code coverage percentage
After railing against using a test code coverage percentage to measure the value of your tests, let me offer one small exception. There is one way in which I find test coverage to be useful: comparing the delta of test coverage against expectations. Strictly speaking, this doesn’t need to be a measure of coverage percentage. It can be a report of raw lines (or branches, or statements) covered. The idea is: I don’t generally expect this number to go drastically down (or for that matter, up).
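The delta check is simple enough to automate. A hypothetical sketch (function names and the “expected band” thresholds are invented): given the raw lines covered before and after a change, flag any delta that falls outside what you expected.

```python
def coverage_delta(before_lines, after_lines):
    """Return the change in raw covered lines between two test runs."""
    return after_lines - before_lines

def delta_surprising(delta, expected_min=-5, expected_max=50):
    """True if the delta falls outside the (invented) expected band."""
    return not (expected_min <= delta <= expected_max)

# A big drop in covered lines is worth investigating before merging.
print(coverage_delta(1200, 1100))                    # -100
print(delta_surprising(-100))                        # True
print(delta_surprising(coverage_delta(1200, 1210)))  # False
```

The point is not the exact thresholds, but that a surprising delta triggers a human look at the diff.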
Properties of good unit tests
I’ve said calculating code coverage is overrated. But I’m also a strong believer in good testing. So what makes for good tests? This is a non-exhaustive list of some characteristics I look for when writing tests, or reviewing tests others have written. What would you add? Independent. A test should work by itself, or when executed with others. Tests that depend on state configured by previous tests are broken. Deterministic. Obviously a test that has a non-deterministic result is not a good test.
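Those first two properties, sketched in code with invented names: “independent” means the test builds its own fixture rather than relying on state left behind by another test, and “deterministic” means any randomness is seeded so every run behaves identically.

```python
import random

def shuffle_deck(seed):
    rng = random.Random(seed)  # seeded RNG: deterministic per seed
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

def test_shuffle_is_deterministic():
    # Independent: all state the test needs is created right here,
    # not inherited from some earlier test's leftovers.
    assert shuffle_deck(42) == shuffle_deck(42)
    # Still a real shuffle: same 52 cards, just reordered.
    assert sorted(shuffle_deck(42)) == list(range(52))

test_shuffle_is_deterministic()
print("ok")
```

A test like this passes or fails for the same reason every single time, in any execution order.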
Complete code coverage
Yesterday I said that code coverage percentages are overrated. But there’s something in the concept of “code coverage” that gets at something valuable. Often when asked what percentage of code coverage I aim for, I respond with “Complete code coverage.” Which is not the same as 100%. If I were forced to define my concept of complete code coverage, it would probably go something like this: Every meaningful input condition is tested, under every meaningful state.
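“Every meaningful input condition” can mean far more test cases than lines of code. A tiny invented example: a one-line function that any single test would report as 100% covered, yet which has at least four meaningful input conditions (negative, zero, just below the boundary, at the boundary, and typical).

```python
def is_adult(age):
    return age >= 18

# One line of code -- but several meaningful input conditions:
cases = {-1: False, 0: False, 17: False, 18: True, 40: True}
for age, expected in cases.items():
    assert is_adult(age) == expected
print("all meaningful inputs covered")
```

Complete coverage in this sense is about the input space and the program’s states, not about which lines happened to execute.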
Code coverage percentage is overrated
“Percent code coverage” seems to get a lot of attention. A common question, anywhere programmers congregate, seems to be “What’s the ideal test coverage percentage?” Some say 80%. Some “purists” say 100%. What’s the right answer? Well, what do these numbers represent? Typically (depending on the exact language/tooling you’re using) it represents the percentage of lines, conditional branches, or statements executed during the execution of a test suite. In theory, proponents say, 100% test coverage would mean that you’ve tested 100% of possible execution paths.
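In practice, though, 100% line coverage does not mean 100% of behaviors are tested. A classic illustration (an invented example, not from the post): this function is fully covered by a single test, yet still harbors a division-by-zero bug on an input that test never exercises.

```python
def average(values):
    return sum(values) / len(values)  # crashes on an empty list

# This one test executes every line: coverage tools report 100%.
assert average([2, 4, 6]) == 4

# Yet the untested empty-list input still blows up:
try:
    average([])
except ZeroDivisionError:
    print("bug survived 100% line coverage")
```

Line coverage tells you which code ran, not which inputs and behaviors you checked.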
A randomly failing test is a failing test
You’ve just spent several hours building an awesome new feature for your application. You wrote automated tests for it. You manually tested everything in the staging environment. You even asked your colleague to stress-test it for you, and they couldn’t make it crash. Perfect! Now you push it up to your version control system, and… A test fails! Not one of your new tests. An old test. Dagnabbit! You must have introduced a regression somewhere along the line.
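Often the culprit behind a randomly failing test is hidden nondeterminism rather than a real regression. A hedged, invented example: asserting on wall-clock timing makes the outcome depend on machine load, so the test fails only when CI is busy. The fix is to assert on the result and remove the nondeterminism, not to re-run until green.

```python
import time

def timing_style_check(work):
    # Antipattern: a test built on this would assert "finishes within
    # 10ms" -- which passes locally and fails on a loaded CI runner.
    start = time.monotonic()
    result = work()
    elapsed = time.monotonic() - start
    return result, elapsed

def deterministic_check(work):
    # Better: assert on the computed result, which never varies.
    return work()

assert deterministic_check(lambda: 2 + 2) == 4
print("deterministic assertion passes every run")
```

Either way, a test that fails one run in twenty is telling you something is broken — in the test or in the code — and deserves the same attention as a test that fails every time.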