A randomly failing test is a failing test
You’ve just spent several hours building an awesome new feature for your application. You wrote automated tests for it. You manually tested everything in the staging environment. You even asked your colleague to stress-test it for you, and they couldn’t make it crash. Perfect!
Now you push it up to your version control system, and…
A test fails!
Not one of your new tests. An old test. Dagnabbit!
You must have introduced a regression somewhere along the line.
So you poke around. Maybe you roll back some things, or pull out the git bisect hammer. You can’t find the problem. Finally, out of frustration, you make some trivial change to your code, and push it to your VCS again. This time the tests pass.
!@#$%
This is the havoc wreaked by randomly failing tests.
A randomly failing test is a failing test, and ought to be treated as one. Not only can it waste time, as in my example, but it can also obscure other, genuine failures. In practice, this means a randomly failing test is often worse than no test at all.
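To make this concrete, here is a minimal, hypothetical sketch of how a test can fail only some of the time: it asserts on wall-clock timing, so it passes on a fast, idle machine and fails whenever a run happens to be slow. The function `generate_report` and the 50 ms threshold are invented for illustration, not taken from the story above.

```python
import random
import time
import unittest


def generate_report(rows):
    # Stand-in for real application code: the work takes a variable
    # amount of time depending on input size and machine load.
    time.sleep(random.uniform(0.0, 0.1))
    return [str(r) for r in rows]


class ReportTest(unittest.TestCase):
    def test_report_is_generated_quickly(self):
        # Flaky: asserts on wall-clock time, so the outcome depends on
        # how busy the machine is rather than on the code being correct.
        start = time.time()
        generate_report(range(100))
        self.assertLess(time.time() - start, 0.05)


if __name__ == "__main__":
    unittest.main()
```

Run it a handful of times and it will pass on some runs and fail on others, which is exactly the behaviour that sends you hunting for a regression that was never there.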