What can we learn from the Facebook outage?

Facebook has revealed the cause of their 6-hour outage: human error. I hope those pesky humans learned their lesson! Or is there more to it?
If you’re like me, you may not have noticed the Facebook/Instagram/WhatsApp outage first-hand. But you’re probably not like me, and you probably found this outage to be a personal nuisance.
Now that things are returning to normal, Facebook has given us a small glimpse into what happened:
Our engineering teams have learned that configuration changes on the backbone routers that coordinate network traffic between our data centers caused issues that interrupted this communication.
Ah! Human error. Those pesky humans. I hope they learned their lesson.
Not so fast.
It may be easy and tempting to point to the human (or group of humans) who made the configuration error and call it a day. But the vast majority of technical failures, be it in IT systems, aircraft accidents, automobile accidents, or burned cupcakes, come down to human error. If we end our investigation there, we’ll never really improve.
So if the human who made the configuration mistake is not to blame, who or what is?
Here is a series of questions you can ask the next time you’re faced with this dilemma, to put you on the track to a more “human-proof” system:
- Why did the system allow a human to make an erroneous configuration change?
- Why was a human error able to have such a broad impact?
- What safeguards can we put in place to prevent such errors from occurring?
- What systems can we put in place to detect such errors before they cause a catastrophic failure?
- What backups can we put in place so that when there’s a similar failure, we can continue to operate?
- How can we improve the system so that we can detect such failures more quickly in the future?
- How can we recover from such failures more quickly next time?
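The first three questions above point toward automated safeguards. As a minimal, hypothetical sketch of that idea, here is a pre-deployment check that refuses configuration changes violating basic invariants; the function name, route model, and thresholds are all illustrative assumptions, not anything Facebook has described:

```python
# Hypothetical safeguard: validate a proposed route-configuration change
# against invariants that a single human error should never violate.
# The rules and the 50% threshold are illustrative, not a real system's checks.

def validate_config_change(current_routes: set[str], proposed_routes: set[str]) -> list[str]:
    """Return a list of violations; an empty list means the change may proceed."""
    violations = []

    # Invariant 1: never withdraw every route at once.
    if current_routes and not proposed_routes:
        violations.append("change withdraws all routes")

    # Invariant 2: cap the blast radius of any single change.
    removed = current_routes - proposed_routes
    if current_routes and len(removed) / len(current_routes) > 0.5:
        violations.append(f"change removes {len(removed)} of {len(current_routes)} routes (>50%)")

    return violations


# Usage: an erroneous "withdraw everything" change is flagged before it takes effect.
current = {"10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"}
proposed: set[str] = set()
print(validate_config_change(current, proposed))
```

The point of a guard like this is not that it catches every mistake, but that it turns a silent, instant catastrophe into a loud, reviewable rejection.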
I’m sure you can use your imagination to double or triple this list. The point is: Even when human error is involved (and it usually is), that should never end your investigation, or be considered the root cause. Do a blameless postmortem, and solve every problem twice.