Communicating risk

Happy hour thoughts about a sad event

Tú eres el malo ("You are the bad guy")

It feels good when I can walk away from a project with the delusion that I’ve given my users everything they need to use a system effectively and safely.

Once in a while, sad news comes to collect its ransom.

Hours later at a bar:

Norm: “I know someone who knows someone who says that it’s illegal to operate a ship that is known to be unfit to navigate.”

Cliff: “That captain is in trouble. He must have been asleep at the wheel.”

Me: “Maybe. This is tragic. As a society we do a lot to try to prevent these situations, but you’d have to boil the oceans to completely remove the risk. Most likely, the crew lost control because of an unexpected mechanical problem. Less likely are the scenarios that you describe. But we’ll only know once the experts dig through the messy details.”

(Ok, I didn’t really say that. I’m mostly successful at knowing the difference between being accurate and being interesting in social situations. But I did think it.)

As a technical writer, I can’t express Norm’s or Cliff’s opinions. My users act on my content, and those opinions, despite their intention to prevent future tragedies, could lead to more of them. Only hours after the incident, their opinions aren’t even based on fact.

And as a tech writer, I’d hesitate to express even my own response to Cliff and Norm. Even if it ends up being closer to the truth, it’s still not actionable. Is a shoulder shrug an action?

I’ve watched many engineers look longingly at the Leave Meeting button during my interrogations into edge cases and paths to error conditions. I make them suffer so I can understand the probability of a risk and its potential impact. I’m not going to waste my users’ time on a low-risk, low-impact situation. I’m definitely going to let users know when the risk, the impact, or both are considerable.

To be honest, I’ve never had to warn my users about possible death or injury, even for that one audience who dealt with life and death daily. Important and useful as it was, the system we were documenting for them was on a non-critical path of their day-to-day work lives. And they were already heavily trained to prevent and fix bad situations.

Yeah, I feel obligated to give my users something that helps them. But is this a case where even the best technical documentation can’t save the user from the unpredictable?