When ignorance does more than you think

Control room

Unstudied conditions are avoided as vigilantly as possible—right now, when it matters—by control room operators of large critical infrastructures mandated to operate reliably and safely systemwide. Having failed to fail only because an operator was behaving ignorantly is orthogonal to high reliability management.

That said, ignorance has differentiated functions in large socio-technical systems—but in ways not captured by the happy-talk of trial-and-error learning, Experiment!, and innovation-starts-with-ignorance.

Five under-recognized positives deserve highlighting:

(1) A longstanding proposition in organization theory and management has been that operators and managers cannot know everything and that something like bounded rationality is required in order to decide and manage. More, a mandate for comprehensive decision-making would undermine reliability management at the complex system level, not enhance it. It is in these senses that the operations of other infrastructures with which a control room is interconnected are “unstudied conditions” for that control room. Either those connected services are there or, if not, the control room has to work around their absence. Real-time management by a control room is so knowledge-intensive that its operators cannot be expected to understand just as intensively how the other interconnected infrastructures and their control centers operate.

(2) The comfort zone of control room operators includes managing unmeasured or unmeasurable uncertainties so as to stay out of unstudied conditions (unknown unknowns), about which system operators are by definition ignorant. These uncertainties are not denominated as calculable risk, but operators may still know more about consequences than likelihoods, or vice versa. Operators undertake uncertainty management because they differentiate uncertainties, even though outsider experts often collapse those uncertainties into ignorance per se.

(3) Large system control operators do innovate, and positively so, within their comfort zone. We see their improvisation in the control room’s just-in-time assembly of options under conditions of high volatility (high unpredictability or uncontrollability in the outside environment). In fact, the evolutionary advantage of a control room lies in the skills and expertise of its operators to redesign, operationally and in real time, what is otherwise inadequate technology so as to meet the reliability mandates of the infrastructure.

There is a kind of learning-through-error-management going on, but the learners do so by avoiding having to test the limits of system survival.

What control operators of critical infrastructures do not do—or resist doing—is classic trial and error learning and experimentation. Why? Because professionals will not deliberately chance the first error becoming the last trial. Certainly the view—“It’s almost impossible to innovate if you’re not prepared to fail”—is orthogonal to the innovation-positive we observed in critical infrastructures.

(4) That said, some unknown unknowns may be key to something like an infrastructure’s immune system for managing under risk and uncertainty. The complex and interconnected nature of large socio-technical systems suggests that “low-level” accidents, lapses or even sabotage may be underway that systemwide reliability professionals, like control room operators and their support staff, do not (cannot) observe, know about, or otherwise appreciate. This is less “ignorance is bliss” than ignorance as mithridatic (immunizing through difficulty and inexperience rather than, say, homeopathically).

(5) Last but not least: When unstudied conditions and unknown unknowns are feared because of the awful consequences associated with behaving ignorantly, the ensuing dread promotes having to manage dangerous complex technologies more reliably and safely than theories of tight coupling and complex interactivity suggest. Wide societal dread of systemwide failure takes on a positive function in these cases, without which the real-time management of dangerous technologies would not be warranted, let alone warrantable.

(It’s at this point that someone complains I’m advocating “the manufacture of dread for the purposes of social control through taken-for-granted technologies.” Which is oddly unreflexive on their part if they really believe what they say, since the very infrastructures they criticize enable them to render such judgment, here and now, and since their criticisms are presumably then a form of artificial negativity manufactured for the same social control.)

The upshot of these five positives is this. There are cases where experimentation and innovation are recast in the face of unstudied conditions. The resulting differences, however, are many and vary substantially from what outsiders typically narrow down to Experiment! Adapt! Be resilient! Indeed, when you think about any valorized list of “key strategies important in the face of ignorance,” you realize just how conservative many outsider imaginaries are: if such lists are said to capture almost everything really important, then maybe nothing’s all that important after all.


This article was first published on Emery Roe’s blog. It is one of a series of blog posts by participants following the STEPS symposium The Politics of Uncertainty: Practical Challenges for Transformative Action.

