Initial inputs determine outcomes in emergencies

Botterell’s Law #2: “The Problem Is at the Input”

By Art C. Botterell

A plan is a machine for turning a bad situation into a better one. Given a hazardous event, we want to get back to safety. Given a lack of information, we want to achieve understanding. Given hunger, we want lunch. And so on.

That means our plans depend greatly on the initial conditions, the starting point, the inputs. Plans make certain assumptions about those “givens.” And the Second Law is a reminder that when our plans, our processes, our systems fail us, more often than not it’s because that initial reality didn’t match our assumptions.

In computer science, “Garbage in, garbage out” is a time-worn maxim. I first reframed that principle (and its corollary, “Nothing in, nothing out”) as the Second Law to offer a debugging guide for communications and warning systems. It’s relatively rare that our telecommunications technology fails us, if only because we have so much of it.

Much more frequently, the trouble is that the givens, the expected inputs, either turn out to be something other than what we assumed or never arrive at all.

For example, as part of a recently released study of tsunami warning systems in the United States, the National Research Council examined the events of June 14, 2005, when an earthquake off the California coast caused two separate tsunami warning centers, one in Alaska and another in Hawaii, to issue messages.

The Alaska center, responsible for the U.S. West Coast, issued a tsunami warning. The Hawaii center, responsible for the rest of the Pacific Basin, issued a bulletin saying there was no tsunami hazard.

Both messages were correct for their respective target areas, but both went out within minutes of each other over the same NOAA data circuit. The result was confusion and no small amount of finger-pointing afterwards.

The problem was multiple disparate inputs to a system the users assumed had only one input at a time. Users of the NOAA Weather Wire system, including emergency managers and media outlets, were accustomed to receiving single authoritative messages from whatever forecast office was responsible for their area or, in the case of severe storms and hurricanes, from a single specialized forecasting center.

Thus many of them assumed that there would be a single source for any event, and that any subsequent message would be an update from that same source. In this particular case, the nature of the inputs, coming from multiple sources with different responsibilities, was different from what was expected.

More often, especially in the case of public warning systems, the expected input never arrives at all. For a variety of reasons there can be great reluctance to actually issue a public warning.

The usual justifications are either a (scientifically unfounded) fear of causing panic or a degree of uncertainty regarding the threat. In most cases, I suspect, the real root problem is fear of embarrassment or political repercussions.

Whatever the reason, all our investments in warning technology frequently come to nothing because the expected input from the responsible official never arrives to set them off.

About more than gadgets

But the Second Law applies to much more than just technology. Take our emergency operations plans and, as a particular example, our EOC activation procedures.

Most disaster plans provide for someone to make a judgment, based on information received, that it’s necessary to disrupt routine operations and convene folks at the EOC. That can be expensive and politically unpopular, especially if not everyone agrees that the situation actually warrants interrupting their regular schedule (see Botterell’s Fifth Law: “The Worst Case is the Easiest”).

As a result, the threshold for believing that EOC activation is really necessary tends to be fairly high. Some events, like earthquakes, are intrinsically self-notifying, but many aren’t.

So the EOC, and all the plans and procedures that rely on it, are contingent on the right person getting the right information soon enough to ensure that the EOC is opened in time.

But in case after case we hear afterward, “If we’d known, we would have opened the EOC sooner.” The problem wasn’t with the EOC plan; the problem was at the input to the whole process.

No shortage of inputs

In many cases the input is information, but not always. How many houses have burned to the ground because the expected water pressure wasn’t there at the hydrant?

For that matter, how much emergency equipment bought with DHS grants has decayed and even failed because the assumed input of maintenance money turned out not to be forthcoming? And might the same be said of much of our public infrastructure, our highways and bridges?

The point of the Second Law is that the most vulnerable parts of any plan are the ones the planners either don’t control or take for granted: the inputs.

One afternoon in North Carolina after a hurricane I got to walk around with a senior White House Communications Agency advance man as he surveyed the venue for a Presidential visit. “What could go wrong here?” he kept muttering, over and over. “What could go wrong here?” It’s not a bad mantra for any of us.

It’s reassuring to have plans, but those plans are vulnerable if the initial conditions and inputs we planned for don’t materialize. We need to be constantly asking ourselves what will happen if we don’t get the information, the people, the equipment, the behavior, the political support, the time that we expected.

It’s normal to approach planning in the form of “Given X, we will do Y.” Since a plan is a machine for getting from one situation to another, that may be inherent in the nature of plans.

But at the same time, we need to be constantly testing and reexamining our assumptions about those “givens.” Because when our plans fail, most often we’ll discover that the problem was at the input.