To explain failure, system models do not need a component to break or a human to err. In fact, they do not have to rely on anything "going wrong," or on anything being out of the ordinary. Even though an accident happened, nothing might really have gone wrong, in the sense that nothing that happened was out of the ordinary. Accidents, in other words, are typically the by-product of the normal functioning of the system, not the result of something breaking down or failing inside of that system. In that sense:

• There is not much difference (if any) between studying a successful or a