Friday, October 25, 2013

System accident


A system accident is an "unanticipated interaction of multiple failures" in a complex system. This complexity can be either technological or organizational, and often has elements of both. A system accident can be very easy to see in hindsight, but very difficult to see in foresight: ahead of time, there are simply too many possible pathways along which failures can interact.
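To make the "too many pathways" point concrete, here is a back-of-the-envelope illustration (not from the article itself): even counting only pairwise and three-way interactions among components, the number of combinations a designer would have to anticipate grows combinatorially.

```python
import math

# Pairwise interactions among n components grow as n choose 2 = n(n-1)/2.
# Higher-order interactions (triples, quadruples, ...) grow faster still,
# which is why exhaustive foresight becomes impractical in complex systems.
for n in (10, 100, 1000):
    pairs = math.comb(n, 2)
    triples = math.comb(n, 3)
    print(f"{n:>5} components: {pairs:>8} pairs, {triples:>12} triples")
```

At 1,000 components there are already roughly half a million pairs and over 166 million triples, and real systems also interact with operators, procedures, and the environment.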

These accidents often resemble Rube Goldberg devices in the way that small errors of judgment, flaws in technology, and individually insignificant failures combine to form an emergent disaster. System accidents were described in 1984 by Charles Perrow, who termed them "normal accidents" and identified their two main characteristics: interactive complexity and tight coupling. James T. Reason extended this approach with the study of human reliability and the Swiss cheese model, both now widely accepted in aviation safety and healthcare.
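As a rough illustration of the Swiss cheese model (a minimal sketch with made-up probabilities, not drawn from Reason's own work): an accident occurs only when the holes in every defensive layer line up at once, and tight coupling can be caricatured as a common cause that degrades all layers together.

```python
import random

def accident_occurs(layer_failure_probs, common_cause_prob=0.0):
    """Accident only if every defensive layer fails at once.

    common_cause_prob caricatures tight coupling: a single shared
    stressor that defeats all layers simultaneously.
    """
    if random.random() < common_cause_prob:
        return True
    return all(random.random() < p for p in layer_failure_probs)

def estimate_rate(layer_failure_probs, common_cause_prob=0.0, trials=200_000):
    """Monte Carlo estimate of the accident rate."""
    hits = sum(
        accident_occurs(layer_failure_probs, common_cause_prob)
        for _ in range(trials)
    )
    return hits / trials

if __name__ == "__main__":
    layers = [0.1, 0.05, 0.02]  # hypothetical per-layer failure probabilities
    # Independent layers multiply: 0.1 * 0.05 * 0.02 = 1e-4.
    print("independent layers:", estimate_rate(layers))
    # A 1% common cause dominates the independent product.
    print("with common cause: ", estimate_rate(layers, common_cause_prob=0.01))
```

The design point the sketch makes: stacking independent defenses multiplies small probabilities into a very small one, and that multiplicative safety is precisely what tight coupling and common-cause failures erode.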

Once an enterprise passes a certain point in size, with many employees, specialization, backup systems, double-checking, detailed manuals, and formal communication, employees can all too easily resort to protocol, habit, and "being right."


