Ralph Peck wrote an article titled "Where Has All the Judgment Gone?" If you haven't read it, please stop reading this and go read that. I forgive you for leaving.
This article does not pretend to hold a candle to that paper; it's just a different article. It's almost entirely based on a fascinating book I just finished reading, "Design Paradigms: Case Histories of Error and Judgment in Engineering," written by Henry Petroski, who teaches at Duke University.
Most of the book pertains to the scaling up of ships and bridges over time, and how a lack of good judgment led to the failure of many of these "bigger and better" designs. Petroski opines, convincingly, that significant bridge failures occur, on average, every 30 years. That's pretty shocking!
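To see why simple scale-up is dangerous, consider the classic square-cube argument. The sketch below is my own illustration with assumed material values, not an example taken from the book: when every dimension of a member is scaled up by a factor k, its self-weight grows with k cubed while the cross-section resisting it grows only with k squared, so the stress caused by self-weight grows roughly in proportion to k while the strength of the material stays the same.

```python
# A minimal sketch (my own illustration, not from Petroski's book) of the
# square-cube scaling hazard: for a simply supported rectangular beam carrying
# only its own weight, the peak bending stress grows linearly with the scale
# factor, while the allowable stress of the material does not grow at all.

UNIT_WEIGHT = 25_000.0   # N/m^3, roughly reinforced concrete (assumed value)
ALLOWABLE = 20e6         # Pa, assumed allowable bending stress

def self_weight_bending_stress(span_m: float, depth_m: float, width_m: float) -> float:
    """Peak bending stress at midspan of a simply supported rectangular beam
    carrying only its own weight: sigma = M / S with M = w * L^2 / 8."""
    w = UNIT_WEIGHT * depth_m * width_m              # self-weight per unit length, N/m
    moment = w * span_m ** 2 / 8.0                   # midspan bending moment, N*m
    section_modulus = width_m * depth_m ** 2 / 6.0   # S = b*d^2/6 for a rectangle, m^3
    return moment / section_modulus                  # Pa

for k in (1, 2, 4, 8, 16):
    sigma = self_weight_bending_stress(10.0 * k, 1.0 * k, 0.5 * k)
    print(f"scale x{k:>2}: self-weight stress = {sigma / 1e6:6.2f} MPa, "
          f"margin against allowable = {ALLOWABLE / sigma:5.1f}")
```

A geometry that "worked" at one scale can therefore fail at a larger one, even when every rule that was satisfied at the smaller scale is satisfied again.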
It's something of a paradox: you can't have good judgment unless you have significant experience in your field, yet you gain much of that experience by exercising judgment and learning from its failures. Here, I try to share some of the wisdom captured in the book. In a way, this can be considered the CliffsNotes version of the book. By the way, I include no references in this article. If you want references, you'll have to get them from Petroski's book.
Petroski quotes Lev Zetlin, who stated:
“Engineers should be slightly paranoic during the design stage. They should consider and imagine that the impossible could happen. They should not be complacent and secure in the mere realization that if all requirements of the design handbooks and manuals have been satisfied, the structure will be safe and sound”.
Petroski states that “improved reliability in design will only come when our already highly developed analytical, numerical, and computational design tools are supplemented with improved design-thinking skills. While artificial intelligence and expert systems have been promised as solutions to the problem of human error, the design of computer-based methods will itself benefit from an understanding of human error and how to reduce it”.
In the book’s single reference to Ralph Peck, Petroski writes that “nine out of ten recent (dam) failures occurred not because of inadequacies in the state of the art, but because of oversights that could have been avoided”. Peck pointed out that the “problems are essentially nonquantitative” and that the “solutions are essentially non-numerical”. Peck acknowledged that improvements in analysis and testing might be profitable, but felt that it was also likely that “the concentration of effort along these lines may dilute the effort that could be expended in investigating the factors entering into the causes of failure”.
Petroski writes that a new design "may prove to be successful because it has a sufficiently large factor of safety (which, of course, has often rightly been called a 'factor of ignorance'), but a design's true factor of safety can never be known if the ultimate failure mode is unknown". That's pretty profound. Let's let that sink in.
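To make that concrete, here is a small sketch with made-up numbers (my own illustration, not Petroski's): a factor of safety is just the ratio of estimated capacity to estimated demand for a particular failure mode, so the true factor of safety is the minimum over every credible mode, including the ones nobody thought to analyze.

```python
# A small sketch with made-up numbers (my own illustration, not Petroski's):
# a factor of safety is capacity / demand *for a given failure mode*, so the
# governing (true) factor of safety is the minimum over every credible mode,
# including any mode the designer never analyzed.

def factor_of_safety(capacity: float, demand: float) -> float:
    """Conventional factor of safety for one failure mode."""
    return capacity / demand

# Hypothetical modes the designer did check (capacities and demands in kN):
analyzed = {
    "beam bending":     factor_of_safety(capacity=5_000.0, demand=2_000.0),  # 2.5
    "shear at support": factor_of_safety(capacity=3_600.0, demand=2_000.0),  # 1.8
}

# A mode nobody analyzed, say fatigue of a connection detail:
overlooked = {
    "connection fatigue": factor_of_safety(capacity=2_200.0, demand=2_000.0),  # 1.1
}

print("FS as reported:", min(analyzed.values()))                      # 1.8
print("True FS:       ", min({**analyzed, **overlooked}.values()))    # 1.1
```

The designer reports 1.8; the real margin is 1.1. Had the overlooked mode's capacity fallen below the demand, no amount of conservatism in the analyzed modes would have mattered.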
TO BE CONTINUED