Medgadget editor Dan Buckland is in training to become a physician while trying to remain an engineer. Here he talks about how his training in different thinking styles leads to different failure analysis strategies.
In my last article I defined three archetypes, Physicians, Scientists, and Engineers, and discussed how they approach problems differently. This post continues along the problem-solving timeline and discusses how the different archetypes respond to unexpected data or failure. While similar to problem solving, failure analysis is a more introspective skill set that entails a different set of biases. Something may have gone wrong, and the task is to find the error, which adds a dimension of responsibility that the discussion of problem solving in the previous post doesn’t necessarily address. Also, presenting at the meetings described below is a potential outcome that many people weigh when contemplating a new idea: how are they going to defend this decision if things don’t go well, and in what environment will they be defending it?
The Physician: When MDs talk formally about failure or mishaps, it is usually in a meeting called “Morbidity and Mortality,” referred to by the initials “M&M.” The M&M is considered so important to the medical profession that the ACGME (Accreditation Council for Graduate Medical Education), the organization responsible for accrediting residencies in the US, requires that they be held regularly during a physician’s training. In the ideal form of these meetings, a case that resulted in an undesired outcome is presented to the entire medical staff of a department or organization. The group then questions the responsible staff to determine whether the outcome was avoidable and, if so, where the fault lies. Many states specifically exempt these meetings from being used to determine legal liability (1), in an effort to emphasize that their purpose is not blame-laying but quality control and education. Blame is often assigned in these meetings, sometimes even to the patient, and it is often accepted because the discussions are understood to take place within a peer group. A lot depends on the level of respect within the group, but in my experience what prevents these meetings from turning accusatory is that the Physician starts from an assumption of competence. In contrast to the other two groups, Physicians are more likely to give their colleagues the benefit of the doubt. The questions during these sessions are very collaborative, and the questioners often begin by saying that they don’t know what they would have done differently in the same situation. However, this approach can be limiting to new ideas: it is often easier to defend a choice that followed the standard of care, even if that choice was objectively worse for the patient.
The Scientist: Most scientific groups have some variation of the “Weekly Lab Meeting.” In these small group sessions, a group member presents an in-progress project or recent data. Confusing results and unexpected data are presented in the hope that the group can provide technical support or advice for future experiments. If the results are unexpected, it is common to question whether the experiments were performed properly and whether all the appropriate controls were run to establish the validity of the tests. Here, the questioners do not assume the competence of the presenter without data showing that the techniques were properly executed. This is one form of peer review and is the basis for the quality control that goes on in science. In fact, the motto of the Royal Society (the UK’s academy of sciences) is “Nullius in verba,” roughly “Take nobody’s word for it” (2), including your closest colleagues. These meetings can become very heated, but what usually stops someone from being too cruel is knowing that they will have to stand in front of the same group at some point in the future and present their own data. Still, most academic scientists can tell you a story of a lab meeting where someone went too far and a grad student or post-doc was later found crying in a cubicle.
The Engineer: Once again, the Engineer functions somewhere between the two previous groups. The names differ between industries and organizations, but most groups hold some form of weekly or monthly “Incident Review” (3). In these meetings, “failure” can mean anything from the catastrophic collapse of a whole system to a validation test in which some components performed outside of specification. In contrast to the problem-solving approach, here the Engineer is focused on finding the root cause of the error. Often a no-fault approach is taken, to encourage individuals to speak up without fear of blame. In systems involving human users, many Engineers are trained to follow a “Swiss Cheese” model of fault analysis. In this model it is rarely a single error that causes a failure; rather, it is several errors, each from a different source, that align in such a way that the failure occurs without being caught ahead of time. The underlying assumption is that, given the way a system is set up, failure will inevitably occur at some point, so it is not appropriate to level all the blame at the final fault when the precipitating errors are to blame as well. In these meetings, like Physicians, Engineers assign responsibility but not blame (4). Like Scientists, they are interested in the root cause of a failure and do not start with an assumption of competence. However, as mentioned, in the Engineer’s formulation it is not necessarily the individual’s fault that they lacked competence. These Incident Reviews are usually kept orderly by a top-down approach, and it is considered the program manager’s or systems engineer’s responsibility to prevent them from becoming acrimonious.
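The Swiss Cheese intuition, that a failure reaches the end of the line only when the holes in every independent defensive layer happen to line up, can be sketched as a toy simulation. This is purely a hypothetical illustration I am adding (the layer names, catch rates, and function names are invented for the example, not drawn from any real incident-review tool):

```python
import random

def slips_through(layer_catch_probs, rng):
    """An error causes a failure only if every defensive layer
    independently misses it (the holes in all slices line up)."""
    return all(rng.random() > p for p in layer_catch_probs)

def failure_rate(layer_catch_probs, trials=100_000, seed=0):
    """Estimate how often an error evades all layers."""
    rng = random.Random(seed)
    failures = sum(slips_through(layer_catch_probs, rng) for _ in range(trials))
    return failures / trials

# Three independent layers, each catching 90% of errors.
# Analytically, an error evades all three about 0.1^3 = 0.1% of the time,
# which is why blaming only the last layer misstates the system's risk.
print(f"observed failure rate: {failure_rate([0.9, 0.9, 0.9]):.4%}")
```

The point of the sketch is the multiplication: each added layer shrinks the failure rate, but never to zero, which matches the model's assumption that failure is eventually inevitable and that responsibility is spread across every layer with a hole in it.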
These small group experiences often carry over to presentations at large national meetings. Having attended national meetings of all three professions, I have noticed that questions at engineering conferences tend to be more confrontational than at medical conferences, but less confrontational than at scientific conferences. While not failure analysis per se, the peer-review aspect of these meetings is at times critical, and it tends to follow the framework of each archetype’s trained method.
As mentioned above, failure analysis is different from the problem solving discussed in my last article, but there are similarities in the approaches. In my next article I will try to bring everything together and will propose some ideas on how Physicians, Scientists, and Engineers can work better together, using real-world organizations as models.
Footnotes:
(1) See Connecticut’s Medical Liability laws.
(2) Don’t want to take my word for it? Good.
(3) Other names include Lessons Learned, Fault Tree Analysis, Wishbone, or Root Cause Analysis, depending on the method used. Ironically, engineers in non-medical fields often call them “Postmortems.”
(4) Hopefully this doesn’t give the reader the sense that Engineers don’t feel a sense of obligation to prevent failure with their work. They do. Next time you meet a Canadian engineer, ask them about their Iron Ring.