Errors and “Experts”

Moving Towards Modern Medical Education and Training — Part 5


When we review the historic error rates for radiology trainees, we begin to understand the challenge of achieving expert performance.

Waite et al., in an excellent recent review article concerning medical errors in diagnostic image interpretation (1), inform us that, “Garland, in groundbreaking work in 1949, reported the error rate in diagnostic imaging interpretation to be 33.3% in positive studies” (2). When the denominator is shifted to include all positive and negative studies that might be seen “in a day’s work”, the error rate is diluted to approximately 4%, with no accounting for the effect of false positive exams.
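The arithmetic behind that dilution is worth making explicit. A minimal sketch follows; the roughly 12% share of positive studies is an assumption chosen only to reconcile the two quoted figures, not a number taken from Garland’s paper:

```python
# Illustrative arithmetic only: shows how a 33.3% miss rate on positive studies
# dilutes to roughly 4% when all studies in "a day's work" form the denominator.
error_rate_in_positives = 0.333  # Garland's reported miss rate on positive studies
positive_share = 0.12            # assumed fraction of studies that are positive

# False positives on negative studies are ignored, as noted in the text.
overall_error_rate = error_rate_in_positives * positive_share
print(f"Error rate across all studies: {overall_error_rate:.1%}")  # -> 4.0%
```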

This initial work by Garland came well before what might be called “complex imaging” was introduced; his data concerned the interpretation of chest x-rays. Currently, thousands of images per study are generated by computed tomography, MRI and ultrasonography, creating vast opportunities for error. The cognitive task is enormous. Added to that raw image count, long workdays and high RVU assignments clearly lead to cognitive overload.
 
That overload is a well-recognized contributing factor to the observed error rates in diagnostic imaging interpretation. In addition, this modern imaging technology, which dates from the 1970s, arguably has a greater impact on medical decision-making and outcomes than the less complex studies of the pre-“high-tech” imaging era.

Finally, this error rate in imaging interpretation is likely equivalent to the error rate in medicine as a whole (1), a further indication of a general challenge that begs to be met by improved graduate, postgraduate and lifelong competency-based education and evaluation.
 
In the 70 years since, the error rate in diagnostic imaging interpretation, consistently confirmed to sit somewhere around 30%, has not declined (1, 3). Repeatedly rehashing the debate about what might or might not constitute an acceptable rate of error will not mitigate the reality of such mistakes.

Such enlightened discussion as has occurred to date only frames a very significant problem: is the task of reducing this error rate insurmountable? The scope of the problem has not changed in 70 years (1–3), so does this constitute a tacit acceptance of these circumstances?

  • Does this mean that competency in the practice of medicine, as judged by peer equivalency, accepts an error rate of this magnitude?
  • Is allowing the persistence of such a rate of mistakes a goal of our graduate and postgraduate medical education and training?

Of course it isn’t, and our patients would not agree with such thinking either. It is time for a logical, consistent and deliberate response to reduce this seemingly “sticky” number.

At the outset of our work about 8 years ago, at the University of Florida College of Medicine Department of Radiology, we set out to create a competency-based evaluation rubric. We wrote down, in a spreadsheet, what we believed to be the 600–800 individual competencies (imaging scenarios) in critical care radiology.

We proposed that mastery in these individual conceptual competencies should reasonably define an overall competency in critical care radiology. As a corollary, we considered that proof of true mastery would ultimately create a population of defined experts, engaged in the interpretation of diagnostic imaging studies in the domain of critical care imaging. Such expertise should lead to a significant reduction in the rate of mistakes in this particular practice domain.
 
The core of our methodology is establishing predictable, disciplined search patterns that contribute to fact-based diagnostic synthesis, as opposed to highly biased synthesis; the premise is that this fact-based discipline will contribute meaningfully to accurate medical decision-making. This approach mirrors that of the Weed “coupler” theory discussed earlier in this series of essays (4–8). We have observed that when trainees internalize this discipline, their work product and professional development improve generally, beyond the critical care domain.
 
More specifically, since 2010, the UF Department of Radiology has tested about 190 of these individual, conceptual critical care competencies (imaging scenarios) in seven UF Simulations delivered in cooperation with the American College of Radiology (ACR) (Table 3B-1). The ACR became a developmental partner for the delivery of the simulation about 4 years after our own initiation, testing and development of the concept.

The simulation was prompted by a proposed requirement to fulfill the ACGME Milestone of demonstrating that trainees are adequately prepared for the Entrustable Professional Activity (EPA) of independent resident imaging study interpretation, which allows for remote attending radiologist supervision during after-hours practice. This EPA goal was scheduled for implementation in 2018 (9).

Our intent was to have a generally available, objective, reproducible and reliable tool responsive to this goal, in place well before the go-live date. The tested individual competencies are distributed across organ systems as summarized in Table 3B-1.

Crossover competencies in these organ systems include instances seen in the pediatric population, as well as other crossover topics such as those inherent in vascular and trauma scenarios. Over 500 residents in over 30 programs throughout the United States have participated in these Simulations. Individual programs range from about 5 to 12 residents per year, with a reasonable mix of small, medium and large programs included.
 
Each simulation includes 65 cases to be completed over an eight-hour, remotely supervised “shift” experience (Figure 3B-2). In each of the 65 cases, all of the DICOM images for the case are provided and interpreted using a complete suite of workstation diagnostic tools, including full multiplanar and 3D capabilities; that functionality is currently made possible by the Visage Corporation.

The trainees determine whether the study is normal or abnormal. If abnormal, the trainees must type into a simulated online consultation form exactly what they believe to be the essence of the information the study contributes to medical decision-making and what they would communicate in that regard to the referring provider. Beyond that, they are asked to add whatever suggestions for further evaluation or additional insights would be specifically relevant to the case at hand.

Figure 3B-2. Components of competency for critical care imaging interpretation and communication

More specifically, for a study posited to be abnormal, this interpretive exercise must include a suggested diagnosis or differential diagnosis, the likely next imaging steps if appropriate, and the level of communication required given the acuity of each particular scenario. In other words, the trainee is documenting consulting and reporting activity as it occurs routinely when the radiologist is an integrated part of the care team.
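Taken together, the elements just described amount to one structured record per case response. The following is a minimal sketch of such a record; the class and field names are illustrative assumptions, not the simulation’s actual data model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CaseResponse:
    """One trainee response to one simulated critical care imaging case.
    Field names mirror the elements described in the text; they are not
    the simulation's actual schema."""
    case_id: str
    abnormal: bool                              # the normal vs. abnormal call
    impression: Optional[str] = None            # essence of what the study contributes
    differential: list[str] = field(default_factory=list)  # suggested diagnosis/differential
    next_steps: Optional[str] = None            # suggested further evaluation, if any
    communication: Optional[str] = None         # required level/urgency of communication

# A study read as normal needs only the normal/abnormal call:
example = CaseResponse(case_id="sim-001", abnormal=False)
```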

To date, almost 50,000 written responses to individual diagnostic imaging scenarios have been graded and analyzed. The overall accuracy rate of this self-discovery Simulation exercise for residents in the second half of their R2 year is 66%, with a range of about 48% to 84%. That mean accuracy corresponds to a 34% error rate, which bears an uncanny similarity to the 33.3% error rate first revealed by Garland in 1949 (2) and consistently confirmed since (1, 3).
 
In 2018, radiology programs will be required to demonstrate that their trainees are ready for independent interpretation of imaging studies and to document that readiness objectively (9). However, the Milestones documents to date do not specify acceptable rubrics for the true competency evaluation that such “objective” documentation would require.

The ACGME Milestones “Envelope of Expectations” embodies the concept of demonstrable competency (Figure 3B-1). Those ACGME expectations assume, based on the published Milestones information, that such competency/readiness will be reached toward the end of the second year of training or sometime early in the third year, before the “Entrustable Professional Activity” of shift coverage with long-distance supervision is allowed.
 
Summary analysis of the simulation data in the following graph (Figure 3B-3), for programs that have tested residents in all 4 years, illustrates the mean test score range as a surrogate for accuracy in consulting and reporting performance:

Figure 3B-3. Performance by resident year (R1=first year of radiology) for programs testing residents in all 4 years. The average case score with 95% confidence intervals (A) as well as the percent of cases with score greater than 3 and greater than 7 (B) are shown.
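For readers who want to see how the kind of summary shown in Figure 3B-3 can be produced, a minimal sketch follows. The file name, column names and the normal-approximation confidence interval are assumptions for illustration, not the study’s actual analysis code:

```python
import numpy as np
import pandas as pd

# Assumed layout: one row per graded case response, with the trainee's year
# ("R1".."R4") in resident_year and the graded case score in score.
scores = pd.read_csv("simulation_scores.csv")  # hypothetical file

def summarize(s: pd.Series) -> pd.Series:
    mean = s.mean()
    # Normal-approximation 95% confidence interval for the mean case score (panel A).
    half_width = 1.96 * s.std(ddof=1) / np.sqrt(len(s))
    return pd.Series({
        "mean_score": mean,
        "ci_low": mean - half_width,
        "ci_high": mean + half_width,
        "pct_gt_3": (s > 3).mean() * 100,  # panel B: % of cases scoring > 3
        "pct_gt_7": (s > 7).mean() * 100,  # panel B: % of cases scoring > 7
    })

summary = scores.groupby("resident_year")["score"].apply(summarize).unstack()
print(summary.round(2))
```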

These data suggest a plateau (Figure 3B-3) that might be reasonably expected to persist beyond training without a true ongoing commitment and opportunity for “Deliberate Practice”.

Therefore, we must consider whether this flattening of performance indicates that postgraduate training programs accept such an error rate as defining the competency of their trainees as they embark upon the rest of their careers. That consideration seems reasonable, since error rates have not really improved under the current methodologies of continuing medical education and the evaluation rubrics used to “validate” ongoing, sustained improvement in competence toward expertise.

These persistent error rates, across all 30 programs participating in the simulations, can be viewed as “achievement gaps” or “education opportunity gaps”. We prefer to view them as educational opportunity gaps, created in part by the haphazard nature of our curriculum in diagnostic radiology training, the lack of a completely defined curriculum and deficiencies in our observational training methodology.
 
We have also analyzed the root cause of errors in just over 27K of the 43.5K responses to the Simulation. Observational errors (~75%) are roughly four times as common as interpretive errors (~20%), with the remaining ~5% consisting of combined observational and interpretive errors (13).
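As a back-of-the-envelope check of that breakdown, using only the percentages stated above (the implied counts are illustrative, not the underlying dataset):

```python
# Approximate counts implied by the stated percentages; illustrative only.
analyzed_responses = 27_000
breakdown = {"observational": 0.75, "interpretive": 0.20, "combined": 0.05}

for error_type, share in breakdown.items():
    print(f"{error_type:>13}: ~{share * analyzed_responses:,.0f} responses ({share:.0%})")

ratio = breakdown["observational"] / breakdown["interpretive"]
print(f"observational-to-interpretive ratio: ~{ratio:.2f}  (roughly fourfold)")
```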

I am on a mission to modernize postgraduate medical education. With my team at the University of Florida, I have spent the last eight years developing a competency-based curriculum and evaluation for radiology, grounded in modern learning theory. In this essay series, Moving Towards Modern Medical Education and Training, I examine in detail the pathway to modern learning and educational theory, and the outcomes of applying modern learning principles in this sphere of medical education.

Part 1: Medical Education: How Did We Arrive at the Current State?

Part 2: “See one do one teach one”

Part 3: Teaching to the Test

Part 4: Competency or Passing the Boards? Every patient wants an expert.


References:
 
1. Waite S, Scott J, Gale B, Fuchs T, Kolla S, Reede D. Interpretive error in radiology. AJR Am J Roentgenol 2017; 208:739–749.
2. Garland LH. On the scientific evaluation of diagnostic procedures. Radiology 1949; 52:309–328.
3. Berlin L. Accuracy of diagnostic procedures: has it improved over the past five decades? AJR Am J Roentgenol 2007; 188:1173–1178.
4. Weed LL. Medical records, patient care, and medical education. Irish Journal of Medical Science 1964; 462:271–282.
5. Weed LL. Medical records that guide and teach. N Engl J Med 1968; 278(11):593–600. doi:10.1056/NEJM196803142781105. PMID 5637758.
6. Weed LL. Medical records that guide and teach (concluded). N Engl J Med 1968; 278(12):652–657. doi:10.1056/NEJM196803212781204. PMID 5637250.
7. Weed LL. Medical Records, Medical Education, and Patient Care: The Problem-Oriented Medical Record as a Basic Tool. Cleveland, OH: Press of Case Western Reserve University; 1970.
8. Jacobs L. Interview with Lawrence Weed, MD: the father of the problem-oriented medical record looks ahead [editorial]. Perm J 2009; 13(3):84–89.
9. Deitte L (Vice Chair for Education, Department of Radiology, Vanderbilt University). Personal communication.
10. Knowles M. The Adult Learner: A Neglected Species. 3rd ed. Houston, TX: Gulf Publishing; 1984.
11. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall; 1984.
12. Bloom B, Englehart M, Furst E, Hill W, Krathwohl D. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. New York: Longmans, Green; 1956.
13. Sistrom et al. Unpublished data.
