Real Experts and Tolerable Error

Moving Towards Modern Medical Education and Training — Part 6

The radiology literature contains many articles about factors contributing to observational and interpretive errors, and it is useful to consider those causes when trying to eliminate them. However, radiology educators must use the summary data presented in Part 5 to consider that we may not truly be teaching our trainees to gather, with a high degree of confidence, the entire factual basis for calling a study positive or negative.

Also, our current training methods may not be effectively teaching how to apply those observations and facts in a particular clinical context. Recall the definitions of competency discussed earlier, as expressed by Dreyfus for Stage 1: “Novice: Rule-based behavior, strongly limited and inflexible.” It is at this stage that we must exploit the most appropriate adult learning skills to lay down the fundamental basis of diagnostic imaging interpretation and related decision-making: a logical, reliable, and perhaps scenario-driven observational discipline.

This discipline is currently subject to haphazard educational experiences across the range of our residency programs. Lack of observational discipline clearly leads to error.
A new educational approach that addresses these tenacious shortcomings might begin to improve the situation. Based on our simulation experience, new approaches are necessary so that our trainees can reduce the rate of both types of errors.

There should be an initial, intense focus on the elimination of observational errors, since proper observations are the factual basis for determining whether a study is positive or negative and form the core knowledge for a proper and useful thought synthesis and report when a study is positive.
The current knowledge gaps, whether of the achievement or the educational opportunity variety, can cause very significant harm. These demonstrated error rates in training suggest that peer equivalency, and therefore competency at the end of training (as competency currently seems to be defined), allows for a general error rate of about 20–25%. This is only slightly better than the traditionally cited mistake rate of one out of three on positive studies.

Further, careful analysis of the individual competency error rates in this simulation experience presents a different picture. Table 3B-2 shows specific error rates for 10 selected simulated competency concepts, calculated from pooled simulation data comprising just under 44,000 total responses.

*“Passing” a question requires 6 out of a possible 10 points per case. Therefore, a score below 4 points out of 10 clearly identifies an educational opportunity or achievement gap for that scenario, whether observational, interpretive, or both.

Figure 1

Error types for 10 representative case concepts (same data as Table 3B-2). Blue = Observational, Red = Both, Orange = Interpretive. The horizontal axis is scaled so that the right edge of the orange bar falls on the percentage of all cases with a score below 4.
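As a hypothetical sketch (not the actual simulation scoring code), the scoring rule in the footnote above can be expressed as a simple classifier: 6 or more of 10 points passes, a score below 4 flags a clear gap, and the names and thresholds here are only an illustration of that rubric.

```python
def classify_response(score: int) -> str:
    """Classify one simulated-case response under the stated rubric:
    6+ of 10 points passes; below 4 flags a clear achievement or
    educational-opportunity gap; 4-5 is a marginal, non-passing zone."""
    if score >= 6:
        return "pass"
    if score < 4:
        return "gap"
    return "marginal"


def gap_rate(scores: list[int]) -> float:
    """Fraction of responses in a pool that fall below the gap threshold
    (hypothetical helper for aggregating pooled simulation scores)."""
    return sum(1 for s in scores if classify_response(s) == "gap") / len(scores)
```

For example, `gap_rate([10, 3, 5, 2])` returns 0.5, since two of the four hypothetical responses score below 4 points.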

These represent a relatively small subset of specific competencies showing error rates two to three times greater than the 25–30% aggregate. These results tell a different story about potential harm than the aggregate mistake rate figures do.

It is generally held that while the error rate may be as high as about 30% in positive diagnostic studies, only about 5% of those errors have the capacity to cause significant harm in the general practice of radiology. It appears that this estimate does not take specific competency (scenario) error rates into account.

These individual error rates should not persist in the high-stakes practice of critical care radiology. They must be systematically discovered and eliminated both during the training and, considering the potential plateau we observe, beyond completion of training.

The discovery of areas in need of targeted improvement in training methodology requires a sufficient competency-based evaluation rubric. Curing the educational opportunity gap requires a new look at how we educate. The ultimate goal would be to markedly reduce the current rate of error, which has likely remained unchanged for 70 years, as suggested by a fairly large body of the radiology literature and as reflected in the limited reference material provided here (1–3).
In radiology education, the root problem with our “teaching to the test,” and its corollary, residents spending a disproportionate amount of time “studying to the test,” is that there is no curriculum.

Our diagnostic imaging educational system relies on very important but haphazard exposure to curriculum during the readout sessions with attending radiologists mainly during regular hours. This is the best part of the current educational environment.

Otherwise, the transfer of knowledge occurs through sporadically attended case conferences, often delivered by the trainees themselves (though with faculty input); standard textbook assignments at the discretion of individual programs; and lectures and the study of teaching files (also at the discretion of individual programs), with varying levels of access to such resources. This traditional and inadequate approach to diagnostic imaging education does not lend itself to covering a specific curriculum that one can then prove to have mastered.

What then happens to the process of trainee evaluation? The testing rubric also defaults to traditional methods, such as answering multiple-choice questions using just a few selected images.

Such an evaluation in no way reflects what an interpreter of diagnostic imaging studies must do, in real life situations, in order to meaningfully contribute to medical decision-making.

The trainees then, without a defined curriculum and with a background in their prior educational system of memorization and regurgitation of facts, default to discovering the likely test content and focusing on “gaming the test” rather than on what really makes a difference to patients: their true clinical competency.

This is entirely understandable.

Most trainees, throughout their entire educational experience from primary and secondary school up to and including graduate medical education, have learned how to game these types of tests. By middle school, most students have a clear understanding of such test-gaming techniques.

Postgraduate medical trainees must pass these board examinations in order to practice their chosen specialty, after having spent many years in training and, nowadays, accumulating considerable debt in the process. They will understandably do whatever it takes to be successful on these non-competency-based examinations.

Currently, virtually all radiology residents participate in the aggregation and dissemination of “recalls” of questions from old examinations, a significant number of which are repeated year over year. The current rubric neither establishes competency nor proves mastery of a defined curriculum.
The system is simply not what it should be for the faculty, the trainees, and most of all for the patients. Rather than evaluating competency, the testing rubrics become a “rite of passage” dictated by organized radiology and licensing authorities. The latter authorities are certainly an indispensable part of our profession.

As educators, we must help those authorities improve on the product delivered to the public under our professional banner. We have the tools. All we need is the will and effort as educators to make these improvements as soon as possible.
It is time to systematically eliminate those errors by developing specific mechanisms of curriculum delivery aimed at eliminating both observational and interpretive errors.

Furthermore, our systems must embrace and promote behaviors that emphasize the appropriate role of diagnostic imaging interpretation in interaction with our referring colleagues and patients, including but not limited to a well-crafted, grammatically correct, and fully understandable report. This is our actual work product as diagnostic radiologists, and testing these skills should be how we establish competency.
Dr. Garland told us 70 years ago our error rate was too high (2). Dr. Weed told us 50 years ago that facts are a commodity and we needed to educate physicians to think critically and to provide them with smart IT systems that can move them along the most fruitful pathway in medical decision-making (4–8).

What progress have we made in response to this excellent advice from two educational pioneers? Clearly not enough, considering the lack of improvement in our rate of potentially significant mistakes.
Our next step in diagnostic imaging education should be to disseminate an evaluation rubric that allows us to assure our patients that we are competent.

This condition is not satisfied by the current state of, to some extent, arbitrary and haphazard exposure to specific clinical scenarios in an academic clinical practice that supports the training environment during residency and fellowship training. It is also not satisfied by current methodologies of board examinations that really do not effectively evaluate whether the trainees seeking certification can effectively engage in advanced critical thinking at an expert or even competent/proficient level.
Modern expressions of competency-based training suggest that a near expert (“proficient”) level of competency should be attained by the end of the training and that true expert level of professional behavior be attained relatively soon after the completion of formal training (Figure 3B-1).

To assure that this occurs, the current methodology of both medical school and postgraduate medical residency training must progress to a fundamentally competency-based curriculum, with effective and fair evaluation rubrics that do better than currently available tools at assuring competency, proficiency, and mastery (9). We must decisively lower what is currently an unacceptably high rate of error in order to truly claim general competency in diagnostic imaging interpretation.

I am on a mission to modernize postgraduate medical education. With my team at the University of Florida, we have spent the last eight years developing a competency-based curriculum and evaluation for radiology, based on modern learning theory. In this essay series, Moving Towards Modern Medical Education and Training, I examine in detail the pathway to modern learning and educational theory, and the outcomes of applying modern learning principles in this sphere of medical education.

Part 1: Medical Education: How Did We Arrive at the Current State?

Part 2: “See one do one teach one”

Part 3: Teaching to the Test

Part 4: Competency or Passing the Boards? Every patient wants an expert.

Part 5: Errors and “Experts”


1- Waite S, Scott J, Gale B, Fuchs T, Kolla S, Reede D. Interpretive error in radiology. AJR 2017; 208:739–749.
2- Garland LH. On the scientific evaluation of diagnostic procedures. Radiology 1949; 52:309–328.
3- Berlin L. Accuracy of diagnostic procedures: has it improved over the past five decades? AJR 2007; 188:1173–1178.
4- Weed LL. Medical records, patient care, and medical education. Irish Journal of Medical Science 1964; 462:271–282.
5- Weed LL. Medical records that guide and teach. N Engl J Med 1968; 278(11):593–600. doi:10.1056/NEJM196803142781105. PMID 5637758.
6- Weed LL. Medical records that guide and teach (concluded). N Engl J Med 1968; 278(12):652–657. doi:10.1056/NEJM196803212781204. PMID 5637250.
7- Weed LL. Medical records, medical education, and patient care: the problem-oriented medical record as a basic tool. Cleveland (OH): Press of Case Western Reserve University; 1970.
8- Jacobs L. Interview with Lawrence Weed, MD — the father of the problem-oriented medical record looks ahead [editorial]. Perm J 2009 Summer;
9- Education-Creating the Modern Medical School.
