Will AI Docs Be MDs, or DOs?

Kim Bellard
Tincture


There is increasing acceptance that artificial intelligence (AI) is going to play a major role in healthcare and in the practice of medicine.

Some see AI as a way to augment human doctors. Some see it as a way to help patients triage the need to see a human doctor. Others see it replacing entire specialties (pathology is often cited). A few even think that, eventually, AI “doctors” could replace humans entirely.

Whatever is going to happen, we need to be thinking about how AI makes its decisions — and what that might say about our existing system.

AI decision-making has two separate but very much overlapping problems: the “black box” problem and unintentional biases.

The black box problem is that, as AI gets smarter and smarter, we’ll lose track of what it is doing. “Machine learning” refers to the ability of AI to, essentially, learn on its own. It looks for patterns we not only haven’t seen, but might not even be able to see. It may reach conclusions using a logic that is beyond us.

My favorite example of this, as I have previously written about, is Alphabet’s DeepMind program AlphaGo Zero. It learned how to play the fiendishly complex game Go without being programmed how to play, or even learning from human games. It not only mastered the game — in three days — but also came up with strategies that left human Go experts agog.

Think about the day when the AI’s strategies are not about a game but about treatments for our health.

If we were convinced AI was always making purely objective decisions, we might grow to accept its decisions without question. After all, most of us don’t know how our televisions or smartphones work either; as long as they do what we expect them to, we don’t really care how.

The trouble is that we’re becoming aware that all-too-human biases can be built into AI.

Programming remains a largely male profession, and those men are usually young, well educated, and from comfortable backgrounds. It is not a good representation of the world. Their world views and experiences influence the way they program, and the data that they give to their programs to help them learn. In most cases, they’re not intentionally biasing their creations, but unintentional biases can have the same result.

For example, a recent study from MIT and Stanford on facial recognition AI programs found that they worked very well for white male faces; otherwise, not so much. The researchers discovered that the dataset used by one such program was 77% male and 83% white. The programs came from major technology companies and certainly weren’t intended to be biased, but the AI knew best what it knew most.

This is not the only such example. Kriti Sharma points out the gender stereotyping in making the default voices of digital assistants like Siri and Alexa female, while giving male names to problem-solving AIs such as IBM’s Watson and Salesforce’s Einstein. ProPublica found that AI used by judges during sentencing to predict the likelihood of future crimes greatly overestimated that likelihood for African-Americans.

The key to both problems may be to increase transparency about what the “black box” is doing. The Next Web reported that researchers recently “taught” AI to justify its reasoning and point to supporting evidence. Whether we’ll have the time, interest, or expertise to examine these justifications remains to be seen.

Vijay Pande, a general partner at Andreessen Horowitz, isn’t so worried, and he specifically points to health care as a reason why:

A.I. is no less transparent than the way in which doctors have always worked — and in many cases it represents an improvement, augmenting what hospitals can do for patients and the entire health care system. After all, the black box in A.I. isn’t a new problem due to new tech: Human intelligence itself is — and always has been — a black box.

[Dilbert cartoon by Scott Adams]

We don’t really know how physicians make their decisions now, which may account for why physicians’ practice patterns vary so widely. Many supposedly rational decisions, in healthcare and elsewhere, are based on a variety of factors, most of which we are not consciously aware of and some of which are more instinctive or emotional than intellectual.

Mr. Pande views the so-called black box as a feature, not a bug, of AI, because at least we have a chance of understanding it, unlike the human mind.

All of which led me to a thought experiment: as we program healthcare AI, would we want it to be based on allopathic (M.D.) or osteopathic (D.O.) practices?

These are the two major schools of modern medicine. Their training and licensure have become much more similar over time, but the two remain distinct branches, with separate medical schools and graduate medical education. Most hospitals are “integrated,” but there remain predominantly D.O. hospitals.

If you asked either type of physician whether the healthcare AI of the future should be based solely on their own branch, I suspect most would find that acceptable, but not if it were based solely on the other’s. If you asked whether it should be based on both, using all available information, I suspect that would be even more acceptable.

Therein lies the problem: if we don’t want our AIs to be either “M.D.” or “D.O.,” but rather a combination of the best of both, then why don’t we want the same of our human doctors? Why do we still have both?

IBM’s CTO Rob High spoke to TechCrunch about the AI work they’ve done with Memorial Sloan Kettering Cancer Center, and admitted the resulting AI carries their biases and philosophy. He said that “any system that is going to be used outside of Sloan Kettering needs to carry that same philosophy forward.”

Whether it is Sloan Kettering, the Mayo Clinic, or the Cleveland Clinic — or M.D. versus D.O. — we should want AI based on as much data as possible. We don’t yet really know what is important, and we should not repeat with AI the same mistakes with data silos that we made before it.

I don’t want my healthcare AI to be either an M.D. or a D.O. I don’t want it to be a physician at all. I want it to be something new. If we want the healthcare system of the future to be an improvement over what we have now, we need to stop thinking within our current paradigms.

Follow Kim on Medium or on Twitter (@kimbbellard)!
