Something that’s come up in a lot of discussions I’ve had over the last year or two has been about whether AI will replace doctors. So I’ve spent the last few weeks thinking about this and trying to get my thoughts down in writing.
I’m going to share the resulting reflection in two parts (it’s long-ish), this week and next, before going back to the “Should I be a content creator? If so, how?” series that I was writing before.
A big thanks to all those who have discussed this topic with me, and reviewed versions of this post.
As usual, helpful links at the bottom of the email.
Some thoughts on whether AI will replace doctors
A couple of weeks ago, a group of researchers and I published an article in the British Medical Journal reviewing research papers that reported AI algorithms and compared their performance with that of clinicians. Our main findings were that the methodology of these studies leaves them prone to bias, and that they often draw overstated conclusions.
I think this is an interesting springboard to consider the wider subject of what role AI is likely to play in medicine, and to consider the popular and slightly controversial question of “Will AI replace doctors?”
Here, I’m going to make the argument that: (i) AI will inevitably be able to automate some of the processes currently performed by doctors; (ii) this automation will start in very narrow domains and gradually expand; (iii) some areas will be more challenging to automate than others, and some will be out of reach entirely; and (iv) AI won’t replace doctors, because we won’t let it.
Note: I wouldn’t consider myself an authority on this subject, and I’m open to my opinion changing over time. This article is a reflection of my current view (as of May 2020).
Inevitable replacement of certain tasks
Within a typical day, a doctor wears many hats. Tasks can vary broadly, from eliciting information from patients, both verbally and through examination, to interpreting test results; from making diagnoses to formulating treatment plans; and from communicating with other members of the medical team to breaking bad news to relatives.
If we break down the tasks that doctors perform, there are inevitably going to be some that are simpler and more algorithmic.
Some tasks could be automated even without the need for machine learning algorithms. For example, dosing certain drugs (like warfarin) or correcting salt levels in the blood can typically be done in a rule-based way. If, say, the potassium level is below 3.5 (or in some cases, 4) and there aren’t extenuating circumstances (such as a relevant drug or past medical condition), then the potassium will be replaced in a protocolised way. There are many decisions like these that doctors and nurses make by following such a schema.
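To make the point concrete, a rule-based decision of this kind can be sketched in a few lines of code. This is a hypothetical illustration only, not a clinical protocol: the function name, thresholds, and extenuating-circumstance flags are invented placeholders for the sort of schema described above.

```python
# Hypothetical sketch of a rule-based potassium-replacement decision.
# Thresholds and actions are illustrative only, not clinical guidance.

def potassium_replacement(potassium_mmol_l,
                          on_interacting_drug=False,
                          has_renal_impairment=False):
    """Return a suggested action for a potassium result."""
    # Extenuating circumstances: defer to a clinician rather than the rule.
    if on_interacting_drug or has_renal_impairment:
        return "refer to clinician"
    # The simple rule itself: below the threshold, replace per protocol.
    if potassium_mmol_l < 3.5:
        return "replace per protocol"
    return "no replacement needed"

print(potassium_replacement(3.2))                            # replace per protocol
print(potassium_replacement(3.2, on_interacting_drug=True))  # refer to clinician
print(potassium_replacement(4.1))                            # no replacement needed
```

The point of the sketch is that no learning is involved at all: the entire decision is a lookup against fixed rules, which is exactly why such tasks are natural first candidates for automation.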
So why haven’t these tasks been automated already? At present, I believe the main reasons are pragmatic. Firstly, many hospitals don’t have the required information available in the same place digitally, and those that do have only done so relatively recently. Secondly, the responsibility must lie with someone, and having a doctor or nurse as the final sign-off on, let’s say, a prescription of potassium provides a clear line of responsibility. And finally, I believe there’s a cultural element; automating these tasks would mean ceding a degree of control, and there are likely fears of the ‘slippery slope’.
I believe it is only a matter of time before these tasks become automated or near-automated (perhaps requiring a doctor/nurse sign-off), as we shift towards an increasing amount of the workflow being performed in an integrated, digital way.
Then there are more complex tasks, which may involve greater integration of information and more complex decision-making, but ultimately still involve a fair amount of algorithmic pattern recognition. Think of spotting a diagnosis on a pathology slide or an X-ray. There are nuances to such tasks, but given ML’s strength in pattern recognition, many such specific tasks are likely to be within the technical reach of ML algorithms.
The question, then, is: what is the limit of what AI can achieve? This is yet to be known, and will ultimately be borne out with time and scientific investigation. I’m confident that deep learning-based AI will not be able to do all the tasks that doctors currently can. The follow-on question is: what will we allow AI to do? I will expand on both questions in this article.
Expanding from narrow domains
The majority of health uses of AI currently being investigated are ‘narrow’ tasks; that is, they are looking to perform a singular, well-contained task. Is this image of a skin lesion benign or cancerous? Does this retinal scan show mild, moderate or severe retinopathy?
There is emerging evidence (although still early-stage) that AI can perform at a comparable level to human doctors in some of these tasks, and do so much faster (this was the main line of investigation of our study).
However, moving beyond these ‘narrow’ tasks becomes much more complex quite quickly. Based on this patient’s non-verbal cues, speech content and medication history, is he having a relapse of his psychosis? Given this uncommon combination of two diseases, what is the appropriate balance of treatment to give? Creating pipelines that take in these different sources of information, then appropriately weigh and integrate them, represents a real challenge. Creating an AI to solve these tasks may not be impossible, but it will take more time. Approaches that combine different data types are being attempted, but so far haven’t typically proven very successful. For example, IBM’s Watson for Oncology aimed to integrate all available cancer patient data and disease information to guide diagnosis and therapy decisions, but was stopped early for “not meeting its goals”.
TO BE CONTINUED IN NEXT WEEK’S EMAIL…
On the subject of learning medicine, I shared my thoughts on (1) learning on the wards, (2) preparing for OSCEs, (3) preparing for written exams, (4) being productive in medical school, (5) work-life balance and (6) taking notes in lectures
I absolutely loved being interviewed for this podcast, and I think Mustafa has done a fantastic job with the final product. We talked about key definitions (AI vs machine learning vs deep learning), the current state of AI in healthcare research, whether AI will replace doctors, how to read medical ML papers and how to get involved.
After reading a lot of articles about AI in healthcare, this was one of the better ones - worth a read if you’re interested in the contents of this week’s newsletter.
For those looking for ways to help out with the ongoing COVID-19 pandemic, the 80,000 Hours website has collated an extensive list of companies and research groups looking for support, and sources of potential funding.
There’s a stack of other useful information on the website on the subject of careers and maximising positive impact, so in my opinion it’s well worth a peruse!
That’s everything for this week - stay happy and healthy!