With the release of ChatGPT, generative AI exploded into public consciousness and became one of the fastest-growing apps of all time, with more than 100 million users. While there is certainly plenty of hype, there is also real substance, and dramatic shifts are already underway in many industries, including healthcare.
The applications of generative AI appear endless, from text generation and editing to writing code and debugging software. Generative AI has the potential not only to find and present information in a user-friendly way, but also to generate answers to questions that have never been asked. Creativity and innovation have always been uniquely human, but now technology, and generative AI in particular, is poised to fundamentally change the way that work is done. Take, for example, how generative AI can handle writing tasks. Within seconds you can create a long, detailed text exploring an esoteric topic, or distill complex basic-science research papers into simple summaries. These are tasks that could well have taken a human hours to research and synthesize. You can ask ChatGPT to write a story in a particular style or create a song in a specific genre. Here the technology is not just summarizing; it is creating something novel and unique.
Medicine and healthcare are not immune to the impact of this technological shift. From research and education to frontline clinical care, accelerating drug development, and the discovery and delivery of precision medicine therapeutics, the applications of generative AI are broad and already being felt. AI helped Moderna develop its COVID vaccine in record time. Some large language models (LLMs) have even passed the United States Medical Licensing Examination (USMLE), the exams required for medical licensure in the U.S. Many are touting the potential to revolutionize healthcare as we know it, and a myriad of health AI startups have burst into existence on this very premise.
At the same time, there are very real and concrete concerns. The stakes in healthcare are high, and there is no margin for error. It might be embarrassing to make an error in a research paper, or vaguely disturbing to have a chatbot insult you, but in medicine there is the very real danger of causing bodily harm. Bias in the training of LLMs can propagate disparities and discrimination. The technology today is also limited by inconsistency: a distressing tendency to produce very different outputs from exactly the same inputs. The very tool that holds the promise of democratizing healthcare could in fact do the opposite. And the potential to cause harm through hallucinations and errors cannot be overstated.
Then again, simply rejecting the use of AI in healthcare would be a huge disservice to patients and clinicians alike. The potential value is tremendous. We need to appreciate the limitations of generative AI to mitigate the risks and fully realize the benefits. While AI does not recognize cause and effect, it can certainly establish correlations and connections. The human role is to establish the how and why. And just as important as the underlying technology, humans are responsible for understanding its limitations and letting that understanding guide appropriate use.
At Suki I’ve seen firsthand how this powerful technology can profoundly change a clinician’s experience. Today physicians spend up to two hours in the medical record for each hour they spend with patients. Generative AI can take on part of the administrative burden by summarizing the clinician-patient encounter and documenting it, freeing the provider to actually connect with their patients. We already hear from users that the cognitive burden of documentation is decreased, that they are able to finish notes faster and in greater detail, and that their patients hear and feel the difference.
But just as important as recognizing the potential of generative AI is the need to be cautious and understand its limitations. One of the first key skills I press my house staff to learn is how to distinguish “sick” from “not sick” when triaging patients. AI doesn’t have common sense or intuition. While models can attempt to predict which patients are at risk of decompensation, they are no replacement for the eyes of a trained clinician. Generative AI in clinical medicine today is best thought of as augmenting the clinician, letting us practice at the top of our license. It is best applied to tasks that the clinician undertakes but that don’t require the provider’s skill level. The power is in allowing the physician to focus on the tasks that do require their skill level: performing procedures, interpreting complex diagnostics, putting the whole picture together into a coherent diagnosis and plan, or providing the human touch for patients. The value of generative AI, as it exists today, is in allowing the clinician to focus their attention and mental energy on the how and the why of caring for patients.