Opinion: AI in Medical Training
- Alyssa Kyle
- Sep 16
- 3 min read
Updated: Sep 21
Without a doubt, artificial intelligence is the future. In every industry, people are scrambling to get ahead of this massive wave. In medicine, AI has the potential to enhance learning and reduce burnout. The goal isn’t to replace clinical thinking, but to use AI as a tool that supports and expands it. As we train the doctors of tomorrow, how can we stay current and utilize this resource without eroding the skills needed to safely serve patients?

As medicine continues to evolve, so too must the way we train future physicians. Artificial intelligence is no longer a distant innovation. It is already here, making an impact on the world around us and slowly being woven into the systems that shape our clinical practice. As a resident physician, I find myself asking a critical question: if AI is a tool that I’ll use throughout my career, shouldn’t I be training alongside it right now?
From my perspective, the answer leans toward yes, with important caveats. I believe medical education has a responsibility to integrate AI into our training, but we must approach this integration thoughtfully, deliberately, and with boundaries that preserve the integrity of our clinical development.
Right now, some of the most practical applications of AI in clinical settings are in documentation. This includes drafting encounter notes, pulling forward lab results, and writing patient instructions. You can use all of the dot phrases you’d like, but the efficiency of AI is superior. That efficiency can reduce the administrative burden and give us back the time to focus on what we enjoy most: clinical reasoning, human connection, and patient care, all of which are the reasons we went into medicine in the first place. This raises an important question: could the integration of AI help reduce burnout among physicians?
Beyond documentation, tools like Open Evidence are showing promise. They can help generate differential diagnoses, synthesize research, suggest diagnostic pathways, and even prompt broader clinical thinking. Used wisely, I see these tools as intellectual expanders, especially for those of us still in training. The truth is, as trainees, our thinking is often shaped by the scope of our institution, mentors, and peers. But what if AI could help us think more broadly? What if its ability to suggest "zebra" diagnoses actually sparks deeper, more thought-provoking conversation among care teams and elevates our training experience? And with the time saved on documentation, we could redirect our efforts toward more impactful discussions and clinical exercises.
Still, I recognize an important tension. Where do we draw the line? If we rely too heavily on AI, we risk allowing it to think for us rather than with us. That is a danger I take very seriously. Just as calculators didn’t eliminate the need to understand math, AI should never replace the foundational process of learning how to think, reason, and make decisions as a physician.
This is why I believe we need protocols. Maybe that means requiring trainees to submit their differential diagnoses before consulting AI-generated suggestions. Maybe it means ensuring AI-generated notes are revised through a protocolized process before being accepted into the chart. Or maybe it’s time for accrediting bodies like the ACGME to step in and develop national standards for how AI is used in training. Whatever the approach, the end goal is clear: AI should be used as a companion, not a crutch.
Medicine will always require adaptability. Those of us training now will soon be practicing in a world where AI is integral. Learning to work alongside these tools during training will make us better prepared for what lies ahead. But just as important is protecting the discipline of independent thought and clinical reasoning.
I believe this moment presents a unique opportunity. We have the chance to become a generation of physicians who are not only fluent in the language of medicine, but also fluent in the innovative tools that will define its future. More importantly, we have a role to play in shaping how AI is integrated into practice. We can help lay the groundwork for its safe, effective, and ethical use, enhancing our profession rather than complicating it.
AI holds the potential to improve the healthcare experience for both providers and patients. From where I stand as a trainee, this feels like a pivotal moment in our profession’s history. How we choose to approach the future of AI in medicine won’t just define my training, it will define the trajectory of the field for years to come.