
The Human Touch in a Digital Age: Preserving Skill in AI-Driven Medicine

  • malihaybhat
  • 8 hours ago
  • 4 min read



As artificial intelligence becomes more deeply embedded in clinical practice, the medical world faces an important question: how do we maintain skills when machines can do so much for us? The article “Preserving clinical skills in the age of AI assistance” from The Lancet explores this concern, warning of the potential for de-skilling (a gradual loss of hands-on expertise) as AI begins to assist with, and sometimes replace, tasks once performed by physicians. In this post, I’ll break down what the article discusses, explain the model graph it presents, and share my own thoughts on how AI could shape the future of medicine - I hope you enjoy!



What the Article Discusses


The article focuses on how AI support, though helpful, might cause physicians to lose proficiency in routine but critical skills. For example, a study found that gastroenterologists who relied on AI to detect colon polyps in endoscopies actually performed worse when the AI system was later turned off. This suggests that while AI can boost short-term efficiency, it may erode long-term ability and confidence.


The authors call this phenomenon de-skilling, where reliance on technology leads to a decline in human expertise. Related effects include mis-skilling (adopting errors learned from AI) and never-skilling (failing to learn skills at all because AI takes over too early in training). They note that this problem extends beyond endoscopy - radiology, dermatology, and pathology are all at risk as AI grows better at analyzing scans, images, and lab data.


To address this, the article suggests several ways to preserve and strengthen clinical skills alongside AI use. One key idea is to intentionally design “AI-off” or “AI-delay” intervals during training and practice, so physicians periodically rely only on their own judgment and reasoning. This helps maintain vigilance and critical thinking. Another recommendation is to have AI interpretations appear only after a clinician’s own findings are recorded, ensuring that human assessment comes first rather than being influenced by the algorithm.


The authors also emphasize drawing clearer boundaries for AI use. While AI excels at rule-based or routine tasks, clinicians should focus more on contextual, complex, and high-stakes decision-making. Continuous education, simulation-based practice, and deliberate skill maintenance sessions are also proposed to counter de-skilling. Ultimately, the article concludes that how medical professionals and institutions integrate AI (through thoughtful design, training, and workflow choices) will determine whether AI strengthens or weakens clinical expertise over time.



The Graph Explained


Graph demonstrating skill trajectories of medical professionals in relation to AI

This graph, created by the authors of the article, illustrates the skill trajectories of medical professionals as AI is introduced into their practice.


As shown, the skill of physicians declines after constant use of AI (deskilling), as they gradually rely more on automated systems rather than their own judgment. With AI assistance, performance appears higher overall, but this can mask an underlying loss of independent ability. Once the AI is removed, the gap between assisted and unassisted performance becomes clear - doctors who depend on AI often struggle to perform tasks they once mastered on their own.


The graph also reveals a second trend: physicians who train with AI from the beginning (“never-skilling”). These individuals may never fully develop the same level of independent expertise as those who trained without technological support. Instead, they become proficient only in working with AI, not without it.


The graph ultimately highlights a deeper question about the future of medicine: Are we comfortable trading a portion of human skill for greater reliance on machines? Some may see AI as a natural evolution of medicine, while others view it as a slow erosion of the art and intuition that define clinical practice. You might look at this graph and decide that AI-enhanced practice is more beneficial overall despite the loss of human skill, or you might place greater value on maintaining skill than on becoming dependent on AI. The question is, which path are we more comfortable taking as a society?



My Take: Ethics, Responsibility, & Balance


AI’s potential in medicine is incredible. It can catch early signs of disease, improve accuracy, and save time. But in my opinion, and I would argue ethically, doctors owe it to their patients to maintain skill, judgment, and intuition - not to hand those over to an algorithm. There’s a difference between using AI to make yourself better and letting it take away from your competence.


If doctors lean too heavily on AI, they risk eroding not only their technical abilities but also their sense of accountability. Patients don’t come to a machine for care; they come to a human being who can reason, empathize, and explain. That human element is something AI can’t replace.


At the same time, we shouldn’t see AI as the enemy. It’s a tool, not a threat, as long as it’s used carefully and consciously. Doctors, students, and researchers should still do their own work, think critically, and make decisions before turning to AI for confirmation. It’s about keeping your brain sharp, not outsourcing your thinking.


Society also needs to tread carefully. As AI gets stronger, there’s a real possibility of job loss or role replacement, with machines doing the work humans trained years to do. That’s why people in medicine (and beyond) shouldn’t fear AI, but they should be cautious and proactive. All in all, we as a society can’t let technology become a substitute for skill, curiosity, and care.


So as we move further into an AI-powered era, we have to ask ourselves: Will we still value human judgment when algorithms seem faster and more precise? What happens to empathy and accountability when machines become part of every diagnosis? And perhaps most importantly, if we let technology take over our thinking, what parts of being human do we risk losing in the process?


 
 
 