Artificial intelligence is rapidly transforming clinical decision-making, healthcare professional education, and overall care delivery. Yet a recent international review highlights a crucial point: as AI tools become more integral, horizontal skills (soft skills) such as empathy, communication, critical thinking, and ethical judgment are more important than ever.
These competencies enable healthcare professionals to interpret algorithmic outputs, maintain patient trust, ensure safe care, and integrate technology responsibly into clinical practice. Understanding and fostering these skills is essential for building a healthcare workforce that remains human-centered in an AI-driven era.

The review, single-authored by Effie Simou, Associate Professor of Communication and Media in Public Health at the University of West Attica, examines the role of horizontal skills in the modern, technology-enhanced healthcare environment. Synthesizing the international literature, it analyzes how professional responsibility, teamwork, cultural sensitivity, and ethical judgment affect care quality and safety in an era in which artificial intelligence plays an ever more active role.
A “Skills Ecosystem” Rather Than a List of Competencies
The review does not treat these skills as isolated capabilities but as an interdependent ecosystem. Communication strengthens trust, empathy improves patient adherence, critical thinking acts as a counterbalance to uncritical acceptance of algorithmic recommendations, while professionalism and ethical vigilance ensure accountability.
The key argument is clear: technological progress does not make horizontal skills less important; it makes them essential. In an environment where algorithms support diagnosis, predict risks, or suggest treatment strategies, healthcare professionals are called upon to interpret, evaluate, and ultimately take responsibility for the resulting decisions.
Human Judgment Is Not Replaced. It Is Strengthened—and Tested.
Artificial Intelligence in Medicine and the Risks of Uncritical Trust in Technology
Particular emphasis is placed on the risks arising from excessive trust in AI systems. The so-called “automation bias” can lead to reduced critical vigilance, while the complexity of algorithms may hinder understanding of how their recommendations are generated.
Large language models and other AI tools can also present inaccurate or incomplete information with persuasive confidence. In a clinical setting, such misleading certainty can have serious consequences.
In this context, artificial intelligence in medicine cannot function autonomously; it requires active and critical human oversight. Critical thinking, transparency, and clear responsibility allocation become absolutely necessary. The final decision regarding patient care cannot be delegated to an algorithm. Responsibility remains human—a fundamental principle for the ethics of medical practice.
The Patient Relationship Dimension
Technology can enhance accuracy, accelerate processes, and support the management of large data volumes. However, it cannot replace the therapeutic relationship. Trust, clear communication, active listening, and recognition of patients' emotional needs are elements that cannot be encoded in an algorithm.
Especially in fields like oncology, where decisions are complex and the emotional burden high, the quality of communication can significantly influence treatment adherence, understanding of options, and overall care experience.
The review emphasizes that human presence is not supplementary to technology; it is the axis around which technology must be organized.
Toward a Balanced Human–Algorithm Coexistence
The study's conclusion is clear: healthcare professional education must integrate both technological literacy and the systematic development of horizontal skills. Knowing how an AI tool works is not enough; one must be able to critically evaluate it, explain its limitations, and integrate it responsibly into clinical practice.
In an era where algorithms are becoming increasingly "intelligent," empathy, responsibility, and critical thinking are not secondary skills; they are the core of a medical practice that seeks to remain human, reliable, and safe.
Ultimately, the question is not whether artificial intelligence will be integrated into medicine; it already has been. The real challenge is the terms under which it will coexist with the human factor. As technology advances exponentially, education must invest equally in skills that cannot be automated: the ability to listen, to question, and to take responsibility.
The challenge for artificial intelligence in medicine is not only technological but deeply pedagogical and ethical. Perhaps the greatest stake is not to create “smarter” systems but more conscious professionals. As algorithms evolve, so does the need for medicine to remain fundamentally human.
Source
The Growing Importance of Soft Skills in Medical Education in the AI Era, MDPI, 2024.
Available at: https://www.mdpi.com/2813-141X/4/4/50
Text/adaptation: Ifiyenia Anastasiou for Kapa3
