Artificial Intelligence and Medical Practice: Between Technological Promise and Cognitive Responsibility

Diego Javier Martínez

Universitas Médica, vol. 66, 2025

Pontificia Universidad Javeriana

Research Center in Rheumatology and Medical Specialties (CIREEM) and Autoimmunity Unit of Colsanitas Medical Centers, Torre Especialistas. Bogotá, Colombia


Artificial intelligence (AI) has been rapidly incorporated into medical practice, not as a peripheral innovation but as an element that is reshaping clinical, educational, and organizational processes. Like earlier transformations—such as the computerization of the medical record or the expansion of evidence-based medicine—its impact extends beyond the technical domain and compels a reexamination of how clinical judgment is constructed and how professional responsibility is exercised (1,2).

In everyday practice, AI systems already participate in core tasks: information synthesis, support for differential diagnosis, therapeutic planning, clinical documentation, and medical communication. The literature shows that, in specific domains, these systems can achieve levels of performance comparable to those of experienced professionals, particularly in tasks involving pattern recognition, information retrieval, and probabilistic reasoning (3,4). However, this capability coexists with structural limitations: opacity in inference processes, incorporation of preexisting biases, and the generation of persuasive responses even when they are incomplete or incorrect, especially in settings of clinical uncertainty (2,5).

This contrast presents a central tension for contemporary medicine. The fluidity and speed of AI may induce a progressive delegation of complex cognitive functions, with the risk of eroding fundamental skills in medical practice. In both educational and healthcare settings, phenomena such as excessive cognitive outsourcing, the progressive loss of previously acquired competencies, the failure to develop essential clinical skills, and the reinforcement of erroneous reasoning by automated systems have been described (2). In this regard, AI does not correct clinical reasoning; it amplifies it, for better or worse, depending on the quality of the human judgment that guides it.

Therefore, it is more appropriate to position AI within a model of augmented intelligence, in which technology supports—but does not replace—medical reasoning. In an environment where information is immediate and abundant, the distinctive value of the professional no longer lies in the ability to generate answers but in the appropriate formulation of problems, the identification of implicit assumptions, the recognition of uncertainty, and the critical evaluation of available evidence. Clinical judgment, understood as a deliberate synthesis of knowledge, experience, and context, remains an indispensable human act (1,6).

Moreover, there is a relevant generational dimension. Physicians trained before the full digitization of medicine developed their practice in contexts where information was limited, doubt was explicit, and reasoning had to be constructed step by step. Having moved from the analog era, through the expansion of digital knowledge, to the advent of AI, these professionals offer an integrative perspective for understanding both the transformative potential and the risks of these technologies.

The central challenge, therefore, is not technological but educational and ethical. The responsible adoption of AI requires literacy in its basic principles, understanding of its limitations, and an explicit reaffirmation of critical thinking as the core of medical practice. Recent models of clinical supervision propose structured frameworks that promote independent verification, metacognitive reflection, and adaptation of the degree of confidence according to clinical risk, reinforcing the idea that AI should inform reasoning but not replace it (2).

In regions such as Latin America, characterized by high healthcare burdens, fragmented health systems, and persistent structural limitations, AI represents a real opportunity to optimize processes and reduce historical access gaps. However, its implementation without a critical and contextualized framework could amplify existing inequities or create new forms of technological dependence. Regional evidence highlights that the sustainable development of AI in healthcare depends less on isolated technical advancements and more on the creation of integrated ecosystems that connect training, research, clinical practice, and ethical regulation, tailored to local realities (7).

Ultimately, the question is not whether AI will transform medicine—that process is already underway—but what type of medical practice will emerge from that transformation. Medicine supported by intelligent systems but guided by professionals capable of thinking critically, assuming responsibility, and preserving the centrality of the patient has the potential to be safer, more equitable, and more humane. The alternative, a medicine dazzled by technological fluidity and devoid of deliberation, risks losing what has historically defined it: clinical judgment in service of care.

References

1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. https://doi.org/10.1038/s41591-018-0300-7

2. Abdulnour R-EE, Gin B, Boscardin CK. Educational strategies for clinical supervision of artificial intelligence use. N Engl J Med. 2025;393(8):786-97. https://doi.org/10.1056/NEJMra2503232

3. Rajpurkar P, Lungren MP. The current and future state of AI interpretation of medical images. N Engl J Med. 2023;388(21):1981-90. https://doi.org/10.1056/NEJMra2301725

4. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31-38. https://doi.org/10.1038/s41591-021-01614-0

5. Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence deployment in healthcare practices. Artif Intell Med. 2024;151:102861. https://doi.org/10.1016/j.artmed.2024.102861

6. Parikh RB, Helmchen LA. Paying for artificial intelligence in medicine. NPJ Digit Med. 2022;5(1):63. https://doi.org/10.1038/s41746-022-00609-6

7. Yepes-Barreto I, Martínez L, Girala M, Sánchez-Santos R, Graz F, Restrepo JC, et al. Barreras y oportunidades para la investigación en inteligencia artificial aplicada a la salud en América Latina. Hepatología. 2026;7(1):32-43. https://doi.org/10.59093/27112330.164
