This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
The integration of Artificial Intelligence (AI) in the healthcare sector represents a major transformation that raises complex issues concerning the technical, organizational, and relational dimensions of the profession. This contribution analyzes the key success factors and obstacles to the implementation of AI in healthcare facilities, adopting a holistic perspective that emphasizes communicational and interprofessional dynamics. Through an analysis of recent literature, we identify promising approaches and necessary conditions for successful AI integration, while preserving the quality of caregiver-patient relationships and interactions among professionals. Our analysis reveals that successful implementation relies on a balanced combination of leadership, end-user training, support for teams in modifying their daily work, and ethical governance. The central role of communication in this transformation process is emphasized, both in change management and in the adaptation of professional practices. We propose recommendations that integrate these different dimensions to guide the development and implementation of AI projects in healthcare, focusing on preserving and enriching human relationships within healthcare organizations.
The emergence of Artificial Intelligence (AI) in healthcare is progressively transforming not only medical practices but also relational and communicational dynamics within healthcare organizations. Technological advances, particularly in the field of language models (LLMs) and generative AI, open new perspectives for improving quality of care, optimizing clinical processes, and enriching interactions between different actors in the healthcare system (Reddy, 2024; Khera et al., 2023). However, the integration of these technologies in care environments proves complex in human and organizational terms and requires a methodical approach to maximize benefits while minimizing risks. This transformation necessitates opening the "technological black boxes" to understand their actual functioning in daily practices (Gaglio & Mathieu-Fritz, 2024).
The digital transformation of the healthcare sector occurs, moreover, in a context where quality of care, deeply rooted in the caregiver-patient relationship, must remain a central concern. Despite AI's considerable potential, several recent studies highlight significant obstacles to its adoption. The lack of training for healthcare professionals, fears of potential replacement by AI, and relational and ethical issues constitute major barriers (Busch et al., 2024; Allam et al., 2024). Furthermore, the diversity of sociotechnical configurations and the need for a transdisciplinary approach to understand these issues require analyzing concrete and varied implementation cases (Gaglio & Mathieu-Fritz, 2024) to grasp the phenomena at work.
This contribution aims to adopt a holistic approach and analyze the necessary conditions for successful implementation of AI in healthcare, drawing on the latest research advances and documented experience. More specifically, this article pursues three objectives:
1. Identify and analyze the main barriers to AI adoption in the healthcare sector, paying particular attention to relational and communicational aspects
2. Examine key success factors for successful implementation, particularly in their human and organizational dimensions
3. Propose an integrated methodological framework to guide AI projects in healthcare, with an emphasis on preserving and enhancing interprofessional relationships
This analysis is based on a thorough review of recent literature and adopts a multidimensional perspective, considering both technical aspects and human and organizational dimensions. The remainder of this article is organized as follows: we first present the state of the art and conceptual framework of AI in healthcare, then analyze the barriers to its adoption. We then examine key success factors before proposing concrete recommendations for successful implementation.
2. State of the Art and Conceptual Framework
The integration of AI in healthcare is part of a broader transformation of modes of communication and interaction among the different actors in the healthcare system. This evolution requires a deep understanding of the sector's context and specific issues.
2.1 AI in Healthcare: Definition and Communicational Scope
AI in healthcare is not limited to a simple technological tool; it represents a new paradigm of communication and interaction throughout the healthcare ecosystem. AI systems transform the way healthcare professionals communicate, make decisions, and interact with patients (Reddy, 2024). This transformation affects several dimensions of communication: clinical communication between healthcare professionals during multidisciplinary meetings, pedagogical communication in healthcare professional training, interprofessional communication between different healthcare system stakeholders, and finally patient-caregiver communication in the therapeutic relationship.
2.2 Impact on Quality of Care and Professional Interactions
Quality of care revolves around fundamental attributes such as effectiveness, safety, culture of excellence, and achievement of desired results (Allen-Duck et al., 2017). The introduction of AI transforms how these attributes manifest in daily practices. For example, exchanges during multidisciplinary meetings are enriched by AI's contribution while raising questions about maintaining practitioners' sense of personal efficacy and their position as experts, as in the cases studied in radiology by Galsgaard et al. (2022).
Clinical decision support systems based on AI also modify the dynamics of exchanges between professionals. As Rahimi et al. (2024) emphasize, their success largely depends on their ability to integrate naturally into existing practices and respect traditional modes of interaction between caregivers. This integration requires particular attention to algorithm transparency and explainability, essential elements for maintaining user trust.
We recognize a dynamic analogous to that experienced by the hospital sector about fifteen years ago with the arrival of Electronic Patient Records (EPR). A new triangulation is created with a tool that becomes the mediator between the patient and the caregiver, as Garofalo (2013) wrote in her medical ethics thesis.
The Electronic Patient Record has freed up caregiver time through automated data entry, but has this time been reinvested in human interactions? Beyond optimized data collection, has the EPR improved the quality of care?
Whether for the EPR yesterday or for AI today, it appears decisive to keep humans at the center of the technology for a successful transition. Professor Bernardin's testimony from more than ten years ago regarding the EPR is instructive on this point: "It is the collaborative space where everyone, according to their level of accreditation, collects and deposits the information necessary for patient care. Beyond care, this computer architecture allows the construction of a database which, as it grows over time, will constitute a formidable tool for clinical research and teaching." Implementation therefore relies first on successful collaboration before the system can become a shared tool, an integration mode that we consider important to keep in mind for AI.
2.3 Integration Models and Professional Dynamics
The DoReMi approach, presented by Schoonderwoerd et al. (2021), proposes a structured framework for integrating AI into clinical practices. This methodology, focused on human-centered design, has identified interface patterns that facilitate the appropriation of AI systems by clinicians. The authors particularly emphasize the importance of adapting explanations to the context and expertise level of users.
The work of Nair et al. (2024) highlights the crucial importance of leadership and stakeholder engagement in the success of AI projects. Training also represents a major issue, as emphasized by Ronquillo et al. (2021) in their analysis of priorities for integrating AI into nursing care. The authors insist on the need to develop not only technical skills but also professionals' ability to interact effectively with these new tools while maintaining the quality of relationships with patients.
2.4 Theoretical Framework for Implementation
The implementation of AI in the healthcare sector must rely on a solid theoretical framework that reflects the complexity of healthcare organizations and their specificities. Recent work by Sendak et al. (2020) emphasizes the importance of an approach that considers the human body, and by extension the healthcare organization, as a "black box" whose understanding requires systematic and in-depth analysis.
This systemic perspective is particularly evident in the need for a human-centered approach (Human-Centered Design - HCD) for the development and implementation of AI systems. Schoonderwoerd et al. (2021) demonstrate that this approach not only improves system acceptability but also enhances their clinical relevance. The authors highlight that the success of AI projects depends on their ability to integrate into existing workflows while respecting established interaction modes.
The conceptual framework must also consider the evolutionary dimension of AI systems. Davis et al. (2019) have highlighted the phenomenon of "model drift" which requires continuous system updates. This technical reality has important implications for staff training and service organization. It underscores the need for a dynamic implementation approach capable of adapting to both technological and organizational changes.
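To make the notion of model drift concrete, the following minimal Python sketch flags when the distribution of a deployed model's risk scores has shifted away from its deployment-time baseline, using the population stability index (PSI). This is only an illustrative monitoring heuristic under assumed data and thresholds, not the nonparametric updating method proposed by Davis et al. (2019).

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Compare the distribution of recent model scores to a baseline.

    A PSI above ~0.2 is a conventional (assumed) signal of meaningful drift.
    """
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full real line
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    recent_frac = np.histogram(recent, edges)[0] / len(recent)
    # Clip avoids log(0) and division by zero in empty bins
    base_frac = np.clip(base_frac, 1e-6, None)
    recent_frac = np.clip(recent_frac, 1e-6, None)
    return float(np.sum((recent_frac - base_frac) * np.log(recent_frac / base_frac)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5000)  # hypothetical scores at deployment
drifted_scores = rng.beta(3, 4, size=5000)   # hypothetical scores months later
psi = population_stability_index(baseline_scores, drifted_scores)
if psi > 0.2:
    print(f"PSI={psi:.2f}: drift detected, schedule a model review")
```

Such a check could run periodically alongside clinical-relevance audits, so that retraining is triggered by evidence rather than by a fixed calendar.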
Ethical governance constitutes another essential pillar of the theoretical framework. The work of Trupia and Mathieu-Fritz (2024) highlights the importance of studying how practitioners integrate AI into their daily practices and how this transforms their medical reflexivity. This approach helps understand the different positions of professionals regarding AI and adapt implementation strategies accordingly.
Stakeholder engagement also represents a central element of the theoretical framework. As Rahimi et al. (2024) point out, successful implementation relies on the ability to create synergy between different actors: clinicians, IT experts, managers, and patients. The authors emphasize the importance of a collaborative approach that promotes exchanges between these different groups.
Over a decade ago, Prof. Bernardin and his team (2014) had already identified stakeholder engagement as a key success factor in a technical review. They also highlighted the need to mobilize sustained energy understood by all stakeholders, a concern that recurs today with AI projects: "Strong creative energy is necessary to initiate the project, but it is the maintenance energy that the organization manages to inject in a sustainable manner that will be the real driver of its success. Do we have the human resources? Do we have the political will? Is the project well understood by everyone, from decision-makers and funders to end users? The answer to these questions conditions the rest."
Skill development constitutes the last pillar of this theoretical framework. Recent studies by Ronquillo et al. (2021) on the integration of AI in nursing care demonstrate the importance of training that goes beyond purely technical aspects. The authors emphasize the need to develop a deep understanding of the relationship between collected data and AI technologies used.
Thus, we conceive the necessity of an integrated approach that takes into account both technical aspects and human and organizational dimensions. The implementation of this theoretical framework, however, faces significant obstacles that merit thorough analysis to ensure the success of AI projects in healthcare.
3. Analysis of Barriers to AI Adoption
The integration of AI in healthcare facilities encounters numerous resistances that go well beyond simple technical challenges. Recent studies highlight a set of interdependent obstacles that affect the entire healthcare ecosystem.
3.1 Barriers Related to Skills and Training
Lack of training constitutes a major obstacle to AI adoption. The work of Busch et al. (2024) reveals that more than three-quarters of medical students have had no experience with AI in their curriculum. This gap in initial training results in a limited understanding of the potential and limitations of these technologies. Allam et al. (2024) particularly emphasize this issue in their multinational study, demonstrating significant knowledge gaps regarding AI in medicine and radiology.
3.2 Psychological and Professional Resistances
Fear of replacement by AI constitutes a significant concern among healthcare professionals. Huisman et al. (2021) found that a substantial proportion of radiologists fear partial replacement of their function, an apprehension that directly influences their willingness to adopt these new technologies. Paradoxically, Jackson et al. (2024) observe that new generations of doctors perceive AI more as an ally than as a threat, suggesting a generational evolution in the perception of these technologies as well as a "professionalization" of AI promises, as observed by Mathieu-Fritz and Vayre (2024).
3.3 Organizational and Structural Challenges
Nair et al. (2024) identify several major organizational obstacles. Insufficient leadership and poor change management can significantly hinder AI adoption. The authors emphasize the critical importance of middle manager engagement in the implementation process. Furthermore, the lack of alignment between different stakeholders - clinicians, administrators, technical staff, and patients - can create tensions that slow down technology adoption.
3.4 Ethical and Regulatory Issues
Ethical questions represent a substantial barrier to AI adoption. Reddy (2023) highlights the need for precise regulation that protects both patient interests and those of healthcare professionals. Personal data protection, medical responsibility, and algorithm transparency constitute major concerns that must be addressed. Price (2018) notably raises the complex question of medical responsibility in the context of AI-assisted medicine.
Interestingly, Desmoulin (2024) analyzes a divergence between legal expectations and professional concerns regarding the explainability of medical AI. Practitioners prioritize validation and data quality rather than algorithm explainability.
3.5 Technical and Infrastructural Constraints
AI implementation requires a robust and large-scale digital infrastructure, which raises questions for healthcare facilities about their approach to the Cloud. We will simply point out here that these technical requirements come with significant costs, both in terms of initial investment and maintenance, as highlighted by Rahimi et al. (2024).
3.6 Interoperability and Integration Issues
Johnson (2023) draws an instructive parallel with the experience of electronic medical records, emphasizing that integration and interoperability difficulties have historically constituted major obstacles to technology adoption in healthcare. These challenges persist with AI, requiring particular attention to system harmonization and their integration into existing workflows.
These different barriers interact and reinforce each other, creating a complex ecosystem of resistances that must be understood in its entirety to develop effective implementation strategies. As a counterpoint, it is therefore essential to examine the key success factors for overcoming these obstacles.
4. Key Success Factors for AI Implementation
The complexity of identified barriers calls for a structured and multidimensional approach to ensure successful AI implementation in healthcare. Analysis of recent literature identifies three major determining factors that mutually reinforce each other: governance, training, and continuous evaluation.
4.1 Leadership and Integrated Governance
Management commitment and leadership quality constitute a fundamental pillar for the success of AI projects. Tarsuslu et al. (2025) demonstrate that healthcare professionals with digital leadership skills develop a more positive attitude toward AI and are better able to understand its benefits. This leadership capacity translates into better management of anxieties and resistances within teams. The establishment of robust ethical governance complements this dimension. Reddy (2024) proposes a framework that integrates technical, ethical, and organizational aspects, emphasizing the importance of a transparent approach to decision-making. This governance must also rely on a solid technical infrastructure, responsive support, and anticipation of maintenance needs and system updates.

Establishing a culture of excellence, as defined by Allen-Duck et al. (2017), requires a clear and shared vision of implementation objectives. Leadership quality also manifests in the ability to mobilize necessary resources, both financial and human, and to ensure optimal allocation of these resources to support change. Khera et al. (2023) emphasize that this governance must particularly ensure equity in access to and use of technologies, a crucial aspect for maintaining the trust of teams and patients.
4.2 Training and Development of Collective Skills
Skill development represents a key factor in organizational transformation. Issa et al. (2024) emphasize the importance of structured integration of AI in healthcare profession education programs. This training must go beyond the simple acquisition of technical skills. Ronquillo et al. (2021) insist on the need to develop a deep understanding of the relationship between collected data and the technologies used. This approach allows healthcare professionals to maintain their decision-making autonomy while taking advantage of AI's potential. The collective dimension of learning proves crucial. Nair et al. (2024) highlight the importance of involving all stakeholders in a shared learning approach, transcending traditional service boundaries.
As during the implementation of Electronic Patient Records, continuous training remains essential. Busch et al. (2024) note that training must also address ethical and legal aspects of AI use, allowing professionals to develop a critical view of these technologies. Skill development must ultimately be part of a broader digital transformation strategy, as suggested by Tuncer and Tuncer (2024) in their analysis of nurses' attitudes toward AI.
4.3 Human-Centered Design and Continuous Evaluation
The human-centered approach constitutes the third essential pillar and must permeate the entire implementation process, accompanied by continuous impact assessment. Sendak et al. (2020) emphasize that this approach must integrate the ethical and social dimensions of healthcare. Human-centered design also requires particular attention to team dynamics and existing communication modes. Price (2018) raises the crucial question of medical responsibility in this decision-support context, an aspect that must be integrated from the design phase. Continuous evaluation must focus not only on technical aspects but also on the organizational and human impact of AI systems, as advocated by Bibbins-Domingo (2022) in her commitment to research centered on clinical outcomes and equity.
5. Recommendations for Successful Implementation
The analysis of barriers and key success factors allows formulating concrete recommendations to guide AI implementation in healthcare facilities. These recommendations revolve around three main axes that reflect an integrated approach to change.
5.1 Establish a Robust Methodological Framework
Implementing an AI project requires a structured methodology that ensures consideration of all dimensions of change. Rahimi et al. (2024) advocate a progressive approach, centered on pilot projects, allowing experimentation and refinement of solutions before larger-scale deployment. This approach allows not only testing the technical viability of solutions but also evaluating their acceptability to end users and their impact on professional practices.
Establishing a clear governance framework constitutes a fundamental element of this methodology. Reddy (2024) insists on the need to precisely define, from the beginning of the project, the roles and responsibilities of each actor, decision-making processes, and control mechanisms. This governance must integrate not only technical and organizational aspects but also ethical and regulatory dimensions that frame AI use in healthcare.
Adaptation to the specificities of each medical discipline, as emphasized by Trupia and Mathieu-Fritz (2024), represents another crucial aspect of the methodological framework. The authors demonstrate that successful implementation largely depends on the ability to take into account the particularities of each specialty, both in its technical aspects and in its organizational modes and professional practices.
5.2 Develop a Stakeholder Support Strategy
Supporting different stakeholders represents a crucial element of success. Ronquillo et al. (2021) recommend implementing a comprehensive training program that covers not only technical aspects but also practical and ethical implications of AI use. These trainings must be adapted to different user profiles and their specific needs.
Change management must pay particular attention to healthcare professionals' concerns. Tarsuslu et al. (2025) suggest that developing digital leadership can help reduce these apprehensions and foster a more positive attitude toward AI. Gaglio and Mathieu-Fritz (2024) highlight the importance of carefully studying the sociotechnical configurations specific to each context of use. This approach allows identifying appropriate action levers and implementing support mechanisms adapted to field realities.
5.3 Implement a System of Continuous Evaluation and Improvement
Regular impact assessment and system adaptation constitute essential elements to ensure the sustainability and effectiveness of AI solutions. Davis et al. (2019) propose a structured approach for monitoring and updating clinical models, taking into account not only their technical performance but also their clinical relevance and usability. This approach allows quickly identifying potential drifts and making necessary corrections.
Measuring impact on clinical outcomes, patient-centered care, and equity, as advocated by Khera et al. (2023), requires defining clear and measurable indicators. These indicators must cover different dimensions: quality of care, patient satisfaction, process efficiency, but also accessibility and equity in technology use. Monitoring these indicators helps guide system developments and ensure alignment with the project's initial objectives.
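As an illustration of how such multidimensional indicators might be tracked, the following Python sketch defines a minimal indicator structure with targets spanning quality of care, patient-centeredness, process efficiency, and equity. All indicator names and target values are hypothetical assumptions for illustration, not figures drawn from Khera et al. (2023).

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    dimension: str          # e.g. "quality of care", "equity of access"
    target: float
    higher_is_better: bool = True
    history: list = field(default_factory=list)

    def record(self, value: float) -> None:
        """Append a new measurement to the indicator's history."""
        self.history.append(value)

    def on_target(self) -> bool:
        """True if the latest measurement meets the target."""
        if not self.history:
            return False
        latest = self.history[-1]
        return latest >= self.target if self.higher_is_better else latest <= self.target

# Illustrative indicators spanning the dimensions cited in the text
dashboard = [
    Indicator("diagnostic concordance rate", "quality of care", target=0.95),
    Indicator("patient satisfaction score", "patient-centered care", target=4.0),
    Indicator("report turnaround (hours)", "process efficiency", target=24, higher_is_better=False),
    Indicator("usage gap across sites (%)", "equity of access", target=10, higher_is_better=False),
]

dashboard[0].record(0.97)
dashboard[2].record(30)
alerts = [i.name for i in dashboard if i.history and not i.on_target()]
print(alerts)  # names of measured indicators currently off target
```

Reviewing such an alert list at regular governance meetings is one simple way to keep system evolution aligned with the project's initial objectives.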
Implementing these recommendations requires strong commitment from all stakeholders and a clear vision of objectives to achieve. It must be part of a continuous improvement approach that allows adapting solutions to the evolving needs of healthcare facilities and their users.
6. Conclusion
The integration of AI in the healthcare sector represents a major transformation that redefines communication modes and interaction between different actors in the healthcare system.
Our analysis demonstrates that the success of this transformation relies on a balanced approach that places communicational dynamics at the heart of the change process. Recent literature emphasizes that AI should not be perceived as a simple technological tool but as a catalyst for new forms of professional interaction, requiring particular attention to preserving healthcare professionals' role and expertise. This evolution of communicational practices is also observed in the caregiver-patient relationship, where AI can enrich rather than replace human interaction.
The identified barriers, whether related to skills, psychological resistances, or organizational issues, reveal the decisive importance of communication processes in change management. The identified key success factors emphasize the driving role that governance can play in facilitating communication between different stakeholders.
This approach also allows maintaining a balance between technological innovation and preservation of fundamental care values. In this regard, the proposed recommendations offer a structuring framework to guide this transformation, emphasizing the importance of information and communication flows at all levels of the organization. Furthermore, continuous impact assessment must particularly take into account the quality of professional exchanges and their evolution over time.
Ultimately, successful AI implementation in healthcare relies on our ability to create environments where technology and human communication mutually reinforce each other. This synergy, as demonstrated by the analyzed works, requires constant attention to relational dynamics and an approach that places humans at the heart of the digital transformation process.
REFERENCES
Allam, A. H., N. K. Eltewacy, Y. J. Alabdallat, T. A. Owais, S. Salman, and M. A. Ebada. 2024. “Knowledge, attitude, and perception of Arab medical students towards artificial intelligence in medicine and radiology: A multi-national cross-sectional study”. European Radiology 34 (6): 4393-4406.
Allen-Duck, A., J. C. Robinson, and M. W. Stewart. 2017. “Healthcare quality: A concept analysis”. Nursing Forum 52 (4): 377-386.
Bernardin, G., M. Garofalo, H. Rottier, and C. Carroger. 2014. “Informatisation du dossier patient dans les secteurs de soins critiques du CHU de Nice”. Revue Techniques hospitalières 746: 19-23.
Bibbins-Domingo, K. 2022. “The urgency of now and the responsibility to do more: my commitment for JAMA and the JAMA Network”. JAMA 328 (1): 21-22.
Busch, M. O., L. C. Adams, M. De Witte, N. Gorelik, J. S. Izquierdo-Condoy, A. B. Jackson, and K. K. Bressem. 2024. “Global cross-sectional student survey on AI in medical, dental, and veterinary education and practice at 192 faculties”. BMC Medical Education 24 (1): 1-20.
Davis, S. E., R. A. Greevy, C. Fonnesbeck, T. A. Lasko, C. G. Walsh, and M. E. Matheny. 2019. “A nonparametric updating method to correct clinical prediction model drift”. Journal of the American Medical Informatics Association 26 (12): 1448-1457.
Echajari, L., H. Jeanningros, and M. Lewkowicz. 2024. “Le codage de l'information médicale à l'épreuve de l'IA”. Réseaux 248: 159-189.
Fiant, O. 2024. “Au-delà de l'explicabilité”. Réseaux 248: 80-107.
Gaglio, G., and A. Mathieu-Fritz. 2024. “IA, médecine et sciences sociales”. Réseaux 248: 17-40.
Galsgaard, A., T. Doorschodt, A.-L. Holten, F. C. Müller, M. P. Boesen, and M. Maas. 2022. “Artificial intelligence and multidisciplinary team meetings; a communication challenge for radiologists' sense of agency and position as spider in a web?”. European Journal of Radiology 155: 110231.
Garofalo, M. 2013. L'informatisation du dossier de soins en réanimation : déshumanisation ou réhumanisation du soin ? Mémoire de DIU, Université de Nice-Sophia-Antipolis.
Huisman, M., S. Helleman, M. D. Bruijne, J. F. Veenland, and B. Van Ginneken. 2021. “An international survey on AI in radiology in 1,041 radiologists and radiology residents part 1: fear of replacement, knowledge, and attitude”. European Radiology 31: 7058-7066.
Issa, W. B., A. M. Sayed, M. Alotaibi, M. A. Rashdan, H. S. Alsharqi, H. E. Hasan, and W. Khan. 2024. “Shaping the future: perspectives on the integration of Artificial Intelligence in health profession education: a multi-country survey”. BMC Medical Education 24 (1): 1-16.
Khera, R., A. Butte, M. Berkwits, Y. Hswen, A. Flanagin, M. Park, and K. Bibbins-Domingo. 2023. “AI in Medicine: JAMA's Focus on Clinical Outcomes, Patient-Centered Care, Quality, and Equity”. JAMA 330 (9): 819-820.
Nair, M., P. Svedberg, I. Larsson, and J. M. Nygren. 2024. “A comprehensive overview of barriers and strategies for AI implementation in healthcare: Mixed-method design”. PLoS ONE 19 (8): e0305949.
Price, W. N. 2018. “Medical malpractice and black-box medicine”. In Big Data, Health Law, and Bioethics, edited by I. G. Cohen, H. F. Lynch, E. Vayena, and U. Gasser, 194-213. Cambridge University Press.
Rahimi, A. K., O. Pienaar, M. Ghadimi, O. J. Canfell, J. D. Pole, S. Shrapnel, and C. Sullivan. 2024. “Implementing AI in Hospitals to Achieve a Learning Health System: Systematic Review of Current Enablers and Barriers”. Journal of Medical Internet Research 26: e49655.
Reddy, S. 2024. “Generative AI in healthcare: an implementation science informed translational path on application, integration and governance”. Implementation Science 19: 27.
Ronquillo, C. E., L. M. Peltonen, L. Pruinelli, C. H. Chu, A. Beduschi, K. Cato, and M. Topaz. 2021. “Artificial intelligence in nursing: Priorities and opportunities from an international invitational think-tank of the Nursing and Artificial Intelligence Leadership Collaborative”. Journal of Advanced Nursing 77 (9): 3707-3717.
Schoonderwoerd, T. A. J., W. Jorritsma, M. A. Neerincx, and K. van den Bosch. 2021. “Human-centered design for explainable artificial intelligence in clinical decision support systems: A design pattern approach”. International Journal of Human-Computer Studies 154: 102684.
Sendak, M., M. C. Elish, M. Gao, J. Futoma, W. Ratliff, and A. Bedoya. 2020. “"The human body is a black box": supporting clinical decision-making with deep learning”. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 99-109.
Tarsuslu, S., F. O. Agaoglu, and M. Bas. 2025. “Can digital leadership transform AI anxiety and attitude in nurses?”. Journal of Nursing Scholarship 57: 28-38.
Tuncer, M., and M. Tuncer. 2024. “Investigation of nurses' general attitudes toward artificial intelligence and their perceptions of ChatGPT usage and influencing factors”. Perspectives in Psychiatric Care.