Recently, we hosted an AI discussion evening at our office with experts Jay Shaw and Christopher Ford, bringing together colleagues and clients from across the healthcare ecosystem.
There were no slide decks, product demos, or vendor pitches: just a room full of people putting hopes, fears, questions, and provocations about AI in healthcare on the table.
Here are a few reflections from the evening:
The healthcare industry is currently facing an unprecedented saturation of AI offerings. According to the Deloitte 2025 Health Care Outlook, the volume of AI vendor pitches to health systems has tripled in the past year. Every software provider, from EHR vendors to ERP platforms, claims to have the “magic.” Yet adoption remains relatively shallow: while generative AI is a top-three strategic priority for health systems, most organizations remain firmly in the experimental phase, with very few solutions scaled into production environments.
The real opportunity may not lie in deploying isolated AI tools, but in asking a deeper question: how can AI help us fundamentally redesign how care is delivered, not simply make today’s system run a little faster or more efficiently?
One of the most resonant themes of the evening was the idea that AI amplifies the systems it enters. If processes are broken, AI may simply produce faster, more scalable versions of the same dysfunction. If inequities exist in the system, AI risks reinforcing or widening those disparities. This also raises an important leadership challenge: how do we move beyond the hype and focus first on the problems we are trying to solve, and why, before determining where AI can meaningfully contribute?
The conversation inevitably returned to questions of governance and accountability. AI does not operate in isolation; it interacts with existing processes, data structures, and organizational cultures. If those systems already contain inefficiencies, biases, or inequities, AI can amplify them faster and at greater scale, as noted above. This raises uncomfortable but necessary questions for healthcare leaders. If AI amplifies systems, what exactly are we scaling? What guardrails need to be in place before these tools are embedded into clinical and operational workflows? And who ultimately holds accountability when decisions are influenced by AI systems?
What became clear through the discussion is that addressing these challenges will require intentional governance from the start: establishing clear oversight structures, continuously evaluating the impact of AI tools in practice, and ensuring that equity and patient outcomes remain central to how these technologies are designed and deployed.
Across many healthcare organizations, leaders are still grappling with how to define and measure return on investment for AI. The benefits often appear indirect (time saved, improved decision support, reduced cognitive burden), and translating them into clear financial or operational metrics remains difficult. It also raises a deeper question: where is that saved time actually being reinvested in practice? Are we truly improving patient care and patient experience, or simply shifting capacity elsewhere? Ultimately, the conversation may need to move beyond efficiency alone and toward the real outcomes we hope AI will enable in healthcare.
Several discussions focused on where health data actually lives and how it is used by AI systems. Questions quickly surfaced about what assurances organizations truly have around data residency and jurisdiction, particularly when working with vendors that operate across multiple environments. Participants also raised concerns about whether models are learning from institutional data and what level of visibility organizations have into how that data may be used over time. Underneath these concerns is a broader issue: how organizations maintain oversight and control in an increasingly complex AI ecosystem.
One of the more thought-provoking parts of the discussion was framing the challenges of AI adoption through three emerging risks:
First, healthcare has seen waves of technologies that were introduced with excitement but linger long after their usefulness or evidence base fades. AI introduces the risk of algorithms continuing to operate without clear oversight, evaluation, or accountability.
Second, innovation can pull attention and resources away from solutions that are already effective. Several participants raised the concern that the healthcare sector could become overly focused on technological innovation while underinvesting in proven social and system innovations. AI should ideally augment these efforts rather than replace them.
Third, and perhaps most pressing, was the growing gap between the pace of AI implementation and the pace of robust, independent evidence validating safety, efficacy, and equity. Healthcare is fundamentally an evidence-driven field, yet AI tools are increasingly being deployed faster than rigorous evaluation can occur. This creates a difficult tension for healthcare organizations.
The conversation also touched on some provocative, US-focused predictions from Gartner’s recent “2026 Predicts” report.
And the conversations kept returning to more fundamental questions.
Discussion forums like this remind me how valuable open, interdisciplinary dialogue is in moments of rapid change. At the start of the evening, I shared something with the room that ultimately proved true: these conversations about AI adoption, much like the ones we have through the AI & Human-Centred Leadership Fellowship at the University of Toronto with AMS Healthcare, rarely leave us with tidy answers. Instead, they leave us with more questions, and that is exactly what makes them lively, necessary, and timely.