How Artificial Intelligence Governance in Healthcare Is Evolving

Conversations are shifting from innovation and adoption to governance and oversight.

As artificial intelligence (AI) becomes more widely adopted in healthcare delivery, conversations are increasingly shifting from digital innovation towards clinical governance, assurance, and board oversight.  

Across Australia and internationally, recent developments signal this shift, with a growing focus on defining how AI is used, monitored, and controlled in real-world settings.


How Conversations Around AI Are Evolving

The Australian Commission on Safety and Quality in Health Care (ACSQHC) recently released its AI transparency statement, signalling a more structured and operational approach to AI adoption and system governance in healthcare. The statement sets out clear expectations around internal AI policy and guidance, transparency, and mandatory human oversight of outputs, alongside alignment with emerging high-risk AI guardrails. This reflects an important shift in focus: from whether AI should be used to how it is safely governed within existing accountability structures.

Internationally, the conversation is also maturing. At HIMSS 2026, discussions increasingly focused on trust, risk, and the role of human oversight in AI-enabled healthcare. This reinforces a growing consensus that safe and effective AI adoption depends on robust governance frameworks, clear accountability, and sustained clinical oversight, rather than on technology alone.

In Singapore, the Ministry of Health (MOH) has highlighted the importance of system coordination in AI governance. As healthcare clusters develop their own AI solutions, there is growing recognition that parallel development can lead to duplication, inefficiency, and inconsistent standards. MOH has supported the need for central coordination to strengthen governance, reduce duplication, and support system-wide alignment. 


The Imperative for Strong Governance Frameworks 

These governance conversations are also increasingly intersecting with regulatory expectations. Legal and regulatory experts anticipate that enforcement activity around AI in healthcare will intensify as tools become more embedded in routine care delivery (Healthcare IT News, 2026). While new AI-specific regulation is possible over time, the immediate trajectory is expected to run through existing frameworks such as clinical governance, fraud and abuse laws, and data privacy obligations. As these developments continue to unfold, strong governance frameworks will be imperative to ensure AI is used safely, transparently, and with clear accountability across clinical and operational decision-making. 


Join Our AI Governance Training Session Today

To support healthcare organisations navigating this evolving landscape, the ACHS Improvement Academy's Artificial Intelligence and its Governance in Healthcare training builds practical capability in the safe, ethical, and effective use of AI in healthcare. This 3-hour interactive session explores key governance areas including approval processes, credentialing of AI tools, patient consent, privacy and data management, adverse event reporting, and emerging compliance frameworks. Register for an upcoming session here.
 

Reference List 

Australian Commission on Safety and Quality in Health Care (ACSQHC). (March 2026). AI transparency statement.

Healthcare IT News. (April 2026). The overarching importance of AI governance and risk.

Healthcare IT News. (April 2026). Health systems should prepare now for increasing enforcement around AI use.

Ministry of Health, Singapore. (March 2026). AI innovation balanced with coordination to maximise healthcare impact.