
Who Governs the Algorithm? Integrating AI Governance into Australia's Clinical Governance Framework

Posted by Nicole Jahn on March 26, 2026

Artificial intelligence (AI) is already in Australian hospitals - reading medical images, flagging patients at risk of deterioration or readmission, summarising clinical notes, and supporting clinical decisions. Alongside these opportunities sit significant risks: algorithmic bias, data drift, patient privacy and security concerns, lack of explainability, and unclear accountability when adverse outcomes occur. The question is whether Australian healthcare organisations have the governance frameworks required to manage these risks.

AI fails differently

Clinical governance misses something fundamental about how AI systems fail. A misconfigured EMR produces an obvious error. A drug interaction alert that fires incorrectly gets noticed by clinicians and reported. AI systems, by contrast, can fail silently. When input data diverges from training data - through shifts in patient demographics, evolving clinical practice, or unannounced vendor updates - model performance can degrade without any visible sign to the clinician. This phenomenon, known as data drift, produces no error message, no alert, no reportable incident and no escalation. It just quietly gets worse. In healthcare, quietly getting worse has consequences for patients.

The Epic Sepsis Model, deployed across hundreds of US hospitals, had to be decommissioned at one institution during the COVID-19 pandemic because demographic shifts had fundamentally altered the relationship between fevers and bacterial sepsis. The model degraded without triggering any of the oversight mechanisms that would have caught a conventional clinical error (Wong et al., 2021).
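To make the drift problem concrete, the sketch below shows one common way to quantify it: the Population Stability Index (PSI), which compares the distribution of a model input (or output score) today against the distribution the model was validated on. The simulated data, variable names, and numeric thresholds are all illustrative assumptions, not drawn from ISO/IEC 42001 or any specific product.

# A minimal sketch of data drift detection using the Population Stability
# Index (PSI). All data and thresholds below are illustrative assumptions.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare the current distribution of a feature against the
    distribution the model was validated on, using shared bin edges."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; a small epsilon avoids log(0).
    eps = 1e-6
    expected = expected / expected.sum() + eps
    actual = actual / actual.sum() + eps
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(42)
validated_ages = rng.normal(62, 12, 5_000)  # patient ages at validation
current_ages = rng.normal(48, 15, 5_000)    # ages this quarter (shifted)

psi = population_stability_index(validated_ages, current_ages)
# Commonly cited rules of thumb: PSI < 0.1 stable, 0.1-0.25 investigate,
# > 0.25 significant shift - escalate to the named clinical owner.
if psi > 0.25:
    print(f"PSI={psi:.2f}: significant drift; trigger clinical review")

A check like this runs routinely in the background; the point is that without one, nothing in a conventional incident-reporting pathway would ever surface the shift.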

You cannot govern what you cannot see

Before clinical governance teams can manage AI risks, they need to know what AI systems are in use in their organisation. Few Australian health services can answer that question with confidence.

AI tools can enter organisations through multiple pathways that bypass traditional governance processes. Some arrive as IT investments approved through procurement but without clinical review. Others are adopted at ward or department level by clinicians who find a useful tool and start using it. Many are quietly embedded in existing vendor platforms through feature updates that no one in clinical governance approved or was even notified about. Cloud-based systems have made this easier than ever, as there is no longer a dependency on on-premises infrastructure to act as a governance checkpoint.

The result is system invisibility. Governing bodies and boards cannot receive reports on AI tools they do not know exist. They cannot assess the safety of systems that were never submitted for review. Named clinical owners cannot be assigned to tools that were never formally deployed. When something goes wrong, accountability may be unclear, sitting between vendors, IT teams, and individual clinicians, often with no single person who owns the clinical consequence.

This is not a technology problem - it is a clinical governance problem, and it is likely already occurring in organisations that are otherwise meeting their NSQHS accreditation obligations comfortably.

We have the foundation, but it stops short

Australia is better placed than most countries to respond to AI clinical risks. The National Model Clinical Governance Framework, built on the NSQHS Standards, is one of the most mature clinical accountability structures in the world. Its requirements for governing body accountability, safety culture, consumer partnership, and continuous improvement are the right foundation for AI governance.

The limitation is that the framework was designed for human-delivered care supported by technologies with stable risk profiles. It assumes that problems get reported, investigations follow, and the improvement cycle runs on what gets captured. AI's failure modes can bypass this model entirely.

ISO/IEC 42001, the international standard for AI Management Systems, provides the complementary framework that fills this gap. It requires governance across the full AI lifecycle, accountability, defined performance thresholds, drift detection, and structured reporting upward to management. For organisations already accredited against the NSQHS Standards, most of the governance infrastructure already exists. The task is integration, not reinvention.

What clinical governance teams can do now

None of this requires ISO/IEC 42001 certification. The argument at the heart of my paper is that Australia's clinical governance framework and ISO/IEC 42001 are designed to work together. ISO/IEC 42001 governs the AI system: it asks whether the system is responsibly built, validated, and managed. Clinical governance governs the use of AI in patient care: it asks whether the system is safe and appropriate for the clinical context in which it is deployed. Neither framework alone is sufficient.

Clinical governance does not need to develop deep technical AI expertise, but it does need to own every component that carries clinical consequence.

For health service organisations already accredited against the NSQHS Standards, accountability structures, risk management processes, incident reporting systems, and consumer partnership obligations are already established. What is missing is the deliberate extension of those structures into the AI domain, and the specific mechanisms that AI's failure modes require. The following priority actions are a matter of clinical governance committees deciding to act within the authority they already have, using ISO/IEC 42001 as the reference architecture to do it.

Know what you have. You cannot govern AI systems that are not visible to the organisation. ISO/IEC 42001 requires organisations to define the scope of their AI management system: to identify the AI systems they develop, provide, or use, and the contexts in which they operate. Clinical governance committees should apply exactly this logic. Some health service organisations cannot answer with confidence what AI systems are in operation, how they were procured, or what governance oversight they received at deployment. That invisibility is where the risk increases. A clinical governance-led inventory of every AI tool with clinical touchpoints is a key patient safety exercise. Where tools cannot meet basic scrutiny, the committee should have the authority to suspend their use pending review.
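A register does not need sophisticated tooling; it can be as simple as one structured record per tool. The sketch below shows a hypothetical minimal schema - every field name, tier label, and example value is an illustrative assumption, not a format prescribed by ISO/IEC 42001 or the NSQHS Standards.

# A hypothetical sketch of a minimal AI inventory record. Field names
# and tier labels are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskTier(Enum):
    ADMINISTRATIVE = "administrative"   # no patient care touchpoints
    CLINICAL_SUPPORT = "clinical_support"
    DIAGNOSTIC = "diagnostic"           # directly informs clinical decisions

@dataclass
class AIInventoryEntry:
    system_name: str
    vendor: str
    clinical_owner: str                 # a named individual, not a team
    risk_tier: RiskTier
    approved_scope: str                 # populations and decisions covered
    deployment_date: date
    last_validation: date
    known_exclusions: list[str] = field(default_factory=list)

# Example entry (all values are placeholders):
entry = AIInventoryEntry(
    system_name="Deterioration risk score",
    vendor="ExampleVendor",
    clinical_owner="Dr A. Example",
    risk_tier=RiskTier.DIAGNOSTIC,
    approved_scope="Adult inpatients, general wards",
    deployment_date=date(2025, 3, 1),
    last_validation=date(2025, 9, 1),
    known_exclusions=["paediatrics", "ICU"],
)

Even a spreadsheet with these columns moves an organisation from invisibility to a governable baseline; the structure matters more than the technology holding it.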

Establish a risk-based approval pathway. Once you know what you have, you need a governed process for what comes next. ISO/IEC 42001 Clause 6.1 requires organisations to assess AI-related risks and determine appropriate controls proportionate to those risks. Clinical governance committees should apply the same logic to AI tool approval. Not every AI system carries the same clinical consequence, and governance intensity should reflect that. An approval pathway that classifies tools according to their potential for patient harm, with oversight requirements proportionate to that risk, gives clinical governance a structured mechanism for making those distinctions. What must be clear is scope. The boundary of clinical governance's remit should be drawn around the clinical use case and consequence, not the technology type. Administrative tools with no patient care touchpoints may sit within enterprise AI governance. Anything that touches the clinical pathway does not. Approval is also not a one-time event. Model updates, changes in clinical context, and changes in the patient population should trigger revalidation.
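As a sketch of how such a pathway might be encoded, the function below maps a tool's clinical touchpoints to a governance tier, a reviewer, and revalidation triggers. The tier names, reviewer roles, and triggers are illustrative assumptions, not requirements drawn from ISO/IEC 42001 Clause 6.1 or the NSQHS Standards.

# A hedged sketch of a risk-tiered approval rule: governance intensity
# scales with potential for patient harm. All labels are assumptions.
def approval_requirements(touches_clinical_pathway: bool,
                          informs_diagnosis_or_treatment: bool,
                          acts_autonomously: bool) -> dict:
    if not touches_clinical_pathway:
        # Administrative tools may sit with enterprise AI governance.
        return {"tier": "enterprise",
                "reviewer": "enterprise AI governance",
                "revalidate_on": ["model update"]}
    if acts_autonomously or informs_diagnosis_or_treatment:
        return {"tier": "high",
                "reviewer": "clinical governance committee",
                "revalidate_on": ["model update",
                                  "new patient population",
                                  "change in clinical context"]}
    return {"tier": "moderate",
            "reviewer": "clinical owner, reporting to committee",
            "revalidate_on": ["model update", "new patient population"]}

# Example: a sepsis alert that informs treatment lands in the high tier.
print(approval_requirements(True, True, False)["tier"])  # -> "high"

The revalidation triggers encode the point made above: approval is not a one-time event, and the conditions that reopen it should be written down before deployment.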

Every organisation deploying clinical AI needs both layers: an enterprise AI governance function to manage system-level and technical compliance, and clinical governance retaining authority over what is safe and appropriate for patient care. Clinical governance approval should be a prerequisite to enterprise-level sign-off.

Name a clinical owner for every AI system. ISO/IEC 42001 Clause 5 requires top management to define and allocate responsibilities for AI governance across the organisation. In clinical settings this must go further. Every deployed AI tool should have a named clinical owner accountable for its ongoing performance and safety, and that authority must include the power to halt the system where an issue is identified. Without a single person holding this authority, a committee can review a concern in good faith and still act too slowly to prevent harm. This person should be identified before a system goes live, not after an adverse event or a near miss emerges.

Define the scope of each AI tool before deployment. ISO/IEC 42001 Annex A controls A.6.2.4 and A.6.2.5 require verification and validation of AI systems prior to deployment, including representative test data, defined release criteria, and documented deployment plans. These controls frame validation in technical terms; clinical governance must define what validation means in a clinical context. Just as a clinician has a defined scope of practice, an AI tool should have a formally defined and approved scope for clinical use: the clinical decisions it may support, the patient populations for which it has been validated, and the conditions under which it should not be used. Where a tool is used outside this scope, it should be treated as a clinical governance event.
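One way to make an approved scope operational rather than aspirational is to encode it as data and check use against it, as in the sketch below. The schema, decision names, populations, and contexts are all hypothetical placeholders.

# Illustrative sketch: an approved clinical scope encoded as data, with
# out-of-scope use flagged as a governance event. Schema is hypothetical.
APPROVED_SCOPE = {
    "decisions_supported": {"sepsis risk flag"},
    "validated_populations": {"adult", "general_ward"},
    "excluded_contexts": {"paediatric", "maternity", "palliative"},
}

def check_scope(decision: str, population: str, context: str) -> bool:
    in_scope = (decision in APPROVED_SCOPE["decisions_supported"]
                and population in APPROVED_SCOPE["validated_populations"]
                and context not in APPROVED_SCOPE["excluded_contexts"])
    if not in_scope:
        # Out-of-scope use is logged and escalated as a clinical
        # governance event, per the paragraph above.
        print(f"GOVERNANCE EVENT: out-of-scope use "
              f"({decision}, {population}, {context})")
    return in_scope

check_scope("sepsis risk flag", "adult", "paediatric")  # flags an event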

Set performance thresholds prospectively. ISO/IEC 42001 Clause 9.1 requires organisations to determine what needs to be monitored and measured in relation to their AI systems, and how and when results will be analysed and evaluated. This is precisely what clinical governance needs to operationalise. Clinical governance's role here is not to design the monitoring system but to define what matters clinically: which thresholds, if breached, represent a patient safety risk, and at what point deteriorating performance should trigger suspension rather than review.

For every AI tool in clinical use, define what acceptable performance looks like before deployment, how it will be measured, and what will trigger escalation or review. Monitoring should be capable of detecting data drift over time, differential performance across patient subgroups, and clinician override rates as a proxy for loss of clinical confidence.  
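A minimal sketch of what prospectively defined thresholds might look like in practice follows. Every metric name and numeric value is an illustrative assumption; real thresholds must be set clinically, before deployment, by the governance committee and the named clinical owner.

# Sketch of prospectively defined thresholds and escalation rules.
# All metric names and numbers are illustrative assumptions.
THRESHOLDS = {
    "sensitivity":   {"floor": 0.80, "breach_action": "suspend"},
    "override_rate": {"ceiling": 0.40, "breach_action": "review"},
    "psi_drift":     {"ceiling": 0.25, "breach_action": "review"},
}

def evaluate(metrics: dict) -> list[str]:
    """Return the escalation actions triggered by this reporting period."""
    actions = []
    for name, rule in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            actions.append(f"{name}: not measured - governance gap")
        elif "floor" in rule and value < rule["floor"]:
            actions.append(f"{name}={value:.2f} below floor: "
                           f"{rule['breach_action']}")
        elif "ceiling" in rule and value > rule["ceiling"]:
            actions.append(f"{name}={value:.2f} above ceiling: "
                           f"{rule['breach_action']}")
    return actions

# Example quarter: sensitivity has slipped and clinicians are overriding.
print(evaluate({"sensitivity": 0.74, "override_rate": 0.47,
                "psi_drift": 0.12}))

Note that an unmeasured metric is itself an escalation: if monitoring stops, that is a governance gap, not a clean bill of health.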

ISO/IEC 42001 Annex A.10 requires organisations to actively monitor third-party AI suppliers, including changes to models, datasets, and evaluation parameters, and makes clear that accountability for a vendor-supplied AI system does not rest with the vendor. It rests with the organisation that deployed it. This obligation should be built into vendor contracts: if a model is updated, the organisation must be notified and must review whether existing performance thresholds remain appropriate. Deterioration should be detectable before it causes harm, not after; vendor updates that bypass these checks strip that safeguard away.

Ensure patients are informed. ISO/IEC 42001 controls A.8.1 to A.8.5 require that interested parties have the information needed to understand the risks and impacts of AI systems, including reporting of adverse impacts and incidents. In clinical settings this aligns directly with the consumer partnership obligations that Standard 2 of the NSQHS Standards already imposes. Patients have a right to know when AI is involved in decisions about their care, and to request human review of an AI-influenced decision. Clinicians retain the responsibility to explain AI tools to their patients, but they cannot do that well without organisational support. The obligation to build disclosure into consent processes, develop consumer-facing materials, and ensure clinicians understand the tools they are using sits with the organisation, not with the individual clinician.

Build AI literacy into the workforce. ISO/IEC 42001 Clause 7.2 requires organisations to determine the necessary competence of persons doing work that affects AI performance. In clinical settings, this obligation must extend to those providing governance oversight, not only those operating AI systems directly. Clinical staff cannot recognise or report AI failures they do not understand. Mandatory AI awareness training should cover how AI systems fail, what data drift looks like in a clinical workflow, how to identify automation bias, and what escalation pathways exist. This is key safety training, and it is a prerequisite for everything else discussed in this paper.

The question is who owns this

AI is not waiting for clinical governance systems to catch up. Tools are being procured by clinical units, added to vendor platforms, and deployed through processes that bypass clinical governance oversight entirely. Cloud-based systems receive model updates through vendor-managed release cycles that no one in clinical governance may ever know occurred.

The NSQHS Standards require governing body accountability for patient safety. That accountability cannot be exercised over systems that are not reported, not monitored, and not subject to any clinical governance oversight.

AI is becoming increasingly embedded within clinical workflows, but governance has failed to keep pace. The question is no longer whether clinical governance should assume responsibility for AI governance; it is whether clinical governance will acknowledge its existing accountability for AI in clinical systems and act before preventable harm occurs.

References

Australian Commission on Safety and Quality in Health Care. (2017). National model clinical governance framework. ACSQHC. https://www.safetyandquality.gov.au/sites/default/files/migrated/National-Model-Clinical-Governance-Framework.pdf

CSIRO. (2024). AI trends for health care. https://aehrc.csiro.au/wp-content/uploads/2024/03/AI-Trends-for-Healthcare.pdf

Department of Health, Disability and Ageing. (2025). Safe and responsible artificial intelligence in health care: Legislation and regulation review. Australian Government. https://www.health.gov.au/sites/default/files/2025-07/safe-and-responsible-artificial-intelligence-in-health-care-legislation-and-regulation-review-final-report.pdf

Finlayson, S. G., Subbaswamy, A., Singh, K., Bowers, J., Kupke, A., Zittrain, J., Kohane, I. S., & Saria, S. (2021). The clinician and dataset shift in artificial intelligence. New England Journal of Medicine, 385(3), 283–286. https://doi.org/10.1056/NEJMc2104626

Guan, H., Bates, D. W., & Zhou, L. (2025). Keeping medical AI healthy and trustworthy: A review of detection and correction methods for system degradation. IEEE Transactions on Biomedical Engineering. https://doi.org/10.1109/TBME.2025.3642706

International Organization for Standardization & International Electrotechnical Commission. (2023). Information technology - Artificial intelligence - Management system (AS ISO/IEC 42001:2023). Standards Australia. https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-42001-2023

Rivkin, D. (2025). Governing AI in health care: Six critical imperatives. Ainspire Tech Consulting. https://ainspire.ai/whitepapers/ai-governance-healthcare

Wong, A., Cao, J., Lyons, P. G., Dutta, S., Major, V. J., Ötles, E., et al. (2021). Quantification of sepsis model alerts in 24 US hospitals before and during the COVID-19 pandemic. JAMA Network Open, 4(11), e2135286. https://doi.org/10.1001/jamanetworkopen.2021.35286