
As part of my ongoing development work in AI governance, I have been studying AI practitioner material through AI Career Pro. One article that stopped me in my tracks was James Kavanagh’s analysis of the fall of Cruise, a US autonomous vehicle company.
The technical failure involved a vehicle dragging an injured pedestrian after its sensors failed to interpret the situation correctly. What ultimately destroyed the company, however, was not the accident itself but Cruise’s response to it. The Cruise incident highlights an uncomfortable truth. In safety-critical systems, organisations are often judged less by whether failure occurs and more by how transparently and effectively they respond when it does.
Reading this, I found myself thinking about healthcare, where the stakes are equally serious.
As AI adoption accelerates across healthcare, safety governance is becoming a strategic leadership responsibility, not just a technical one.
Diagnostic support, clinical scribes, predictive analytics, automated triage, and decision-support systems are already in use. My interest in AI governance comes from a desire to ensure these systems improve clinical outcomes rather than introduce harm. One lesson stood out clearly: safe deployment of AI in healthcare depends on a genuine safety culture.
In other words, the use of AI will be assessed not only through a clinical safety lens, but also through regulatory, reputational, and ethical governance frameworks. Boards and executive teams will increasingly be accountable for AI system oversight in the same way they are for clinical risk.
A genuine safety culture is one in which ‘just culture’ principles create psychological safety to report errors and near misses without fear, where leadership models openness, and where systems make it easy to surface and learn from what goes wrong. Without this, errors remain invisible and opportunities for learning are lost.
Australia has recognised this through the National Safety and Quality Health Service Standards, and most state health systems, including WA Health, have invested heavily in embedding just culture principles within clinical incident management.
Cultural change alone is not sufficient for AI safety. Healthcare also requires monitoring and reporting systems adapted to the specific nature of AI error. When a clinician makes an error, there is usually a discrete incident that can be identified, reviewed, and learned from. AI failures may present differently. A miscalibrated diagnostic system may not produce a single dramatic failure. Instead, it may create gradual statistical drift in outcomes across thousands of patients over time, something an individual clinician may never directly experience as an identifiable error.
A safety culture for AI therefore needs to extend beyond traditional incident reporting. Continuous post-deployment surveillance of AI tools is required, alongside defined performance thresholds, proactive monitoring for unexpected outcome patterns, and clear pathways for clinicians to raise concerns about system performance. Ideally, this occurs before errors accumulate into visible adverse events.
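To make this concrete, the sketch below shows what continuous post-deployment surveillance might look like in practice. It is a minimal illustration only: the tool, the weekly sensitivity metric, the thresholds, and the data are all hypothetical, and a real programme would use validated statistical process control methods under clinical governance review.

```python
# Minimal sketch of post-deployment performance surveillance for a
# hypothetical diagnostic AI tool. All names, thresholds, and figures
# are illustrative, not drawn from any real system.

from dataclasses import dataclass


@dataclass
class WeeklyBatch:
    week: str
    true_positives: int   # confirmed cases the tool flagged
    false_negatives: int  # confirmed cases the tool missed


def sensitivity(batch: WeeklyBatch) -> float:
    """Proportion of confirmed cases the tool correctly flagged."""
    total = batch.true_positives + batch.false_negatives
    return batch.true_positives / total if total else float("nan")


BASELINE = 0.92          # sensitivity accepted at deployment (illustrative)
ALERT_THRESHOLD = 0.88   # single-week floor that triggers immediate review
DRIFT_WINDOW = 4         # consecutive weeks below baseline counted as drift


def surveil(batches: list[WeeklyBatch]) -> list[str]:
    """Return alerts for single-week breaches and sustained drift."""
    alerts = []
    weeks_below_baseline = 0
    for b in batches:
        s = sensitivity(b)
        if s < ALERT_THRESHOLD:
            alerts.append(f"{b.week}: sensitivity {s:.2f} below alert threshold")
        # Drift is tracked separately: no single week need breach the
        # threshold for a sustained decline to warrant escalation.
        weeks_below_baseline = weeks_below_baseline + 1 if s < BASELINE else 0
        if weeks_below_baseline >= DRIFT_WINDOW:
            alerts.append(
                f"{b.week}: below baseline for {weeks_below_baseline} "
                "consecutive weeks (possible drift)"
            )
    return alerts


if __name__ == "__main__":
    history = [
        WeeklyBatch("2025-W01", 180, 14),  # ~0.93, at baseline
        WeeklyBatch("2025-W02", 175, 18),  # ~0.91
        WeeklyBatch("2025-W03", 170, 21),  # ~0.89
        WeeklyBatch("2025-W04", 168, 24),  # ~0.87, threshold alert
        WeeklyBatch("2025-W05", 169, 22),  # ~0.88, drift alert fires
    ]
    for alert in surveil(history):
        print(alert)
```

The drift check is the point of the sketch: no single week looks alarming on its own, and it is the sustained pattern across weeks that surfaces the problem, mirroring how a miscalibrated system can degrade outcomes without ever producing a discrete reportable incident.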
Australia already has a regulatory foundation for external reporting when problems occur. Clinical AI tools classified as software as a medical device (SaMD) are overseen by the Therapeutic Goods Administration (TGA), including through mandatory adverse event reporting. Adverse events are publicly accessible through the Database of Adverse Event Notifications (DAEN), although I have not found it easy to identify AI-related adverse events within it.
The TGA itself undertakes active signal detection, reviewing and analysing reported events for patterns, and, following consultation in 2024, has identified post-deployment performance monitoring of AI tools as an emerging regulatory gap. Work is underway, but legislative timelines are necessarily long while AI adoption continues to accelerate.
Globally, the AI Incident Database captures publicly known AI incidents across sectors and supports cross-industry learning. What does not exist yet is something between public regulatory reporting such as the TGA DAEN and open incident databases: a confidential, healthcare-specific AI reporting mechanism where clinicians, data scientists, and developers can raise concerns without professional risk and where learning feeds back into system improvement.
Other safety-critical industries have addressed similar challenges. Aviation, for example, established confidential near-miss reporting through the Aviation Safety Reporting System. Key features include independence from the regulator, genuine confidentiality, meaningful immunity for reporters, and active dissemination of lessons learned.
Healthcare already has strong safety foundations. The question is whether organisations and external authorities will develop surveillance, reporting, and accountability systems that match how AI errors actually emerge. There is also a question of transparency: the openness expected of clinicians when reporting errors may increasingly need to apply equally to organisations deploying AI systems.
The lesson from Cruise is that technical capability alone does not ensure safety. Organisations operating safety-critical AI systems will ultimately be judged not only on whether failures occur, but on how openly and responsibly they respond when they do. I would welcome perspectives from others working in Australian health governance, digital health, or safety and quality roles who are thinking about these challenges.
Kavanagh, J. (2025). The need for high-integrity AI governance. AI Career Pro. https://governance.aicareer.pro/blog/why-we-need-high-integrity-ai-management
Australian Commission on Safety and Quality in Health Care. (2021). National Safety and Quality Health Service Standards (2nd ed., updated May 2021). ACSQHC. https://www.safetyandquality.gov.au/standards/nsqhs-standards
Therapeutic Goods Administration. (2025). Artificial intelligence (AI) and medical device software regulation. Australian Government. https://www.tga.gov.au/products/medical-devices/software-and-artificial-intelligence-ai/manufacturing/artificial-intelligence-ai-and-medical-device-software-regulation
Therapeutic Goods Administration. (2025). Database of Adverse Event Notifications (DAEN) — medical devices. Australian Government. https://www.tga.gov.au/safety/adverse-events/database-adverse-event-notifications-daen
Therapeutic Goods Administration. (2025). Clarifying and strengthening the regulation of Artificial Intelligence (AI). Australian Government. https://www.tga.gov.au/resources/consultation/consultation-clarifying-and-strengthening-regulation-artificial-intelligence-ai
Responsible AI Collaborative. (n.d.). AI Incident Database. https://incidentdatabase.ai
National Aeronautics and Space Administration. (n.d.). Aviation Safety Reporting System (ASRS) overview. https://asrs.arc.nasa.gov