Governing Healthcare AI in the Real World: How Fairness, Transparency, and Human Oversight Can Coexist: A Narrative Review

Bailo, Paolo (first author); Nittari, Giulio; Ricci, Giovanna (last author)
2026-01-01

Abstract

Artificial intelligence (AI) is rapidly shifting from experimental pilots to mainstream clinical infrastructure, redefining how evidence, accountability, and ethics intersect in healthcare. This narrative review integrates insights from peer-reviewed studies and policy frameworks to examine seven cross-cutting aspects: bias and fairness, explainability, safety and quality, privacy and data protection, accountability and liability, human oversight, and procurement and deployment. Findings reveal persistent inequities driven by dataset bias and opaque design; the need for explainability tools that are validated, task-specific, and usable by clinicians; and the centrality of post-market surveillance for sustaining patient safety. Privacy-preserving methods such as federated learning and differential privacy show promise but demand rigorous validation and regulatory coherence. Emerging liability models advocate shared enterprise responsibility, while governance-by-design—embedding transparency, auditability, and equity across the AI lifecycle—appears most effective in balancing innovation with public trust. Ethical, legal, and technical safeguards must evolve together to ensure that AI augments, rather than replaces, clinical judgment and institutional accountability.
2026
SCI
Keywords: artificial intelligence; bioethics; health equity; legal; liability; patient safety
262
Files in this item:
sci-08-00036.pdf (open access)
Type: Editorial version
License: Creative Commons
Size: 384.24 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11581/499505
Citations
  • PMC: ND
  • Scopus: 1
  • ISI (Web of Science): 1