AI in Healthcare Must Be People-Centred, Say Global Experts. Is Maldives Ready?

As countries around the world turn to artificial intelligence (AI) to transform their health systems, the Maldives too stands at a critical juncture. With increasing interest in digital health technologies, there is growing momentum to integrate AI tools into the country’s health infrastructure. Yet, doing so without clear principles and safeguards could lead to unintended harms, particularly in a system that already faces challenges of scale, equity, and trust.

A new report by the Center for Global Health AI, “The CHAI Responsible AI Guide,” offers a practical framework for countries like the Maldives to move forward thoughtfully. It makes one thing clear: AI should not be viewed simply as a tool to modernise healthcare, but as a deeply political and ethical undertaking. It must be anchored in the lived realities of patients and health workers alike.


To start with, the Maldives must ask a foundational question: What problems are we solving? Rather than deploying AI for its own sake, the country needs to identify health challenges where AI adds real value: improving diagnostics in remote atolls, reducing administrative burdens on overworked doctors, or analysing health trends to strengthen disease prevention.

The report strongly cautions against outsourcing critical thinking to machines. Human oversight should remain at the centre of healthcare decision-making, particularly in sensitive areas such as diagnostics, triaging, and patient engagement. AI tools can be powerful assistants, but they should never replace human judgement. In a small island nation with limited specialist expertise, this balance is especially vital.

Moreover, the guide warns against “pilot project fatigue,” a cycle where governments trial new AI systems without follow-through or scale. For the Maldives, the challenge is to build continuity. AI must be integrated into long-term health strategies, supported by local capacity and clear governance structures.

Public trust will also be central. Communities must be informed about how their data is used, how AI makes decisions, and who is accountable when things go wrong. This means not only technical transparency but also engaging citizens in plain language. In a country where digital literacy varies widely, especially outside the capital, such engagement cannot be an afterthought.

Crucially, the report insists on equity: AI must not deepen existing divides. If AI tools are trained on biased data or only deployed in wealthier regions, they risk reproducing systemic inequalities. For the Maldives, this is a real concern, given disparities in healthcare access between Malé and the outer islands.

To ensure accountability, the Maldives could adopt practices like algorithmic audits, public registers of health AI systems, and clear grievance mechanisms for patients. Partnerships with regional universities, civil society groups, and regulators could help build the legal and institutional frameworks required.

Ultimately, the Maldives should approach AI in healthcare not as a shortcut, but as a long-term investment in building resilient, inclusive, and trustworthy systems. By grounding AI use in ethical principles and real-world needs, the country can avoid the pitfalls of techno-solutionism and instead build a digital health system that truly serves its people.
