Our analysis of the WHO report on Ethics and Governance of Artificial Intelligence for Health

The World Health Organization tasked Privacy International with reviewing its guidance on Ethics and Governance of Artificial Intelligence for Health. Here is our analysis of the final report.

Key points
  • Privacy International was asked to review the guidance on Ethics and Governance of Artificial Intelligence for Health.
  • The report is thorough overall, and we hope governments and practitioners will take it into consideration.
  • The report attempts to develop a human rights framework for the use of AI in healthcare and acknowledges the risks and limitations of AI.
  • However, we believe there is still more to do to challenge the assumption that AI will necessarily result in positive outcomes.
[Image: Logo of the World Health Organization. Picture by Padrinan on Pixabay.]

Last month, the World Health Organization published its guidance on Ethics and Governance of Artificial Intelligence for Health. Privacy International was one of the organisations tasked with reviewing the report. We want to start by acknowledging that this is a very thorough report that does not shy away from confronting the risks and limitations of the use of AI in healthcare. As is often the case with guidance notes of this kind, its effectiveness will depend on the willingness of governments and public healthcare practitioners to put the guidance provided into practice. Here is a summary of what this report got right, what the main take-aways are, and what still needs to change in the way we talk about AI.

Developing a human rights framework for the use of AI in healthcare

In the introduction, the report makes direct reference to the Universal Declaration of Human Rights, reflecting the WHO’s intention to anchor its guidance within a human rights framework. It also states that “for AI to have a beneficial impact on public health and medicine, ethical considerations and human rights must be placed at the centre of the design, development, and deployment of AI technologies for health,” a statement that reflects the demands made by civil society organisations working at the intersection of technology and human rights.

In our own submission for the UN High Commissioner for Human Rights’ report on the right to privacy and artificial intelligence, we had indeed recommended that the OHCHR “establish the need for a human rights-based approach to all AI applications and describe the necessary measures to achieve it (including human rights by design and human rights impact assessments).”

In keeping with that recommendation, section 4 of the WHO report is dedicated to “Laws, policies and principles that apply to artificial intelligence for health” and opens with a sub-section on artificial intelligence and human rights. The section highlights the different human rights covenants, conventions and treaties that should guide the application of AI technologies in the health sector, and establishes them as a baseline for the protection and promotion of human rights.

In a separate section, the report addresses the ethical considerations around the use of AI in health, once again focusing on the impact on human rights by exploring questions of autonomy, protection of populations from harm, and inclusiveness and equity.

Recognising the risks and limitations of AI

Another strong point of the report is its detailed analysis of the risks and limitations of AI. Chief among the risks of AI technologies employed in healthcare is bias, which arises when the datasets used to train AI fail to reflect the real world, and which carries real consequences for health: “In health research, for example, AI could improve human understanding of disease or identify new disease biomarkers (38), although the quality of the data and whether they are representative and unbiased could undermine the results.” (Chapter 3.2, Applications of Artificial Intelligence in health research and drug development)

The report further acknowledges the nature of these biases in chapter 6.6, “Bias and discrimination associated with artificial intelligence”: they tend to exclude “girls and women, ethnic minorities, elderly people, rural communities and disadvantaged groups”, creating or exacerbating underlying discriminatory practices.

The question of the digital divide, and how those who are better connected will end up disproportionately benefitting from AI in health, is also raised in the context of smart cities and their use of AI for health improvement (Chapter 3.4, Applications of Artificial Intelligence in public health and public health surveillance): “AI tools can be used to identify bacterial contamination in water treatment plants, simplify detection and lower the costs. […] One concern with such use of AI is whether it is provided equitably or if such technologies are used only on behalf of wealthier populations and regions that have the relevant infrastructure for its use.”

The report acknowledges that AI technologies could broaden and deepen existing risks, and highlights concerns around systematic data collection and data exploitation, as large datasets currently remain necessary to “train AI.” It also mentions the risk of micro-targeting, citing the work Privacy International has conducted on the issue: “AI can be used for health promotion or to identify target populations or locations with “high-risk” behaviour and populations that would benefit from health communication and messaging (micro-targeting) […] Micro-targeting can also, however, raise concern, such as that with respect to commercial and political advertising, including the opaqueness of processes that facilitate micro-targeting. Furthermore, users who receive such messages may have no explanation or indication of why they have been targeted. Micro-targeting also undermines a population’s equal access to information, can affect public debate and can facilitate exclusion or discrimination if it is used improperly by the public or private sector.”

The report also addresses the risk posed by tech companies reshaping the health sector. In the chapter “Challenges in commercialization of artificial intelligence for health care,” it states: “An additional concern is the growing power that some companies may exert over the development, deployment and use of AI for health (including drug development) and the extent to which corporations exert power and influence over individuals and governments and over both AI technology and the health-care market.”

For us, the role of private companies in delivering AI technologies for public sector use, including healthcare, is a key issue whose importance cannot be overstated. Because governments increasingly rely on AI applications for the delivery of a wide array of public services, PI believes that specific attention should be paid to the legislative framework governing public procurement of AI technologies and to the safeguards to be put in place when contracting public services to private companies employing AI technologies. In our research on public-private surveillance partnerships, PI has identified some common concerns, including but not limited to: lack of transparency and accountability in procurement processes; failure to conduct due diligence assessments; growing dependency on technology designed and/or managed by private companies, with loss of control over the AI applications themselves (to modify, update, fix vulnerabilities, etc.); over-reliance on the technical expertise of the private company; and the risk of vendor lock-in. In many cases, the private company supplies, builds, operates and maintains the AI system it deploys, without public authorities having sufficient knowledge of the system or effective oversight over it. The lack of an adequate legal framework is often compounded by limited enforcement safeguards provided for in contracts, resulting in limited or no avenues for redress.

The same chapter of the WHO report also tackles the risk of monopolies, citing again the work Privacy International has conducted on the issue. “Monopoly power can concentrate decision-making in the hands of a few individuals and companies, which can act as gatekeepers of certain products and services and reduce competition, which could eventually translate into higher prices for goods and services, less consumer protection or less innovation”.


Challenging how we still talk about AI

While the report warns against “techno-optimism” and, as we saw in the previous section, thoroughly acknowledges the risks of deploying AI in healthcare without appropriate safeguards, it does not challenge the assumption that the use of AI in healthcare will lead to more efficient healthcare systems, despite there being as yet little evidence to back up this assumption.

The effectiveness and relevance of AI technologies in healthcare systems need to be carefully reviewed and AI applications need to be designed in ways that respect and protect human rights from the outset. As we have argued elsewhere, there is a real risk that the use of AI technologies by states or corporations will have a negative impact on human rights, including the right to privacy. But that's only the starting point.


The impact of AI goes beyond the right to privacy and, in the healthcare context, can result in tangible harms to our dignity and our equal access to healthcare. This is why each AI application in healthcare should undergo heightened scrutiny and, at a minimum, be supported by a human rights impact assessment. Where the human rights concerns are too great, regulators should be prepared to ban certain AI applications.

Only time will tell how this WHO report will contribute to shaping the way AI technologies are rolled out in the healthcare sector across countries. Our experience suggests that ethical guidance does not replace the need for adequate legislation, supported by effective enforcement of human rights safeguards, to address the risks that AI technologies pose to human rights.

You can access the WHO report here.