Digital Health: what does it mean for your rights and freedoms?

Governments have been digitising their health systems and, more broadly, healthcare. We provide an overview of the digital health initiatives that have been rolled out, for what purpose, where and by whom, as well as some of the concerns they raise for the enjoyment of fundamental rights and freedoms.

Long Read

As in many other sectors of governance, governments around the world have decided to embrace innovations in technology and data processing capabilities to develop systems that would enable them to progressively realise social, economic and cultural rights. The use of ICT in the health sector dates back to the 1990s, and the last decade has seen a massive push, alongside the data revolution, with governments digitising their health systems and, more broadly, healthcare.

Whilst it is important to acknowledge that these developments can contribute towards realising economic, social and cultural rights such as the right to health, they raise issues not only of privacy, security and data protection, but also of dignity, non-discrimination and equality, all of which are at the core of PI's philosophy. In the evolving data-intensive and exploitative ecosystem, too often governments and industry see new opportunities to exercise power over individuals: opportunities for surveillance, income generation, market domination and control. Without careful consideration of the impact and the risks, the promises which are meant to come with innovation and tech advancements may not be realised, and in many cases they may create more harm than good.

In this piece we outline the main discussions and measures we need to see being systematically adopted to inform decision-making about digital solutions in the health sector, and provide examples of where these were not integrated in decision-making processes and with what consequences.

The context and approach taken thus far by governments is particularly concerning given that few governments are taking steps to ensure that there is a clear, precise and enforceable legal framework in place to effectively regulate digital health programmes and to respect and protect human rights. In particular, under international data protection frameworks, and in some domestic legal systems, data concerning health is awarded additional safeguards, including limitations on the permitted grounds for processing, because of the risks associated with its misuse. However, the threats of exploitation of health data remain rife, and PI has documented several efforts by private companies to obtain and monetise health data, e.g. menstruation apps, Bounty, mental health data, and companies selling diet programmes and generally trying to infiltrate the health sector during the Covid-19 pandemic and well before it. At a time when health data is increasingly attributed commercial value, the digital systems that support and mediate the provision of healthcare should be under particular scrutiny.

As we delve deeper into these digital health systems, important questions arise regarding their usefulness, and the role they play in effectively fulfilling human rights, including the rights to privacy and to health. While some are well-intentioned, many of these systems may fall short of their intended purpose. They often do not even deliver what they promise. And, in the process, they may actually cause harm by exposing individuals, and in particular those in vulnerable positions, to additional risks, many of which governments and their healthcare authorities and implementation partners, including private companies and other non-state actors, are ill-prepared and ill-equipped to tackle effectively.

The Right to health

The right to health, known formally as the right to the enjoyment of the highest attainable standard of physical and mental health, was first articulated internationally in the 1946 Constitution of the World Health Organization (WHO). It was included within Article 25 of the 1948 Universal Declaration of Human Rights as part of the right to an adequate standard of living, and recognised as a human right in the 1966 International Covenant on Economic, Social and Cultural Rights under Article 12. It is also specifically included in the Convention on the Rights of the Child (Article 24), the Convention on the Elimination of All Forms of Discrimination against Women (Article 12) and the Convention on the Rights of Persons with Disabilities (Article 25).

These and other international instruments provide for an array of obligations on Member States to take steps to progressively realize the right to health, along with other economic, social and cultural rights, including obligations: i) to respect, i.e. refrain from interfering directly or indirectly with the right to health; ii) to protect, i.e. prevent third parties from interfering with the right to health; and iii) to fulfil, i.e. adopt appropriate legislative, administrative, budgetary, judicial, promotional and other measures to fully realize the right to health.

The right to health has a number of principles and elements which require special clarification in the digital health context, as they provide a framework through which to understand and navigate essential issues around the rights of individuals, and obligations and responsibilities of those providing health services, i.e. Member States and those designing digital health solutions. The following "key principles and elements" were drawn from Fact Sheet 31 on the right to health by the Office of the UN High Commissioner for Human Rights and the World Health Organisation, and the WHO's "Human rights and health" fact sheet.

Key principles and elements

  • Freedoms : such as to be free from "non-consensual medical treatment, such as medical experiments and research or forced sterilization, and to be free from torture and other cruel, inhuman or degrading treatment or punishment."
  • Entitlements : equality of opportunity for everyone to enjoy the highest attainable standard of health, equal and timely access, provisions of information, and participation of the population in health-related decision-making processes.
  • Equality and Non-discrimination : Health services, goods and facilities must be provided to all without any discrimination.
  • Available, accessible, acceptable and good quality : provision must be in sufficient quantity, be physically accessible, and ensure affordability of access. This principle is broader than the provision of care, and includes accessibility of information, which should be medically and culturally acceptable, and "must be scientifically and medically appropriate and of good quality", as well as efficient.
  • Participation and inclusion : The population(s) affected must be able to participate in health-related decision-making processes.
  • Accountability : Member States must be able to demonstrate the measures that they are taking to comply with their obligations, which includes establishing mechanisms for reporting and monitoring progress which must be accessible, transparent and effective, and that rights-holders are able to claim their rights.

A rights-based approach to providing access to healthcare

The right to health provides for various elements which must be in place as States comply with their obligations of availability, accessibility, acceptability and quality. As with other social and economic rights, States must fulfil these obligations in accordance with the principles of equality and progressive realisation. Data exploitation can lead both to discrimination and to regression in the provision of these rights.

As the World Health Organisation (WHO) has outlined in its Global Strategy on Digital Health (2020-2025), digital health should be developed according to a set of principles including "transparency, accessibility, scalability, replicability, interoperability, privacy, security and confidentiality." The strategy clearly acknowledges the potential risks for people, their data, and their enjoyment of fundamental rights, and calls for strong legal and regulatory bases to protect "privacy, confidentiality, integrity and availability of data and the processing of personal health data". The WHO's strategy indicates its commitment to incorporate lessons learned and mitigate ethical, legal and governance challenges "including data privacy and sharing and ensuring safety and protection of individuals within the digital health environment." These principles and commitments must inform decisions made around the deployment of digital health initiatives.

Even in times of emergency these remain central pillars of any decision-making. The WHO articulates this within its policy on data sharing in times of public health emergencies, as does the UN Committee on Economic, Social and Cultural Rights. Furthermore, data protection laws around the world often identify health data as sensitive personal data which must be subject to additional protections and safeguards, including legal grounds to collect, process and share it.

These principles were reiterated during the Covid-19 pandemic, with the Director-General of the World Health Organization, the UN High Commissioner for Human Rights and the Council of Europe Commissioner for Human Rights all calling on countries to respect human rights principles when fighting Covid-19.

Other rights implicated

In addition to the right to health, a rights-based approach to guaranteeing the right to the enjoyment of the highest attainable standard of physical and mental health requires careful consideration of other fundamental rights including, but not limited to, the right to non-discrimination and the right to privacy.

Bolstered by technological advances, the processing of personal data creates new opportunities to profile, target and manipulate; and data-exploitative practices have seeped into the provision of healthcare services. Privacy International has documented the emergence of these tactics in the reproductive health sector, with the proliferation of data-intensive health applications and data-sharing practices. Opaque automated or data-based decisions raise unique concerns in instances where life-changing decisions are made, not just at a certain point in time but throughout the process, from deciding treatment to monitoring of that treatment over time.


Right to privacy: Privacy is a fundamental right, essential to autonomy and the protection of human dignity, serving as the foundation upon which many other human rights are built. Privacy enables us to create barriers and manage boundaries to protect ourselves from unwarranted interference in our lives, which allows us to negotiate who we are and how we want to interact with the world around us. Privacy helps us establish boundaries to limit who has access to our bodies, places and things, as well as our communications and our information. Any interference with the right to privacy must be in accordance with the law and necessary and proportionate to the aim pursued. Find out more about why privacy matters.

Right to non-discrimination: Health data can be linked to the types of discrimination addressed in human rights instruments which, together with constitutional protections, enshrine the right to non-discrimination on the basis of gender, ethnicity, race and legal status, amongst others. Other dimensions of non-discrimination include ensuring equal access and preventing bias.

What do the regulation and governance of data look like in the health sector?

At the simplest level, data is processed in the interaction between a health provider and patients, serving to inform decisions about care and treatment and to ensure continuity of care. At the most complex, we see a myriad of institutions sharing and generating information on patients and medical staff in order to manage the provision of healthcare, from managing the supply chain to monitoring compliance with health and medical protocols to interacting with patients. For instance, one of the big applications of AI in healthcare is reading X-rays and drawing conclusions from them, so even undergoing an X-ray is about generating data. Medical research is another significant driver of data processing in the sector.

ICT in healthcare systems is not new, and related programmes have existed since the early 1990s. But what has changed over the last decade is the advancement in technology and in data processing and exploitation capabilities, which provide ever-increasing powers to collect, process and gather intelligence. In parallel, private sector entities are increasingly providing or mediating access to what were traditionally understood to be public services, such as healthcare. Important questions arise in relation to the implications for rights, the risks, and the safeguards to be adopted.

At the different levels of the health systems, there is a need to consider actors and their obligations, and how to protect the rights of individuals. Given the extraterritorial nature of our digital ecosystem which allows products and services to be provided by an entity in one jurisdiction to another, it is likely that some actors will be subject to different jurisdictions, and so have different and varying degrees of obligations; in some instances, however, they could be subject to none.

Similarly, power imbalances must be carefully addressed in terms of the agency and autonomy of the individual. Some of the groups most affected – those in the most vulnerable and marginalised positions – are often also those most in need of accessing healthcare; there is often no choice in their transaction with the service provider, and this power imbalance must inform the requirements associated with the processing of personal data.

Processing of health data

"Health" data generally enjoys higher levels of legal protection. Across jurisdictions, data protection laws are likely to explicitly recognise the special status of health data, and may categorise it as "sensitive personal data" or "special category of data". This category of data attracts higher safeguards, including limitations on the permitted grounds for processing it.

It is also important to note that the higher protections extend to data which reveals sensitive personal data. Through profiling and the use of proxy information (for example, using someone's purchase history to infer a health condition), it is possible for those processing data to infer, derive and predict sensitive personal data from other non-sensitive personal data.
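To make the mechanics of proxy inference concrete, here is a deliberately simplified, purely illustrative sketch in Python. The product names, weights and threshold are all invented for illustration; real profiling systems rely on far larger feature sets and statistical models rather than a hand-written lookup table, but the principle is the same: non-sensitive inputs are combined into a sensitive inference.

```python
# Purely illustrative: how innocuous purchase history can act as a proxy
# for sensitive health information. All product names and weights are invented.

# Hypothetical weights a profiler might assign to products that correlate
# with a given (fictional) health condition.
PROXY_WEIGHTS = {
    "unscented_lotion": 0.3,
    "prenatal_vitamins": 0.9,
    "ginger_tea": 0.2,
    "folic_acid": 0.8,
}

def inferred_score(purchases):
    """Sum the proxy weights of a customer's purchases into a crude score."""
    return sum(PROXY_WEIGHTS.get(item, 0.0) for item in purchases)

def is_flagged(purchases, threshold=1.0):
    """Infer (guess) that the customer has the sensitive attribute."""
    return inferred_score(purchases) >= threshold

# A customer who never disclosed any health information...
basket = ["unscented_lotion", "prenatal_vitamins", "ginger_tea"]
print(is_flagged(basket))  # ...is nonetheless labelled with a sensitive inference
```

None of the individual data points here would qualify as health data on its own, which is why higher protections need to extend to data that merely reveals sensitive personal data.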

Whilst various national and international governance frameworks are in place to regulate the processing of personal data, including in the health sector, according to one paper a global digital health framework, in the form of comprehensive digital health strategies and other mechanisms, is only at a nascent stage.

Having said that, in recognition of the sensitive nature of health-related data and the information which can be derived from it, important safeguards for the processing of such data have been reaffirmed by a variety of stakeholders.

Digital health

In the late 1990s, we saw the emergence of e-health programmes which were enabled by the rapid advancement in technology and data processing. "E-health" became a term used to refer to "the combined use of electronic communication and information technology in the health sector… the use in the health sector of digital data - transmitted, stored and retrieved electronically - for clinical, educational and administrative purposes, both at the local site and at distance". The World Health Organisation (WHO) defines electronic health (eHealth) as "the use of information and communication technologies for health".

We've seen the use of data and technology across the healthcare sector, from health apps and electronic medical records to smart supply-chain management, drone delivery of medication and automated diagnostics, amongst many others. Digital solutions in the health sector have been portrayed as "a critical solution to challenges and gaps in the delivery of quality health care and essential to achieving the Sustainable Development Goals". The WHO draft Global Strategy on Digital Health 2020 – 2024 presents its vision of digital health technologies "that allow people to manage their health more effectively, improve caregiver-patient communication and monitor the impact of policies on population health."

Types of digital initiatives in the health sector

Alongside digital identity systems, we have seen the deployment of data and technology in the health sector, from telemedicine to the use of AI in a variety of fields, from supply chain management to diagnostics and decision-making on eligibility and the assessment of insurance premiums.

Below we outline the main areas where we have seen the use of data and technology in national health sectors. This is not an exhaustive list. Other digital initiatives include wearables, genomics and molecular surveillance, amongst others, but many of these are yet to be deployed beyond pilot schemes and mainstreamed in the health sector.

Healthcare Information Systems : Across the world, we have seen Healthcare Information Systems (HIS), a broad term generally used to refer to electronic systems designed to manage healthcare data. The drive and justification for promoting their adoption often comes back to the need to improve scientific quality by generating and accumulating standardised data, and to achieve better interoperability and interconnectivity of healthcare data to enable the understanding, prediction, prevention and impact evaluation of healthcare interventions. They can be categorised according to various functions: 1) management of the day-to-day needs of a healthcare institution or system, such as planning and budgeting; 2) clinical support, such as diagnosis and treatment; 3) surveillance and epidemiological information on the patterns and trends of health conditions and programmes; 4) creation of formal publications and other documentation; and 5) additional technical information for a technical task not directly related to clinical support, such as conducting laboratory tests.

Electronic medical records : Electronic medical records (EMRs) are automated systems based on document imaging, or systems which have been developed within a medical practice or community health centre, that use "digitized record[s] to capture and store health information on clients in order to follow-up on their health status and services received". EMRs help to monitor and track continuity-of-care processes, including sharing information with patients, scheduling appointments and delivering appropriate care. They may also be used to support client/patient communication, for example using SMS.

Health systems management : Spilling over from the digital identity sector, we have seen digital identification integrated into health systems management. It has been promoted as a tool to improve access by enabling enrolment in health programmes and tracking health history to provide better sustained care. Such systems often consist of requiring a national digital ID in order to access care, with all subsequent transactions recorded in relation to that ID. If no national ID system is in place, a new digital identification system specifically for accessing healthcare may be created, setting up a unique identifier which serves to track the care provided to that particular individual. Sometimes such systems may include biometric markers such as fingerprints or iris scans. Another area of health systems management, in particular around planning, has seen the use of AI to assist the health sector in undertaking and planning complex logistical tasks, including the management of the medical supply chain, amongst others.

Telemedicine : Telemedicine refers to the use of ICT to improve patient outcomes by increasing access to care and medical information, with applications falling into two types of transactions: i) health professional-to-health professional or ii) health professional-to-patient. In industrialised regions, the majority of telemedicine services focus on diagnosis and clinical management, while in low- and middle-income countries it is used to link healthcare providers with specialists, referral hospitals and tertiary care centres. Over the years, limitations have emerged as a result of poor uptake and scalability, concerns about the lack of regulatory mechanisms to oversee and manage the processing of personal data, and general risks around hardware and software.

Health surveillance : The term "health surveillance" comes from established public health methodology, referring to the ongoing collection and monitoring of health data with a view to the planning, implementation and evaluation of public health practice, to disseminating information, and to monitoring the health issues and behaviours of populations. The push for data-driven health surveillance systems has been going on for many years and has evolved to include the use of new technologies like AI. Some within the health sector, such as those working on the HIV response, have been discussing for many years how best to take a rights-based approach. This is quite an isolated development, not yet seen systematically across the health sector. And even within the HIV sector some challenges remain, with proposals still emerging that would require invasive interferences with people's fundamental rights. In 2015, the Kenyan National Government, through President Uhuru Kenyatta, issued a directive to all County Commissioners and various government Ministries to collect up-to-date data and prepare a report, inter alia, on all school-going children living with the Human Immunodeficiency Virus (HIV) and Acquired Immune Deficiency Syndrome (AIDS), which would entail the processing of biometric data. The directive was challenged before the High Court of Kenya, which found it to be unconstitutional.

Management of and response to public health emergencies : As was seen with the Ebola and MERS outbreaks, digital initiatives, including AI, have come to play a central role in the management of and response to public health emergencies, from location tracking – as seen during the 2014 Ebola crisis in West Africa and the 2015 Middle East Respiratory Syndrome (MERS) outbreak in South Korea – to the use of mobile phone data to predict the evolution of an outbreak and to monitor patterns in people's movements with the aim of tackling the spread. However, as studies have shown, there is limited evidence to suggest that movement or location data proved useful in tackling and predicting the spread of either of those two diseases. We witnessed the same with the Covid-19 pandemic, which saw the development of digital apps for contact tracing and for the monitoring and enforcement of quarantine and social distancing orders. These digital initiatives were, and continue to be, especially prevalent, with varying levels of success and efficiency, and subject to controversy.

Research and monitoring of healthcare systems and records : In the research sector, there has been a strong push for better monitoring and evaluation of healthcare programmes, with proposals emerging which would raise considerable risks for individuals and key populations already at risk. A few years ago, the Kenyan national health authorities developed a plan, funded by the Global Fund to Fight AIDS, TB and Malaria, to conduct a study of HIV and key populations which would entail the processing of biometric data. Biometric identifiers have been proposed for integration into public health surveillance programmes to address duplication and increase accuracy. Health research has also seen the use of AI to search and analyse electronic health records to support biomedical research, quality improvement and the optimisation of clinical care.

Diagnosis and learning : Enabled by all of the above, the vast amounts of information now being generated have come to be used to support diagnostics and learning. Examples include the development of smart surgical video recorders equipped with real-time AI for the automatic anonymisation of sensitive surgical video frames, stored locally and shared for learning, and AI applications enabling the analysis of radiology and medical imaging to support diagnostics, although the latter is still relatively novel.

A general lack of a human rights approach to digital health

Something we have observed, and which has been confirmed by experts from the global public health sector, is that whilst some issues around equality and access (i.e. the digital divide) are discussed with regard to digital health and health in general, there is limited consideration of the right to privacy beyond data protection compliance issues.

What is needed is a comprehensive human rights approach which provides a lens through which to assess developing threats to which individuals are exposed through the proliferation of digital health initiatives. Only by exploring the broader human rights implications from the right to health to non-discrimination and privacy, amongst many others, can we ensure that solutions proposed integrate the necessary safeguards and mitigation strategies to protect people and their rights.

Current shortcomings

The deployment of digital health initiatives has not been accompanied, at either the policy or the operational level, by adequate legal, regulatory and technical protections. As with many digital solutions, the failure to consider these elements leaves beneficiaries of digital health vulnerable to exploitation and exclusion, while rendering the potential of these digital initiatives to improve service delivery sub-optimal in terms of reach, coverage, scale and sustainability.

Many of the issues raised with digital health mirror concerns about digital identity systems, as they too process large amounts of individuals' personal data, including demographic, medical and clinical data. And whilst this data may contribute to the delivery of better care and treatment by informing policy-makers, medical professionals and public health bodies about the needs, priorities and trends in their sectors, safeguards need to be put in place to ensure that this data is only used fairly and for the legitimate purposes for which it was identified.

Some of the shortcomings observed in the deployment of digital health initiatives include the failure to foresee and assess the new concerns emerging with the digitisation of the sector, namely: i) new risks associated with the visibility and tracking of patients, especially when combined with identification systems, where the absence of registration results in patients being denied access to services; ii) multiple sources of data being brought together through data-sharing or database integration, with the risks associated with the vast intelligence which can be gathered as a result and the security challenges arising from multiple points of access; and iii) the failure to understand and respond to the complex infrastructural requirements of such initiatives, including access to secure, reliable internet.


Why the "digital" makes it all that little bit more complicated

PI has been monitoring how healthcare systems and benefit systems around the world are becoming reliant on the collection and processing of vast amounts of personal data.

Access to health and other social and economic rights is often tied to the provision of unique identifiers; and decision-making models are increasingly data-intensive and are often reliant on profiling and automated-decision making.

And unfortunately, some of the more ambitious proposals by companies and governments are reliant on untested expansions to these already problematic practices.

There is no question that technology can help governments tackle some key challenges in the provision of healthcare services. However, before the inception of any technology-assisted initiative, there need to be open, inclusive deliberations as to whether to deploy it in a particular setting or for a particular purpose. Once this first step has been concluded, and the deployment of such technologies is justified, safeguards and due process guarantees need to be put in place in order to identify and mitigate risks. Otherwise, the same programmes that are intended to facilitate access will amplify pre-existing shortcomings and injustice.

As in the sectors of social protection and migration, for example, there are various worrying trends in the digital health sector which highlight why digitising access to and delivery of healthcare raises serious concerns for the protection of people and their rights. Below we outline some of these issues.

Discrimination and exclusion by design

Everyone must have access to healthcare, without any discrimination. This is why equal access is central to the right to health.

Despite this, in some countries there are already discriminatory measures emerging.

The digital divide : Whilst the latest figures are encouraging, with more people having access to the Internet, a recent ITU report highlighted that the gender gap has grown in some regions, including the Middle East, Asia and the Pacific, and Africa. This is one of the reasons why UNDP warns that "relying on digital technologies as a primary system or strategy within the health sector may impact access and availability, and inadvertently exacerbate inequalities, contributing to the digital divide." And this not only applies to high-end technologies, i.e. mobile devices, but also to basic necessities such as electricity. This is particularly an issue in low- and middle-income countries, but it can also affect rural populations in high-income countries. Ongoing challenges of unequal access, because of costs but also the availability of infrastructure, mean that digital initiatives need to continue to exist alongside offline services, as they otherwise risk excluding millions, often those most in need of access to affordable care. ADC in Argentina reported similar concerns that e-health would mostly benefit middle- or high-income sectors in large urban areas, deepening existing inequalities in access to care.

Governments around the world are increasingly making registration in national ID systems mandatory for populations to access social benefits, healthcare services, and other forms of state support. By virtue of their design, these systems inevitably exclude certain population groups from obtaining an ID and hence from accessing essential resources to which they are entitled.

Provision of a national ID as a pre-condition for access : The inability to provide an ID card should never result in a denial of services. We are not alone on this point: the UN Special Rapporteur on Extreme Poverty and Human Rights has previously questioned existing mandatory ID requirements for accessing healthcare services.

However, the reality is that strict ID requirements effectively exclude people from receiving care. In countries like Chile, Uganda, India and Kenya, to name a few, providing an ID card is a pre-condition for accessing any public service. Even in normal circumstances, we see numerous cases of exclusion because many are unable to register and get an ID card. And in times of crisis, these requirements are counter-productive to an effective public health response.

And yet, some governments see digital identity as a solution to facilitate the provision of emergency healthcare and other public services. For example, the Jamaican Government is using Covid-19 as justification to fast-track the creation of a national identification system (NIDS) to help it with its aid and benefits distribution. It will be interesting to see how it proceeds given that in April 2019 the Constitutional Court of Jamaica struck down the country's mandatory biometric National Identification and Registration Act and the National Identification and Registration System (NIDS), ruling that they violated constitutional privacy protections and had to be reviewed before the government could proceed.

Categorise to filter and exclude

Creating a database enables categorisation according to selected criteria, depending on what data is processed, which could include gender, ethnicity, race, nationality or legal status, amongst others. These categories can then serve as a basis for deciding whether access to the service itself is granted or not.

Migrants, asylum seekers and refugees are often the most negatively affected: they may be deterred from accessing healthcare by the fear that coming forward would expose their identities, as well as by worries about costs. Fortunately, these implications have led some countries to suspend such policies. For instance, the Irish government gave assurances to "treat them [migrants] with dignity and with absolute privacy and patient confidentiality, as will their social work system, during this time of emergency." Portugal decided to regularise the status of thousands of persons with pending cases so they could access universal healthcare during the pandemic.

Automated discrimination and bias

The various uses of AI, from diagnostics to eligibility decisions, raise concerns that outcomes seen in sectors such as criminal justice, policing and recruitment may be reproduced in the healthcare context, resulting in bias and discrimination and leading to inaccurate and unequal outcomes and predictions "of health outcomes across race, gender, or socioeconomic status".

Enabling 360 view and tracking

As with digital identity systems, many digital initiatives enable a 360-degree view of an individual and allow their transactions to be tracked at different stages: be it through a unique health identifier, which provides a detailed historical record of every transaction between an individual and a healthcare provider, or through more sophisticated tools such as wearables or applications, which enable tracking not just of one's interactions with a healthcare provider but of an array of other information, including location data and interactions with third parties.

Mission creep: Once it's there it's too tempting

Mission creep occurs when data is used for a purpose other than the one for which it was initially processed and, importantly, which was declared to the individual at the time of collection. Preventing mission creep needs to be built into the design and governance of health systems, and requires both technical and legal measures to stop data from being used for purposes other than those for which it was intended in the first place.
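One such technical measure is purpose binding: each record carries the purposes declared at collection time, and every access must name a purpose, with anything else refused. The sketch below is a minimal, hypothetical illustration of the idea; the class and field names are invented, and a real system would enforce this in its storage and access-control layers rather than in a single class.

```python
# Minimal sketch of technical purpose limitation ("purpose binding").
# All names here (HealthRecord, declared_purposes, etc.) are hypothetical.
from dataclasses import dataclass, field


class PurposeViolation(Exception):
    """Raised when data is requested for an undeclared purpose."""


@dataclass
class HealthRecord:
    patient_id: str
    data: dict
    # Purposes declared to the individual at collection time.
    declared_purposes: frozenset = field(default_factory=frozenset)

    def access(self, purpose: str) -> dict:
        # Technical enforcement: deny any use beyond the declared purposes.
        if purpose not in self.declared_purposes:
            raise PurposeViolation(
                f"'{purpose}' was not declared at collection time")
        return self.data


record = HealthRecord(
    patient_id="p-001",
    data={"hiv_status": "positive"},
    declared_purposes=frozenset({"clinical_care"}),
)

record.access("clinical_care")        # permitted: declared at collection
try:
    record.access("law_enforcement")  # mission creep: blocked
except PurposeViolation as e:
    print("blocked:", e)
```

The design point is that the purpose check travels with the data rather than living in each consuming application, so a new use of an existing database fails by default until the declared purposes (and hence the legal basis) are revisited.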

Some have pointed to heightened concerns that once data is available it is perceived and treated as a free-for-all, open to use for other purposes. Examples of mission creep have emerged around how biometric data collected for digital health purposes could be used for other purposes such as forensics or criminal proceedings. This was very much at the centre of the successful challenge by civil society to prevent the Kenyan government from creating a biometric database of certain persons living with HIV, including school-going children, guardians, and expectant and breastfeeding mothers living with HIV. More recently, civil society successfully countered plans of the Kenyan national health authorities, funded by the Global Fund to Fight AIDS, TB and Malaria, to conduct a study of HIV and key populations which would have required the processing of biometric data. Both initiatives raised concerns that the mere existence of such data would enable the government to target criminalised populations in Kenya.

Data-driven eligibility criteria

Governments around the world are increasingly making registration in national digital ID systems mandatory for populations to access healthcare services as well as social benefits and other forms of state support. By virtue of their design, these systems inevitably exclude certain population groups from obtaining an ID and hence from accessing essential resources to which they are entitled. We have seen this play out in different ways: in Kenya, ID vetting discriminates against ethnic and religious minorities; in Uganda, logistical failures resulting in delays and errors in the National Identity Card system ("Ndaga Muntu") have prevented thousands, in particular women and the elderly, from accessing healthcare. In India, technical exclusion and the ubiquitous linking of the Aadhaar card similarly prevent individuals from accessing basic state support, including food rations. These issues have been replicated and amplified during the pandemic, with some governments introducing the presentation of ID as a pre-requisite to access Covid-19 vaccinations. In India, vaccine appointments are managed through a mobile app which requires linking with Aadhaar, potentially excluding millions from accessing the vaccine.


One area where we have seen new technologies like AI and machine learning being used is digital health. Whilst much of it remains at pilot stage and/or limited in scale, we are increasingly seeing the use of AI for diagnostics, health research and drug development. Other uses of automation include clinical care (especially the identification of individuals at risk, self-management of care and home-based care) as well as health surveillance and preparedness for public health emergencies. Each raises concerns for human rights and access to healthcare. For example, resorting to AI for decisions without room for questioning, where a system is solely automated, inevitably leaves out the human element.

AI has been used to predict pregnancy amongst adolescents in Argentina, an initiative which raised questions about the data used to train the AI, as well as concerns about the agency and autonomy of the individual within this prediction model. Bias found in AI technology has also led to examples of algorithms used for health decisions resulting in less spending on Black patients than on white patients despite the same level of need.

There is one area where significant advancements have been made in terms of scalability: various countries are starting to use AI for health systems management and planning, where it complements personnel in undertaking tasks and supports complex decision-making to identify fraud and waste, assess staffing needs and resourcing, and map trends in patient behaviour, e.g. missed appointments, among others. The use of AI for fraud management is a practice we have already seen in the welfare sector, where automated decision-making is used to assess eligibility in the first instance and then to identify and police those who may be abusing the system. This subjects those requesting assistance to arbitrary and invasive surveillance and monitoring. In some countries, such as the UK, these tactics for fraud detection are being formalised and institutionalised through mechanisms like the National Fraud Initiative.
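The spending-based bias described above can be made concrete with a small simulation. This is a purely illustrative sketch: the two groups, the distributions and the 0.7 "access" factor are invented for the example and are not drawn from any real dataset or study.

```python
# Illustrative simulation: a "risk score" built on healthcare cost,
# a common proxy for health need, reproduces unequal access to care.
# All numbers below are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying health need.
group = rng.integers(0, 2, n)     # 0 = advantaged, 1 = disadvantaged
need = rng.normal(50, 10, n)      # true need, same distribution for both

# Historical spending reflects unequal access: for the same need,
# the disadvantaged group incurs lower recorded costs.
access = np.where(group == 1, 0.7, 1.0)
cost = need * access + rng.normal(0, 2, n)

# An algorithm using cost as a proxy for need selects the top 20%
# of patients for extra care programmes.
threshold = np.quantile(cost, 0.8)
selected = cost >= threshold

rate_advantaged = selected[group == 0].mean()
rate_disadvantaged = selected[group == 1].mean()
# Despite identical need, the disadvantaged group is selected far less often.
print(f"selected: {rate_advantaged:.1%} vs {rate_disadvantaged:.1%}")
```

Because the score is fitted to cost rather than to need, patients whose access barriers suppress their historical spending are systematically ranked as lower risk, which is the mechanism behind the findings on health algorithms cited above.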

Security and integrity concerns: expanding the exposure and attack surface

Technologically complex systems are inherently vulnerable to intrusions and data breaches. There are numerous high-profile examples of digital health systems being breached. If some of the most well-resourced governments (and companies) in the world are unable to protect their most sensitive data sources, it is reasonable to assume that resource-constrained governments and humanitarian agencies will face significant challenges in appropriately securing databases that are 'honey pots' for attackers.

In the case of digital health systems, many concerns are associated with and result from poor security management, particularly at the stage of design and then maintenance.

One common associated risk is the data breach. Data breaches in the health sector are common: the 2019 HIV data leak in Singapore, the leak of over 2.5 million medical records in the United States in 2020, and the leak of 500,000 medical records in France in early 2021, to name a few.

As explained above, health data is of a particularly sensitive nature. Therefore, any data breach affecting medical records is extremely serious from a human rights standpoint. But the negative consequences flowing from a breach can be manifold, diverse and far-reaching. In addition to the privacy harms that necessarily attach to such data leaks, the exposure of medical records can put patients' security and welfare at risk. Civil society has long expressed concern about the potential dangers of poor data security policies, particularly in the context of reproductive health, which in many countries is heavily stigmatised. Those fears were borne out in 2020, when a young rape victim's pregnancy details were leaked in Brazil, prompting anti-abortion protesters to block access to the hospital where she was due to terminate the pregnancy. In January 2019, it was discovered that the HIV-positive status of 14,200 people in Singapore, as well as their identification numbers and contact details, had been leaked online following a breach of the HIV registry managed by the Ministry of Health.

What we have observed is that security is often an afterthought, rooted in a lack of digital literacy and uncertainty as to the risks and potential harms. Ultimately, these difficulties flow from a lack of understanding of how the systems work (hardware and software), the implications of any decisions made, and the data ecosystem in which we operate.

Tech industry and health data exploitation

It is important to note that there are few instances where governments are able to design, deploy and maintain such digital systems themselves. The complexity of these systems and the highly technical know-how required to create them has led to the growth of the 'government-industry complex' that manages and regulates social protection programmes like healthcare. Some of the concerning features of this 'government-industry complex' include:

  • poor governance of social protection policies, including the absence of open, inclusive and transparent decision-making processes;
  • limited transparency and accountability of the systems and infrastructure;
  • access is tied to a rigid national identification system;
  • excessive data collection and processing;
  • data exploitation by default; and
  • multi-purpose and interoperability as the endgame.

Industry not only provides solutions to governments; through the delivery of their own services, companies also feed the broader data exploitation ecosystem. Industry has identified the health sector as fertile ground for innovation, often leading to data exploitation. Companies play different roles: some provide tech "solutions" such as infrastructure, while some of the bigger tech giants such as Google, Microsoft and Amazon are involved at different, complementary levels, from infrastructure to data management, analysis and product development. Add to that the hidden industry of data brokers mining vast amounts of personal data for commercial purposes, and the web of actors interacting with health data becomes ever more intricate, opaque and untraceable.

Why the drive for digital

As noted elsewhere, the health sector is one of many to have embraced the digital revolution. The drivers for this trend vary, but a common feature is pure tech-solutionism, which posits that technology is the solution to the socio-economic and political problems facing our societies.

Below we outline two of the main narratives driving the digital revolution in the health sector.

Facilitate access and enable empowerment

A huge drive behind digitalisation is the approach to technology as a tool for empowerment, whereby individuals would be in control while access is supposedly democratised. While that is possible, unlocking that promise requires building that approach and outcome into the design, deployment and maintenance of any digital solution, and that has not been the case in many sectors, including the health sector.

Across sectors, including the development sector, technology has been hailed as an essential tool to achieve the 2030 Agenda for Sustainable Development, and in particular SDG 3: "Ensure healthy lives and promote well-being for all at all ages".

The World Health Organization (WHO) Global Strategy on Digital Health 2020–2025 emphasised digital technologies as an essential component and an enabler of sustainable health systems and universal health coverage, and strategies from UNDP and other international and national actors have made a similar emphasis.

Some of the arguments assume that shifting to a patient-based approach to care, where individuals manage their own health-related data via a variety of digital tools, including online portals or applications, results in the individual taking responsibility and having control. In practice, however, it is unclear how much this empowers the individual. At what stage of their experience is the individual "empowered", and how is empowerment defined? Is it defined in terms of accessing information, being given the authority to make decisions about the care one receives, or control over how one's data is processed and over decisions made on the basis of that information?

As noted by UNDP, this promise of empowerment can only start to make sense if digital health initiatives are "developed, implemented and monitored in a way that respects, protects and fulfils ethics and human rights."

Efficiency, fraud prevention and saving money

With finite resources being allocated by governments to the management of healthcare (and other social services), demands and pressure on the health sector to provide better quality care are increasing, leading to the accelerated exploration of ways to be more efficient. Within this umbrella of efficiency also comes the obligation to ensure funds are used wisely and are not wasted, whether because of the way the system operates or because of fraud.

Arguments of efficiency have justified the move towards digital solutions in various parts of the health sector, from supply chain management to improved diagnostics and for processing eligibility and delivery of care.

It is important to be clear on whether, and if so where within the healthcare ecosystem, technology actors can deliver on promises of efficiency, and not to treat them all in the same way, with the same expected promises and risks.

Whilst technology can be part of the solution to make aspects of our bureaucratic governance systems more efficient and transparent, the scope of its application and the way in which it has been deployed within this sector raise serious concerns about the outcome.

As noted by the UN Special Rapporteur on extreme poverty and human rights in his 2019 annual report on the digital welfare state: "…the introduction of various new technologies that eliminate the human provider can enhance efficiency and provide other advantages but might not necessarily be satisfactory for individuals who are in situations of particular vulnerability. New technologies often operate on the law of averages, in the interests of majorities and on the basis of predicted outcomes or likelihoods."

The World Health Organisation tasked Privacy International with reviewing their guidance on Ethics and Governance of Artificial Intelligence for Health. Here is our analysis of the final report.

The WHO also noted that while AI can help in certain aspects, such as the timely allocation of resources or supply chain management, it warned in its guidance on the use of AI in health that: "…such technologies, if designed for efficiency of resource use, could compromise human dignity and equitable access to treatment. They could mean that decisions about whether to provide certain costly treatments or operations are based on predicted life span and on estimates of quality-adjusted life years or new metrics based on data that are inherently biased. In some countries in which AI is not used, patients are already triaged to optimize patient flow, and such decisions often affect those who are disadvantaged or powerless, such as the elderly, people of colour and those with genetic defects or disabilities".

Furthermore, arguments for efficiency and fraud prevention are often overstated from the start, and so too are the results. Such conclusions are often drawn and hailed despite the lack of an initial baseline on the level of fraud or wastage, the lack of evaluations as to whether the digitisation and digitalisation of the system actually resulted in better care being provided (and if so, for whom), and the lack of any clear assessment of whether the digital element is what led to the results, or whether other more significant factors were at play.