![A grid of heart monitors](/sites/default/files/styles/middle_column_cropped_small_1x/public/2024-11/mw-QxGSSfatjRs-unsplash.jpg?itok=KdCj0Gix)
Photo by MW on Unsplash
Harnessing new digital technology to improve people’s health is now commonplace across the world. Countries and international organisations alike are devising digital health strategies and looking to emerging technology to help solve tricky problems within healthcare. At the same time, more and more start-ups and established tech companies are bringing out new, and at times innovative, digital tools aimed at health and wellbeing.
Wellness apps, consumer wearables and medical AI are almost certainly here to stay. But at PI we’re concerned that these digital tools may not always have been designed with people’s privacy, autonomy, and rights in mind. As a result, people may be asked (even if inadvertently) to make unfair sacrifices when seeking to improve how they manage and understand their health.
We’ve written elsewhere about the risks and benefits of digital health in general and why health and privacy must not be traded off against each other. In this piece, we take a closer look at some of the specific digital tools that can infringe people’s privacy, and in particular at how four Big Tech companies (Google, Apple, Microsoft and Amazon) are now involved in healthcare. We then discuss how a tech-first approach to healthcare, as potentially prioritised by the tech industry, could have long-lasting and potentially negative consequences for society as a whole.
In this section, we outline types of digital tools that are deployed in healthcare and identify the effects they can have on both health and privacy. The examples listed below are not meant as an exhaustive list of digital health tools (the number of them is growing all the time!) but are examples of tools where there are potential risks of significant impacts on privacy.
It is now common for patients to see their doctor remotely, over the phone or on videocall, rather than in person. This is sometimes called ‘telemedicine’ or ‘telehealth’. Video calls can also be used between health professionals, for example to link providers in remote areas to specialists in city clinics.
Many people use apps and websites to understand and manage their symptoms and health conditions. Apps are also used by healthcare providers for formal consultations and prescription management. Social media platforms such as Facebook and YouTube are also increasingly relied on as sources of health information.
Remote consultation may result in a need for remote treatment. Drones are now being developed to deliver medicines, blood and other medical supplies.
Electronic medical records help to monitor and track healthcare provided to a patient. They can be used to share information about a patient’s condition, support communication, and check effectiveness of treatment. They are also collected and compiled by private companies to support research.
Some health systems require a national digital ID (or other unique identifier) to access care. Such systems may include biometric markers such as fingerprints or iris scans. They may also result in people’s medical records being combined with other digital ID datasets, such as ones relating to residency, education or tax status.
In addition to medical records, electronic healthcare information systems (HIS) are used to manage healthcare data for wider purposes such as assessing drug effectiveness or population health dynamics. They can support both clinical decisions and hospital business management.
HIS and other medical datasets may also seek to include data about a person’s physical characteristics and genetic makeup. This can be distinguished from other medical data because it is about a person’s inherent characteristics rather than particular traits, symptoms or conditions that they have.
A number of devices now exist that can be worn on the body to track people’s behaviour, symptoms and vital signs. Watches like Fitbit and Apple Watch are perhaps the best known, but you can also get rings, glasses, prosthetics and more that monitor your health and activity.
There are many ways that AI is used in the health sector to augment human judgment. This area looks set for rapid growth as governments and businesses alike seek to harness the power of AI to drive efficiency and innovation in healthcare.
While some digital health tools are designed and delivered by public healthcare providers, many are also dependent on the private sector. Digital health tools are often complex and depend on specialist technical expertise developed within the private sector. While this is not inherently problematic, public-private partnerships do create additional risks such as:
It’s not just new or specialist companies working in health tech either. Big name companies such as Google and Amazon are engaging in healthcare as part of their business and investment strategy in various ways.
But Big Tech’s business often relies on the acquisition and processing of vast amounts of people’s data. The centrality of data and the value of large datasets to both healthcare and to Big Tech creates incentives for them to work together. The large global expenditure on healthcare only magnifies these incentives: the World Bank’s records on health expenditure show that globally over US$1000 per person is spent on healthcare each year (this is around 10% of GDP).
To give a sense of the scale of activity in the healthcare sector by Big Tech firms, we’ve collected just some of their products, services, acquisitions and investments in the explorer below. Through these, we can see Big Tech engaging in the healthcare sector in the following ways:
It might be easy to get excited about the potential of some of the healthcare tech listed above (as well as futuristic-sounding things like robot surgeons and using AI to detect cancer), but it’s also important to remember just how much sensitive and personal information is at stake when using digital health tools. If these tools do not properly respect privacy and treat people as people rather than data sources, they have the potential to do just as much harm as good, if not more.
As well as the harms that can result from data breaches and the loss of confidentiality, anonymity and trust, there is a bigger picture to consider too. Health is central to our humanity as living, breathing beings: how we manage healthcare can be revealing about how we believe we should treat each other. It can even indicate the sort of society we want to live in. Below we consider some of the potential implications of taking the wrong pathway for digital health.
Companies’ involvement in healthcare is driven as much by opportunities for business growth and diversification as by improving public health. The sort of healthcare this delivers may therefore foreground an increased reliance on tech and data collection as the way to meet our everyday needs. How to improve public health and how to use data and technology to deliver improvements and/or savings in healthcare are not necessarily the same challenge. If done wrong, Big Tech’s approach to healthcare may end up putting data rather than people first.
The first priority in healthcare must be meeting everyone’s inalienable right to health. New technology is just one tool for doing so. Fulfilling that right for all means providing a diverse and inclusive approach to healthcare that accounts for wider societal norms and needs, running a variety of facilities and programmes, and involving people in decision-making. Technology must follow rather than lead these activities.
A tech-driven form of healthcare may in fact have negative impacts on people’s health, whether because of the anxiety induced by use of apps and social media or the risk of poor-quality pseudohealth (or ‘wellness’) advice. App providers may prioritise continued interaction with their service rather than improved health (and so disengagement from the app).
Putting tech development first may also prioritise the most profitable aspects of healthcare. The experience of people who are willing to pay to make their healthcare more convenient and comfortable may be enhanced ahead of ensuring that healthcare is available, accessible, acceptable and good quality for all.
An approach to digital health that is dependent on increasingly intrusive data collection will also affect people’s autonomy. People may be forced into uncomfortable decisions: being asked to constantly provide intimate details of what they eat, how they exercise and where they socialise to their healthcare providers via wearables, apps or other tracking technologies. Insurers or other private companies may also seek access to this data.
At the same time, your very actions may be influenced and restricted by algorithmic demands. Healthcare providers might start commenting on how frequently you exercise, or on the chocolate and wine in your (online) supermarket trolley: just one example of how such a future might play out.
Those who already have less freedom over their lifestyles because of existing socioeconomic inequalities will face the hardest choices. Choosing between increased healthcare premiums, lifestyle changes and surrendering more data is hardest for those who already have less agency and a greater fear of oppression. One of three unpalatable options must be taken: a cost to your privacy, your wallet, or your health.
But human rights are not trade-offs or luxuries. They are not only for those who can afford to enjoy them, and you should not have to plug into the datasphere in order to access healthcare.
Because good health is so central to who we are – and looking after others so emblematic of our humanity – how we organise healthcare has big implications for society. The creation of modern welfare states and universal healthcare came alongside the development of human rights as international moral and legal standards after the atrocities of the Second World War. These movements were based on the promotion of solidarity and of dignity: that everyone deserves a good life, and that we can achieve that by supporting one another.
Putting too much reliance on digital tech may mean subordinating ourselves and our future prospects to unaccountable algorithms that cannot know what it is like to have an ill child or to be afraid in a pandemic. The undeniable physical reality of health demonstrates that data alone cannot define us and how we (ought to) live. Human contact matters in healthcare. This is true whether privacy is respected or not.
Staying in shape and eating nutritious food should not become a reaction to a reminder from your health insurance company that next month’s premium is about to go up. There might be pure pleasure and personal gain to be found in these activities instead. Secretive ‘black-box’ decisions and impenetrable algorithms should not intervene in how you relate to your body and how you interact and connect with others and your community.
Technological research and innovation have always been at the forefront of medicine. The drive to design new tools and gain deeper understanding of what makes us ill – and what makes us better – is instinctual. To try to hold back that tide would be futile and counterproductive.
But delegating care of our bodies and our minds to Big Tech’s vast databases and silent algorithms is risky. It particularly endangers those who stand to be discriminated against because of their health status, but it also risks all of us heading into a world where our autonomy is curtailed and our humanity diminished. We must avoid any scenario that differentiates who gets to enjoy which rights and at what price. Human rights belong to us all equally, based on what we share as humans.
Human rights are also indivisible and interdependent. That means that they cannot be traded off, and instead are mutually strengthening. Realising everyone’s right to health can only be done by respecting their right to privacy. Other approaches misunderstand the complexity and diversity of being human.