Photo by Hannah Busing on Unsplash
Health data is increasingly seen as a highly valuable commodity to be traded and collected by companies to fuel their analytics or advertising.
Health data represents one of the most valuable types of personal data available to companies, whether for training AI (the AI health care market is estimated to reach a value of around $187bn by 2030), developing digital health technology (such as wearables, estimated to be worth around $76bn by 2030) or informing advertising strategy (pregnancy data, for example, is estimated to be over 200 times more valuable to advertisers than age, gender or location data). There is clearly a strong commercial incentive for companies to collect and use health data, perhaps stronger than for any other type of personal data.
The power and control these companies have over our sensitive data, and the rapid growth of the industry, make taking action against them difficult. This can leave affected data subjects feeling powerless and exposed.
But there is some hope. This article examines an emerging trend of pushback within the US against companies via the use of class actions as a legal route to seek redress for the exploitation of health data. These actions are not only impactful for affected data subjects, but are gaining popularity among commercial law firms and litigation investors, who view the cases as lucrative investment opportunities given the huge sums companies may have to pay when they lose, a portion of which typically goes to the lawyers or investors supporting the affected data subjects. The potentially very high damages available in such litigation may compel companies to change their behaviour or risk spending millions of dollars compensating victims.
Class actions are a type of civil lawsuit where a group of people who claim they have been harmed in the same way or by the same entity sue the perpetrator collectively. For example, where a group of people are all purchasers of a defective product that causes them harm, they may sue the manufacturer. If successful, the manufacturer could be liable to compensate the entire ‘class’ for the harm suffered as a result of the defect. Even where the cost of compensation for an individual is low, when multiplied by the large number of affected individuals in the class, the costs to the defendants can be substantial if they are found in violation of the law.
While class actions can be used in the UK and Europe, they are far less common than in the United States (US), where there is a history of class actions going back over 200 years (see, for example, West v Randall, 29 F. Cas. 718 (R.I. 1820)). The prerequisites for a class action in the US are established in the Federal Rules of Civil Procedure, which essentially state that a class action may only be pursued if:
(1) the class is so numerous that it would be unrealistic to have each person file their own claim or to be joined as an individual claimant in one case;
(2) there are questions of law or fact common to the class;
(3) the people chosen to represent the class have claims that reflect the experience of the rest of the group; and
(4) the representative parties will fairly and adequately protect the interests of the class.
A second divergence between European class actions and those within the US is how the class is typically composed. Within Europe, most class actions are undertaken on an ‘opt-in’ basis. This means that each affected individual must take proactive steps to join the action. By contrast, in the US, class actions are typically conducted on an ‘opt-out’ basis. This means that representatives of a class can bring a claim on their behalf without the mandate (or sometimes even the knowledge) of many members of that class.
As a result, the size of opt-out classes can be significant. They will often be composed of all individuals who accessed a service or purchased a product within a set period of time. Given the potentially huge number of class action claimants, the costs involved for defendants, should they lose, can also be substantial.
So why are class actions relevant to the protection of privacy? The answer lies in data protection. What is special about the emerging trend of US data class action cases is that they have not been used as a form of redress for data breaches. Instead, they target how companies are processing data as part of their day-to-day business activities. Data breaches have a well-established history (in Europe and the US) of compensating victims, but what about when the data is not lost or exfiltrated by malicious actors? What about when the actual collection of the data, sometimes by reputable companies, violates the law or people’s reasonable expectations of how their data is being used? Until recently it has been very difficult to compensate those victims, but as the use of data class actions gains steam, it increasingly appears to be a powerful method that affected individuals can use to hold reckless companies accountable.
Where organisations have unlawfully collected or processed data, there will typically be a common class of affected individuals. Data processing for profit will typically only be commercially valuable to companies when conducted at scale, which naturally creates a ready-made class of prospective claimants with a common harm when something goes awry with that processing.
This is important for two reasons: firstly, it means that affected individuals are provided some level of compensation for the violation of their data protection rights and privacy. Beyond providing a benefit to the individual, this reinforces the idea that privacy and data protection are tangible and the harm is no less important than one caused by a material defect. Secondly, it holds privacy and data protection abusing companies to account in a real way. The size of the damages from these actions means that companies cannot treat the actions as a compliance ‘speeding ticket’ in the same way they may treat regulatory fines (which can be limited and infrequent). The massive payout also means that class actions are frequently well-resourced as large litigation firms and litigation funders view them as lucrative opportunities to make a sizeable amount of money (as the firms and funders will typically take a percentage of the total compensation paid when the claim is successful). This is significant as it means that larger companies with deep pockets are less able to ‘bully’ smaller claimants out of the claim by extending the length of legal process, for example by continuing to appeal the case, to the point that smaller claimants can no longer afford to continue. For these reasons, class actions may be an important tool to combat the ever-increasing expansion of data exploitative practices.
However, class actions and data protection have had an inconsistent and rocky relationship to date. Within the European context, despite having strong data protection laws through the GDPR, the lack of established precedent (and the favouring of opt-in class actions which are much harder to organise) has made it difficult for class actions to be used consistently to remedy data protection wrongs. In the US, despite having a lengthy and established opt-out class action regime, there is no overarching data protection regime like the GDPR on which class action claims can be based.
But that may be changing. Over the last few years in the US, there has been a steady emergence of putative data protection class actions on the basis of new state data protection laws and long-standing privacy (wiretapping) legislation, previously unused in the data class action context.
Given the increasing digitalisation and proliferation of data-oriented services, and growing concern that users have about the misuse of their data, the prevalence of these type of actions is only likely to increase as a means of redress.
Within the US, patient health information (PHI) is protected federally under the Health Insurance Portability and Accountability Act (HIPAA), which establishes rules for the use and disclosure of PHI. HIPAA only applies to ‘covered entities’ such as health care providers and hospitals, and not to independent businesses such as providers of health care apps (unless they share PHI with a covered entity). HIPAA also has no private right of action, meaning that establishing a direct class action claim under HIPAA is not possible. Nevertheless, HIPAA has arguably shaped baseline expectations among US citizens for the privacy of their health data that they do not hold for other types of data. A 2022 American Medical Association survey found that 92% of US patients surveyed believed that their health data should not be available for purchase by corporations, and 94% believed companies collecting or using that data should be held accountable by law.
While US citizens’ views around privacy and the acceptability of sharing health data will be varied, it is clear that there is growing concern around the use of this data outside of the protections of HIPAA. In a Pew Research Center survey, 62% of participants expressed concern about data being shared without the protection of HIPAA.
Yet despite this concern, sharing outside of HIPAA continues, motivated by the profits associated with the data. So, if citizens have no federal health law they can rely on to litigate the misuse of their data, what can they use? The answer is that many claimants are creatively interpreting decades-old federal and state privacy laws originally enacted for purposes like stopping wiretapping, or looking to newly introduced state legislation created to address the public’s growing concerns around the misuse of this data.
As a result of the above, health data class actions have the potential to capitalise on the intersection of: companies’ profit-driven collection of health data; a zeitgeist of distrust around the commercial collection of health data; and the means to litigate the abuses of health data via existing state laws (or newly enacted ones). Below we provide examples of health data class action cases in the US to give some indication of how developed this trend currently is.
The US states of California, Illinois and Washington provide different examples of approaches to the protection of health data and health-related data through class actions.
In California there have been a range of different health data-related class action cases composed of alleged violations of various different laws, most notably:
MG v Therapymatch (original complaint here) is arguably responsible for increasing the perceived potential of health-data-related class actions in California. Along with Shah v Capital One Financial Corp., these decisions have signalled an expansion of the scope of the CCPA’s private right of action to include unauthorised disclosure of data, where it had previously been limited to data breaches.
In 2023, a group of individuals launched a class action alleging that Therapymatch inc., a network association provider for mental health professionals (conducting business under the name ‘Headway’), had been sharing private sensitive medical information collected from users with Google, via Google analytics, without their knowledge or consent.
In order to match individuals to mental health professionals, Headway’s website prompted users to provide personal details (e.g. name, address, health insurance provider, employer) and also to specify their mental health concerns. The claimants allege that Google intercepts this personal information for analytics to improve its own services and to provide marketing (including targeted advertising).
In addition to the harm caused by subverting user expectations when extracting this sensitive data, the use of mental health data for advertising can lead to adverts being targeted at individuals on the basis of their vulnerabilities or habits, leading to predatory practices by advertisers.
Headway have filed multiple motions to dismiss the claim, but these have all been unsuccessful and, at the time of writing, the claim is ongoing. Critically, one of their first motions to dismiss claimed that the allegations in relation to the CCPA did not apply as a transfer of data to Google would not be a data breach. Encouragingly, the court denied the motion to dismiss, on the grounds that: ‘the defendants disclosed the plaintiff’s personal information without his consent due to the business’s failure to maintain reasonable security practices’ and that a data breach was not required for the claim to survive.
The parties have agreed to mediate the case before the end of the year. As it stands, the claim alleges violations of the CIPA, CMIA and the CCPA, despite Headway filing multiple motions to dismiss the CCPA claim. Should the class action claimants be successful, MG v Therapymatch may be an important precedent in significantly expanding the scope of the CCPA as another avenue for class actions to address the unauthorised disclosure of health data to third parties.
Frasco v Flo Health (original complaint here) is a recently concluded class action case that demonstrates another potential emerging avenue for health data class actions. It involved the creative interpretation and application of older privacy statutes, none of which were health data-focused. The case was originally made against four different defendants, but Flo Health, Flurry and Google all settled with the plaintiffs before the conclusion of the trial, leaving Meta as the sole remaining defendant. The claim against Meta was on the basis of the pre-internet legislation, CIPA, alleging that Meta had used an electronic recording device to ‘eavesdrop upon or record’ a confidential communication without the consent of the communicating parties.
Flo Health is a developer of a menstrual cycle and pregnancy tracking app. The app uses SDKs (Software Development Kits), sets of digital tools that allow developers to incorporate features into apps. In this case, the plaintiffs claimed that Meta’s SDK was being used by Flo to utilise Meta’s analytics features in the Flo app, and that in exchange, sensitive menstrual cycle and pregnancy data was transferred to Meta. The claimants argued that the transmission of this data was akin to a confidential communication, and Meta’s collection, eavesdropping. The claimants alleged that Flo improperly shared sensitive health information with the defendants via these SDKs.
It is no surprise that Meta saw value in the collection of users’ reproductive health data in this case. It boils down to the potential for consumer profiling: menstrual cycle data provides information on exercise, diet, sexual preferences and hormone levels. These apps are also typically downloaded by women trying to become pregnant, so the mere act of downloading one begins to build a profile of the user. If you wish to learn more about menstrual cycle apps and some of the risks around reproductive health data being shared, you can read more here.
The trial concluded with a jury finding that Meta had violated CIPA by eavesdropping on the Flo app’s users without their consent. Counsel for the claimants have stated that they believe the collective compensation damages for eligible class members could be in the billions. Meta has stated that they disagree with the verdict and are expected to appeal.
The case is a significant indicator that CIPA can be relied on to challenge large tech companies conducting this type of data collection. It has led to a finalised settlement of almost $60 million, and could additionally lead to a huge compensatory damages payment for the claimants, which may herald a new wave of CIPA-based class action lawsuits.
A separate set of data-protection-related class actions in Illinois has been brought under two pieces of similar legislation: the Illinois Genetic Information Privacy Act (GIPA), enacted in 1998, and the Illinois Biometric Information Privacy Act (BIPA), enacted in 2008. BIPA prohibits private entities from collecting, capturing, purchasing, receiving through trade or otherwise obtaining an individual’s biometric identifier or information unless: (1) the data subject has been informed in writing; (2) the data subject has been told the purpose and length of collection and storage of the data; and (3) the collecting organisation has received written release from the data subject. GIPA similarly prevents the use or disclosure of genetic data without patient consent, unless one of the health-related exemptions stated within the act applies.
Both acts contain a private right of action. Given the close connection between biometric, genetic and health data, many of the same players, themes and circumstances present in health data class actions apply to BIPA and GIPA claims.
The Rivera case began in 2018 (original complaint here), where the claimants alleged that Google had violated BIPA by collecting, storing and using users’ biometric data without their knowledge or consent through the use of a facial recognition tool within Google Photos. The claim was originally dismissed by a federal court for lacking concrete ‘injury in fact’. The dismissal decision was overturned in an appeal to the Illinois Supreme Court, which found that procedural violations could constitute sufficient harm. Ultimately the case was settled in 2022 for USD $100 million.
In 2015, claimants launched a class action (original complaint here) against Meta (at the time, Facebook) alleging that Facebook’s ‘tag suggestions’ feature, which used facial recognition technology, was collecting their biometric data without their knowledge or consent. The claimants argued this violated BIPA. Facebook attempted various approaches to dismiss and appeal the case, but these were all unsuccessful and ultimately the case was settled in 2020 for USD $650 million. At the time, this was considered to be the largest privacy class action lawsuit settlement to date.
While the collection of biometric data has implications for privacy, and such data is certainly classified as sensitive data, it is not necessarily classified as health data. However, data collected for biometric purposes, such as facial images, ocular images or voice recordings, may provide clues about health conditions. Furthermore, many types of biometric data may inherently reveal health conditions (e.g. scars or abrasions on a face), making the data both health and biometric even though it may be collected strictly for biometric identification purposes.
The bottom line is that biometric data should not be collected without individuals’ consent. And where biometric data also reveals health conditions individuals may feel insecure about, there is all the more reason for companies to ensure that individuals are aware of and agree to its collection.
While fewer in number, there have also been several GIPA-related class action lawsuits. These have largely been prompted by the ongoing case of Melvin v Sequencing (original complaint here), in which, in 2023, the court granted the claimant’s motion to certify a class in relation to their complaint that Sequencing LLC had violated GIPA.
Sequencing LLC runs a website where individuals can upload DNA test results to discover information about their ancestry, health and fitness. In addition to uploading the DNA test results, users are also prompted to upload other data including their name, email, address and in some cases age and weight. The claimants allege that Sequencing LLC is disclosing users’ genetic data to unknown third party developers without users’ consent or knowledge in violation of GIPA.
At the time of writing, this case remains ongoing but the certification of the action as a class complaint has already indicated that other similar class actions on the basis of GIPA may be pursued. Similarly to BIPA, while this data may not always strictly be health data, it is easy to see how often sensitive health data may nonetheless be transferred recklessly, particularly where genetics reveals information about genetic disorders or conditions.
Washington State is developing new law specifically targeted at the issue of data exploitation in the digital health space. In 2023, the My Health My Data Act (MHMDA) was enacted. It requires that any legal entity conducting business in Washington or targeting individuals in Washington abide by certain rules around the use and sharing of consumer health data. It also requires that any use of consumer health data only be done with consumers’ consent or by necessity, in a similar fashion to the requirement of lawful bases under the GDPR. The definition of consumer health data is broad, and includes physical, mental, genetic and biometric data; critically, it also includes precise location data that may indicate a consumer’s attempt to acquire or receive health services. Finally, the MHMDA creates a private right of action for consumers to sue companies for violations of the act (see section 11), paving the way for class actions as a result of MHMDA violations.
The Maxwell case (original complaint here) began in February of this year and appears to be the first example of a private right of action under the MHMDA being used. The class action claimed that Amazon had, without authorisation, collected user location data via location-based apps equipped with SDKs. The claim stated that sensitive consumer health data, including biometric information and precise location data, had been transmitted to Amazon for the purposes of targeted advertising without the consent of consumers in direct breach of the MHMDA.
Unfortunately, it’s unclear exactly what Amazon’s legal response to the claim would have been. In April of this year, the claim was consolidated with a group of other tortious claims against Amazon Advertising, and in May, it appears as though Maxwell voluntarily dropped their claim against Amazon. Whether this was the result of a private settlement agreement or some other reason is unclear.
Nevertheless, there remains potential for future class actions to be brought on a similar basis in Washington State. As was evidenced in the Maxwell claim, given the wide scope of the MHMDA, the potential class of claimants could be very large (in the case of Maxwell it prospectively included tens of millions of Americans).
The cases discussed convey the potential for class actions to become a powerful tool for users to seek redress and compensation for the violation of their privacy and misuse of their data. As evidenced by many of the cases, the total compensation figures can be substantial and, if this trend continues, are likely to compel companies to better respect users’ expectations around their data privacy or risk debilitating costs.
As the data class action trend in the US continues to gain steam, it may establish a patchwork of templates for holding companies accountable, connected by the through line of user expectations of health data privacy. However, it may also represent something beyond an effective means of securing compensation. It may represent a demand from the US public for the establishment of new laws, perhaps even a federal health data protection law, capable of preventing companies’ misuse of their health data without dependence on state specific laws or creative interpretations.
We have already seen that in the wake of Washington’s MHMDA, Nevada passed the ‘Nevada Consumer Health Data Privacy Law’ and Connecticut amended its data privacy law to cover consumer health data. These laws may represent the first responses to a wider public call to action for greater protection of health data in response to companies’ bad practices. If so, class actions will remain an invaluable tool for citizens to voice their concerns about these practices, and one that lawmakers would be wise to pay attention to.
Finally, looking beyond the US, there may be lessons that the US’ emerging trend of health data class actions can pass on to other jurisdictions. Continued successes for US-based class action litigants may lay the groundwork for the adoption of collective redress mechanisms for health data exploitation more widely, even in jurisdictions which have been legally resistant to the concept in the past. By examining the successes, limitations, and tactics of these class actions, we can begin to understand how collective legal action might evolve as a tool for privacy protection worldwide.