
Analysis: What does this all mean?
Read our analysis of our research findings, including the limitations of the method, the use of advertising and analytics SDKs, other third-party developer tools, content delivery networks, non-local storage, data minimisation, and the future of privacy.

Limitations
Before our analysis, we note that technical limitations (and the scope of our research) meant we did not test certain features mentioned, such as the Google Fit integrations offered by some apps.
We also note the limitations of our DIAS environment, which only allows us to see web (client-side) interactions rather than server-side interactions; the latter are increasingly common among more advanced platforms that utilise cloud computing (e.g., the server-side operations of OpenAI’s ChatGPT for WomanLog's chatbot). There may be data-sharing activity happening server-side that our DIAS is unable to see and that can only be revealed by the developers themselves.
Analysis: What does this all mean?
Compared to our research in 2019, this time around we did not see instances of users’ personal data about their cycles being sent to Facebook by the period tracking apps. However, in the web traffic of several apps we investigated, we found a significant number of third parties, from advertising software development kits (SDKs) to third-party developer tools.
Advertising and analytics SDKs
Throughout our investigation, we found recurring appearances of third-party URLs in the apps' web traffic. Most apps appeared to integrate some form of advertising SDK (e.g., Google Ads, Facebook Ads) or analytics SDK (e.g., Firebase, AppsFlyer). While the web traffic exchanged with these third parties did not appear to include the user’s period data, it did share significant technical identifiers about the user's device, such as the device model (e.g., ‘redroid_x86_46’, the virtual Android device we used in our experiment) with associated fingerprint data (see Figure 3.22 in the ‘EN’ locale for one of many examples). This automatic collection of device data was disclosed in all the apps' privacy policies, all of which mentioned that device identifiers and IP addresses would be automatically collected through use of the app.
There are two privacy issues we raise in relation to technical device data being automatically collected and shared with third parties: 1) the identifying properties of the device data, and 2) the ways third parties might use this potentially sensitive data.
By definition, aggregate data about a device, like its model or screen dimensions, would not qualify as personal data under data protection laws. However, there are special cases in which device data may be considered personal data, for example if the device becomes uniquely identifiable via fingerprinting, a method that combines several different attributes of a device (e.g., screen resolution, IP address, operating system, device ID) to identify that unique device. While these individual pieces of data are not uniquely identifying on their own, the probability of two different devices sharing the exact same combination of attributes is statistically low, which makes it possible to single out a device. We note that not every app we looked at extracted such a wide web of information about the user's device, but some collected a fair amount of device data that was shared with third parties beyond what was disclosed to users in the privacy policy.
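To illustrate how fingerprinting can turn individually mundane attributes into a stable identifier, here is a minimal Python sketch under our own simplified assumptions; the attribute names and values are illustrative, not drawn from any app's traffic, and real SDKs may combine many more signals.

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Combine individually non-unique device attributes into one
    stable identifier by hashing their sorted key=value pairs."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative attributes only; a real fingerprint may include fonts,
# hardware details, network data and many other signals.
fingerprint = device_fingerprint({
    "model": "redroid_x86_46",   # device model, as seen in our web traffic
    "os": "Android 11",
    "screen": "1080x1920",
    "locale": "EN",
    "ip": "203.0.113.7",         # documentation-range IP, not a real user
})

# The same attribute combination yields the same ID across apps and
# sessions, which is what makes the device traceable.
print(fingerprint)
```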
The third-party service Firebase appeared to collect device-related data like the device model, operating system and country of use for the purposes of app analytics. Analytics platforms typically generate aggregate analytics to give developers insight into their app engagement (e.g., how many Android users downloaded the app, how many app crashes were reported in a specific country, etc.). An interesting scenario arises when we consider that Firebase Analytics is a Google-owned platform. The majority of global Firebase services run on Google infrastructure, which means the data could be processed by Google Cloud or Google data centers, and Google could, in theory, use this data for its own purposes. Firebase's Privacy Policy discloses that 'Firebase Service Data', or personal information (excluding customer data) that ‘Google collects and generates during the provision and administration of the Firebase services’, might be used by non-Firebase Google services, such as to 'understand your use of Firebase and other Google services'. Firebase clarifies that customers (developers) do have the option to control whether their Firebase Service Data may be used by Google, and that the Firebase Service Data that may be used for non-Firebase Google analytics is unlikely to be personally identifiable. Regardless, the conclusion we draw here is that device data collected by Firebase through its period tracking app clients could potentially be processed beyond the app itself, and even beyond the third-party partner Firebase.
Overall, we were pleased to see that third-party SDKs did not appear to be collecting period input data while we used the apps (as far as our DIAS environment was able to see). It is nonetheless worth noting that some of the observed third parties, like Firebase, are involved to some degree in handling troves of user device data unless app developers manually configure these sharing settings.
We also note that some apps' privacy policies were more transparent than others (refer to our Findings for specific apps); several apps offered only generic disclosures that 'third party service providers' might receive device data and other analytics data. And where privacy policies did name third parties, there were often inconsistencies between the third parties that appeared in the web traffic and those disclosed by name (e.g., Google Ads appeared in the web traffic for the Maya and WomanLog apps but was not explicitly named in their privacy policies).
Other third-party developer tools
Several apps also outsourced parts of their app functionality or development to third parties, such as Facebook's Graph API (Maya, Wocute), push-notification integrations with OneSignal (Stardust), onboarding and user authentication via Rownd (Stardust), health integration options with Google Fit (WomanLog) and AI features supported by OpenAI (WomanLog). Some of these apps linked their privacy policy somewhere on the landing page, but users were not explicitly told within the apps themselves that their data (e.g., timestamps of opening the app, device data, onboarding information, chatlogs) would be shared with or processed by the specific third parties we saw in the web traffic. Additionally, only some of these third parties were explicitly named in the privacy policies or on the apps' websites, if at all (the privacy policies for Maya, Period Tracker by GP Apps, Wocute, and Stardust mentioned the use of third parties but did not name the ones we discovered in the web traffic). While such disclosures are not strictly required under GDPR, naming them would certainly be considered good practice.
The outsourcing of certain app functionalities to third parties is concerning, as there is a lack of clarity around what data is being accessed or stored by these third parties. For instance, there is a question of where WomanLog's chatlogs are stored and what the privacy risks might be. Our findings above show that each chatlog in the WomanLog app was assigned a unique 'chatID', which suggests it may be stored under that ID for some form of referencing or for a certain time. This raises potential concerns: if a user mentions in the chat that they've missed their period, might this become incriminating information that an app could hand over to law enforcement as part of its profile on the user? It may become increasingly popular for apps to leverage AI to entice users with a more ‘advanced’ personalisation experience, at the risk of allowing this data to be handled by third-party AI companies. Consequently, it is crucial to scrutinise the privacy ethos within these business partnerships and who holds access to what data.
It's worth noting that outsourcing certain functionalities, like user authentication, to third parties can serve to enhance privacy rather than diminish it. Stardust has asserted that its use of the third party Rownd for user authentication is for anonymising users’ data and de-identifying users. In theory, according to Stardust’s explanation on its website, by separating the user’s personal account log-in data (stored by Rownd) from the user’s period input data (stored by Stardust), Stardust cannot associate the unique user with their input data. As we'll discuss below, this is an interesting new approach to protecting privacy while still collecting and storing user data non-locally.
Nonetheless, we emphasise that involving third-party deployers in any case (even in Stardust’s case, as a privacy-enhancing feature) demands a higher threshold of privacy protection and accountability, because more parties are now involved in processing user data.
Cloud-based content delivery networks (CDNs)
So far in our analysis, we've discussed the direct sharing of data with third party SDKs and deployers. A different category of entities we observed is cloud-based content delivery networks (CDNs), which appeared to facilitate the delivery of user data between the apps and the third parties mentioned above, as well as between apps and their APIs.
Cloud platforms are computing services (e.g., networking, data storage, etc.) that operate over virtual cloud servers. Cloud-based CDNs are functionally 'middle-men' that facilitate Internet requests between client devices and servers (e.g., requests for specific images or icons to display in the app). Cloud-based CDNs like Cloudflare are becoming increasingly common as websites and apps grow their capacity and userbase. Cloud servers are highly scalable, accommodating the growing infrastructure needs of today’s apps (e.g., a large number of users sending requests to an API all at the same time).
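To make the 'middle-man' role concrete, the short sketch below (ours, not any app's code) fetches a CDN-fronted resource and inspects the response headers; Cloudflare-served responses typically identify the edge with a 'server: cloudflare' header and a per-request 'cf-ray' identifier. We use Cloudflare's own homepage as a stand-in, but an endpoint observed in an app's web traffic could be substituted.

```python
import urllib.error
import urllib.request

URL = "https://www.cloudflare.com/"  # any Cloudflare-fronted resource works

try:
    response = urllib.request.urlopen(URL)
    headers = response.headers
except urllib.error.HTTPError as err:
    headers = err.headers  # even error responses carry the CDN's headers

server = headers.get("Server", "")
cf_ray = headers.get("CF-RAY")

# A CDN signature in the headers means the reply came from the CDN's
# edge network, not directly from the origin server behind it.
if "cloudflare" in server.lower() or cf_ray:
    print(f"Served via Cloudflare edge (server={server!r}, cf-ray={cf_ray!r})")
else:
    print(f"No obvious CDN signature (server={server!r})")
```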
For many of the above apps, we saw cloud services either facilitating calls between apps and the third parties they integrate (e.g., Aliyun for Wocute, Cloudflare for Rownd) or relaying user input data first-party to the app's own API (e.g., Cloudflare for Flo).
In its response to our findings, Cloudflare clarified that its 1.1.1.1 public DNS resolver does not retain any personal data about requests made, and that its Oblivious DNS over HTTPS (ODoH) separates IP addresses from queries ‘so that no single entity can see both at the same time’. In theory, according to Cloudflare’s response, this approach ‘allows for legal compliance without detracting from Cloudflare’s or the app’s delivery of services’. On top of this, Cloudflare also stated that it does not access the data being transmitted in its everyday operations, and that it would not be technically feasible for it to turn over content transiting its network in any case.
Most of the apps' privacy policies did not mention that user data would be processed by and pass through cloud-based services. Apart from Flo, none of the apps that used cloud services explicitly named the cloud service, and some did not mention the use of cloud services at all. It is crucial that users be informed of all the entities that process their data at each stage.
Cloudflare's Privacy Policy clarifies that it is the responsibility of the business customers themselves (the apps) to 'establish policies for and ensure compliance with all applicable laws and regulations, including those relating to the collection of personal information, in connection with the use of our Services.' Cloudflare positions itself as merely a 'conduit of information'; thus the period tracking app itself, not Cloudflare, is responsible for what user data is collected and passed through the server. As a man-in-the-middle proxy server, Cloudflare only handles the traffic that its customers elect to send across its network. And we reiterate that Cloudflare states it does not see or access the data being transmitted, nor would it be technically feasible for it to hand this data over, due to its security-by-design architecture.
On the note of law enforcement subpoenas, Flo asserts that any data disclosure requests would be sent to Flo as the data controller, not to Cloudflare as the data processor. Cloudflare also asserted that it has a long history of pushing back on government surveillance orders concerning traffic in its network. It states that:
‘Cloudflare has never provided any government a feed of our customers’ content transiting our network or installed any law enforcement software or equipment anywhere on our network…We would legally challenge a request to take those actions if we were to receive one.’
Note that for the apps tested above whose traffic showed 'nginx' as the server rather than an explicit third-party server, it is possible the apps use cloud services at other stages of their delivery that were not observable in the DIAS environment.
Non-local storage
Above, we saw that many apps (Flo, Maya, Wocute, WomanLog, Stardust) communicated user data (device data and input data) to their respective APIs, even in cases where the user did not create an account on the app. Recall that an API is a connection that allows software components to communicate with each other (e.g., actions in an app communicating with the server that provides functions for the app). In these scenarios, it is possible that the data being communicated over the web to the API is stored somewhere. Thus, all the user input data we've observed above in our findings could be processed and stored somewhere by the developer. This data might be stored simply as part of a user's profile, or it might be used for further purposes, such as, in WomanLog’s case, to generate training data 'for predicting menstruation and fertility', according to their privacy disclosure. (We note that OpenAI asserted that it does not use inputs and outputs from deployments of its API Platform to train its models.)
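As a rough illustration of this pattern, the sketch below constructs the kind of request an app might send to its API; the endpoint, field names and values are hypothetical, not drawn from any specific app's traffic.

```python
import json
import urllib.request

# Hypothetical payload, illustrating the pattern we observed: cycle
# inputs leave the device and are sent to the developer's servers.
payload = {
    "clientKey": "a1b2c3d4e5",          # pseudonymous ID assigned by the app
    "date": "2024-05-01",
    "flow": "medium",
    "symptoms": ["cramps", "headache"],
}

request = urllib.request.Request(
    "https://api.period-app.example/v1/log",  # illustrative URL only
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Actually sending this request (urllib.request.urlopen(request)) would
# place a copy of the entry on an external server, outside the device.
print(request.full_url, request.data)
```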
Other apps (Simple Design's Period Tracker, GP Apps's Period Tracker and Euki) did not appear to process user input data over the web. Both Simple Design and GP Apps have stated that the data users input about their cycle is stored locally on the user's device only, and is thus neither accessible to nor processed by the apps themselves. Euki also clarified that the technical functionalities of the app have been engineered in such a way that the app is not capable of collecting user information.
Although there is nothing inherently wrong with using an API to retain users’ data, it does mean that all cycle input data leaves the mobile device and is stored off-device on an external server overseen by the app. The significance of off-device (non-local) data storage, as opposed to on-device local storage, is that the data is not solely in the hands of the user and their physical device; it is also stored and processed by the app developers. If complying with a law enforcement subpoena means turning over all the data an app has about one of its users, this could hypothetically mean the app turns over the requested cycle data in its possession. If data is stored locally and only on-device, the app has no access to its users' health data and thus has nothing to hand over that law enforcement might be looking for.
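The contrast can be sketched in a few lines; this is a conceptual illustration of local-only storage, not any particular app's implementation.

```python
import json
from pathlib import Path

entry = {"date": "2024-05-01", "flow": "medium"}

# Local-only storage: the entry is written to the device's own storage
# and never transits the network, so the developer holds no copy that
# could be disclosed in response to a subpoena.
store = Path("cycle_log.json")
history = json.loads(store.read_text()) if store.exists() else []
history.append(entry)
store.write_text(json.dumps(history, indent=2))

# A non-local design would instead POST the same entry to the app's API
# (as in the earlier sketch), leaving a server-side copy with the developer.
print(f"{len(history)} entries stored on-device at {store.resolve()}")
```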
While the most privacy-preserving practice may be to keep user data local to the device, some users may consider this a difficult trade-off against the convenience of backing up their data to an account accessible beyond their device in case they lose their phone or switch devices (when data is stored locally on a device, it is not recoverable if the device is stolen or lost). In responding to our findings, Flo claimed that its use of an API rather than device-only storage is because the nature of lower-end Android devices such as those used by individuals in ‘locations where health literacy is low necessitates server-driven features’ for the app to function optimally on those devices. While this is certainly an important consideration, it would be useful to separate, based on the user's choice, functionalities that can be delivered via APIs from functionalities that can be device-based, associating only the latter with the processing of sensitive data. In any case, users may opt to use Flo’s Anonymous Mode rather than the default mode; Anonymous Mode still stores user data on Flo's servers but does not store personal identifiers, such that the logged data would be difficult to connect or link to any user.
Storing personal identifiers is an interesting privacy topic to consider. By contrast, we saw a case (WomanLog) where a user was assigned a unique ID ('clientKey') despite not having created an account. While we inputted period flow and symptom information without an account, our inputs were being synced to this unique ID. On its own, such an ID is not personal information, but it might be considered so if it can be used to trace back to the unique user holding that specific ID. Even when users have not created an account, they might still be identifiable, and their period patterns traceable, if their input data is linkable to a unique identifier.
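The sketch below, with invented data, illustrates why such a pseudonymous ID matters: grouping entries by the assigned key reconstructs a single user's timeline even though no name or email was ever provided.

```python
from collections import defaultdict

# Entries logged without an account, each tagged with the app-assigned
# 'clientKey' (values here are illustrative, not from real traffic).
events = [
    {"clientKey": "a1b2c3d4e5", "date": "2024-03-03", "entry": "period start"},
    {"clientKey": "a1b2c3d4e5", "date": "2024-03-31", "entry": "period start"},
    {"clientKey": "a1b2c3d4e5", "date": "2024-05-10", "entry": "no period logged"},
]

# Grouping by clientKey reconstructs one user's timeline: the pseudonym
# alone is enough to trace a pattern, such as an apparently missed period.
timelines = defaultdict(list)
for event in events:
    timelines[event["clientKey"]].append((event["date"], event["entry"]))

for key, timeline in timelines.items():
    print(key, timeline)
```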
Data minimisation
One of our recommendations from our previous investigation urged menstruation app developers to limit the data they collect on users, as many apps appeared to request superfluous personal data that should not be necessary for the apps' stated purpose (tracking menstruation). While some apps (Flo, Period Tracker by Simple Design, Period Tracker by GP Apps, WomanLog, Wocute) allowed users to use the app without creating an account, others (Maya, Stardust) required users to do so in order to use the app. We've discussed above how users' input data can be exploited by apps that store it whether or not the user creates an account; requiring users to create an account with personal details like emails and names only adds further exploitable personal data to the mix.
Recall that for several apps above, the onboarding stage asked users their purpose for using the app, such as tracking their period or preventing pregnancy. While this information might be collected to deliver the right application dashboard, it also carries risks, as it could be packaged with all the other data and future behaviours logged for that individual, like a missed period, to infer conclusions about them without their knowledge. If law enforcement issues a subpoena to an app asking for user data, it is these packages of profile information that could be handed over, potentially criminalising individuals who have been using the app and are being investigated for accessing an abortion where abortion is criminalised or restricted, violating both their right to privacy and their right to health.
Many of the apps we observed also asked users to provide personal information like their name, date of birth (or year of birth) and height/weight information to get started, with some apps requiring birth year information (Flo, Maya, Stardust). Recall that we saw birth year and other personal information logged in the web traffic and sent to the apps' APIs, and in some cases to third parties involved in the processing (e.g., Rownd processing this onboarding data for Stardust). Flo clarified that it asks for birth year information to verify the age of its users (only those aged 16 or over may use Flo). It states that asking for the birth year, but not the uniquely identifiable full birth date, strikes a ‘balance between age appropriate design and data minimization principles.’
The Future of Privacy
This brings us to the conversation about the future of privacy for period-tracking apps. On the one hand, menstruating individuals deserve technology that can assist their menstruation tracking and health monitoring. On the other hand, the apps delivering this technology may also have their own profit-driven purposes beyond providing health tracking services. Is there a future in which this tension can be resolved?
Above, we observed a few different approaches to privacy from various apps. One method was allowing users to use an app without creating an account, which helps keep them potentially anonymous, as their input data may not be easily linked to a profile. However, other identifying information, like device information and even unique IDs assigned to the user, nonetheless established a form of unique identification that could potentially be traced (not to mention that studies have shown anonymisation is not entirely impenetrable).
Some apps were configured to store user data locally on the device, rather than on servers managed by the developer and/or third parties. However, this local storage option means that user data is not recoverable if the device is lost, because the data is not backed up to any account. This trade-off may not always be preferable for individuals who rely on a period tracking app for consistent monitoring of their sexual and reproductive health.
We also observed a fairly novel technique deployed by Stardust, which outsourced its user authentication functionality to the third party Rownd to store and process users' account data (e.g., name, sign-up email, etc.) that Stardust itself cannot access. With this method (the ‘anonymous sign-in’ approach), Stardust can require users to create an account while handing account management off to a third party, such that the user’s input data cannot be linked to their account, whose identifying data (e.g., name, email) is managed separately by Rownd. Rownd clarified in its response that its platform is engineered to strictly isolate identifiable user data from the sensitive personal data managed by Stardust, so that the risk of linking identifiable data with sensitive personal health information is significantly minimised.
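A conceptual sketch of this separation, under our own simplified assumptions rather than Stardust's or Rownd's actual implementation, might look as follows:

```python
import secrets

# Two deliberately separate stores, standing in for the third-party
# identity service and the app's own database.
identity_store = {}   # held by the authentication provider (e.g., Rownd)
health_store = {}     # held by the app (e.g., Stardust)

def sign_up(name: str, email: str) -> str:
    """Store identifying data with the identity provider and hand the
    app only a random token that reveals nothing about the user."""
    token = secrets.token_hex(16)
    identity_store[token] = {"name": name, "email": email}
    return token

def log_cycle_entry(token: str, entry: dict) -> None:
    # The app sees only the opaque token, never the name or email.
    health_store.setdefault(token, []).append(entry)

token = sign_up("Ada", "ada@example.com")
log_cycle_entry(token, {"date": "2024-05-01", "flow": "light"})

# Without access to identity_store, the app cannot link health_store
# entries to a named user; linking would require combining both stores.
print(health_store)
```

Note that in this simplified model the random token is still the join key between the two stores, so the protection rests on the two parties genuinely keeping their datasets apart.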
A fourth privacy method to raise here is not so much something we've observed directly in the web traffic but something we're monitoring as an emerging trend: open-source apps. Non-profit apps like Euki are open source, as is Flo's Anonymous Mode. Open source means the source code is made available to the public, so security vulnerabilities can be spotted by more people, and users can inspect how their data is handled according to the source code. There is plenty of trial and error in the open-source debate, though, and no settled answer as to whether open source is best for apps managing sensitive health information.
It is difficult to say what the future of privacy holds: whether menstruation apps will turn to more privacy-enhancing features and services with a privacy-forward mission due to public pressure, or whether the payoffs (and ease) of exploiting users’ data are too lucrative for some apps to sacrifice. Some platforms, such as Clue and Period Tracker by GP Apps, have stated that they do not support law enforcement overreach aimed at criminalising abortion, and Cloudflare has stated that it gives its customers’ privacy equal weight when responding to law enforcement requests. These privacy-forward goals should be the standard, not the outlier. The current regulatory landscape does not impose enough accountability and responsibility on apps to pursue better privacy practices; for developers to engineer robust privacy into their apps by default, there must be explicit regulatory standards and safeguards that make privacy attractive to developers, and exceptions permitting the sale and sharing of users’ data with third parties should be reconsidered and narrowed.
Read more
- Our Research Methodology
- Research findings from the apps
- Conclusion and Recommendations
What can I do?
If you want to make sure we can keep doing work like this, you can donate now to help PI keep holding governments and companies to account.