Accountability
Accountability requires (1) defining the responsibilities of each party in a partnership – identifying obligations, duties and standards – and (2) designing mechanisms that enable third parties to scrutinise and challenge the partnership's consequences.
Accountability in human rights law “refers to the obligation of those in authority to take responsibility for their actions, to answer for them to those affected, and to be subject to some form of enforceable sanction if their conduct or explanation is found wanting” (OHCHR). It is a core principle that allows all other principles to be enforced in practice against a “duty bearer”. In that respect, states should provide ample space for civil society to observe, denounce and challenge uses of technology that violate or risk violating human rights.
In the context of safeguards for the deployment of PPPs, defining responsibility requires identifying the obligations, duties and standards to be imposed on each actor in the relationship – for example, by referencing recognised codes or tailor-made policies. The challenge is acute in PPPs because the state relies on a private actor, which is not equally bound to act in the public interest, to deliver a public function. Accountability mechanisms must therefore be particularly robust and be defined before a PPP is deployed.
Safeguard 11 - Assign human rights responsibilities to companies
When a PPP with potential impact on the enjoyment of human rights is agreed, the state’s obligations to protect against human rights abuses ought to explicitly apply to the company as well. There must be some mechanism to hold the company accountable for any human rights abuses facilitated by its technology and/or services.
States should therefore ensure that the companies they contract under a PPP adopt the provisions of any relevant laws, guidelines, or codes by which the contracting public authority is bound. This should be explicitly provided for in the documentation governing the partnership.
Issue addressed
Public authorities are often bound by specific laws or codes that uphold the state’s human rights obligations, while private companies may not always be bound by these same laws
Example(s) of abuse
- Thomson Reuters sold data to Immigration and Customs Enforcement (ICE), a US agency reported to have separated children from their parents and detained them in horrifying conditions. Thomson Reuters could point only to its “Trust Principles” to demonstrate its commitment not to assist human rights violations, rather than to a clear commitment to comply with human rights law while providing its services.
Safeguard 12 - Control exports of surveillance technologies
States should control exports of surveillance technologies by assessing their potential use in human rights abuses. PPP documentation should append the agreed-upon human rights framework(s) that will govern the partnership and be used throughout its lifecycle to check the human rights compliance of the technology itself, of the state’s use of the technology, and of any follow-up services provided by the company.
Issue addressed
Technologies developed in one country are supplied to another country with differing human rights standards
Example(s) of abuse
- Chinese government working with Chinese surveillance firms to develop facial recognition technology standards considered repressive (e.g. incorporating ethnic tracking) – those same technologies are then exported.
- Telecoms companies providing Lawful Intercept telecommunications infrastructure developed for EU standards to regimes with differing or no human rights standards (e.g. Iran's Web Spying Aided by Western Technology).
Safeguard 13 - Purpose limitation through technology use policy
Once a technology is approved for use, a technology use policy should be developed to govern the public authority’s use of the technology. The policy should define clear boundaries for the purpose and use of the technology, with an exhaustive list of authorised uses and a non-exhaustive list of prohibited uses. This is essential, for example, to comply with the EU GDPR principle of “purpose limitation”, which requires that personal data be “collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes” (Article 5(1)(b)). The principle of purpose limitation ought to be applied more widely, to any use of a technology that affects individuals’ enjoyment of their human rights.
Any use of the technology that does not comply with this policy should undergo a new approval process to determine whether the new use would be lawful and compliant with the other safeguards; the technology use policy should then be amended to reflect the newly agreed use. Any new use that is wholly incompatible with the original purpose of the deployment should be rejected.
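To make the distinction between the exhaustive and non-exhaustive lists concrete, the sketch below shows one way a technology use policy could be expressed in machine-readable form and checked before each use. It is a minimal illustration only: all of the use names, policy contents and function names are hypothetical rather than drawn from any real deployment.

```python
# Minimal illustration of a machine-readable technology use policy.
# All use names and policy contents below are hypothetical.

AUTHORISED_USES = {
    # Exhaustive: any use not listed here is not authorised.
    "verify_identity_at_border_crossing",
    "locate_registered_missing_person",
}

PROHIBITED_USES = {
    # Non-exhaustive: listed uses are always rejected, but absence from this
    # list never implies permission.
    "track_protest_attendance",
    "monitor_journalists_or_lawyers",
}

def review_requested_use(use: str) -> str:
    """Classify a requested use of the technology against the use policy."""
    if use in PROHIBITED_USES:
        return "rejected: explicitly prohibited"
    if use in AUTHORISED_USES:
        return "permitted: explicitly authorised"
    # Anything else is treated as function creep until a new approval process
    # has run and the policy has been amended to include it.
    return "escalate: new approval process required"

if __name__ == "__main__":
    for requested in (
        "verify_identity_at_border_crossing",
        "monitor_mask_wearing_in_public",   # not listed: must be escalated
        "track_protest_attendance",         # listed as prohibited: rejected
    ):
        print(f"{requested} -> {review_requested_use(requested)}")
```

The asymmetry between the two lists does the safeguarding work: because the authorised list is closed, silence means escalation to a fresh approval process rather than implicit permission, while the prohibited list can grow over time without ever implying that unlisted uses are allowed.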
Issue addressed
Function creep – uses of a technology evolve over time without fresh approval and oversight processes
Example(s) of abuse
- CCTV in France during Covid-19: existing CCTV cameras were repurposed during the pandemic to monitor mask wearing and social distancing in public spaces.
Safeguard 14 - Transparency over companies' internal human rights councils
If companies contracted under PPPs wish to rely on internal, private councils to demonstrate their exercise of due diligence, consideration of human rights, and legal compliance, these councils’ deliberations, conclusions and decisions should be made public. The councils should select specific national, regional or international human rights frameworks to adhere to, and disclose which frameworks were chosen for which technologies or deployments. Regular audits assessing the compliance of the company’s products and services with these frameworks should be conducted, and their findings published.
Issue addressed
Companies rely on internal “human rights councils” to demonstrate compliance with human rights frameworks, but these councils are not transparent and their deliberations are bound by confidentiality obligations
Example(s) of abuse
- Palantir created the Palantir Council of Advisors on Privacy and Civil Liberties (PCAP) to help it “navigate the European and broader International data privacy landscapes”. The PCAP is advisory only, its members are compensated for their time, and its discussions are confidential.
- NSO Group previously pledged to engage in consultations with human rights experts on its practices, but neither the identity of the experts nor the content of the advice received was ever made public.
Safeguard 15 - Algorithmic transparency
Algorithms and other decision-making processes deployed as part of a PPP should be open to scrutiny and challenge – that is, they should be auditable (as required by safeguard 21 below). The ability to audit technologies is essential to providing adequate oversight and redress: for example, if a technology has led to a result that is later challenged in court or used as evidence, the proper administration of justice requires the technology to be fully auditable.
As part of the procurement process, the assessment of different systems should compare their levels of discriminatory bias. If discriminatory bias is identified, it should be rectified, and if it cannot be rectified, the technology should not be deployed.
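To illustrate, in hedged terms, what comparing candidate systems’ levels of discriminatory bias could look like during procurement, the sketch below computes one simple group fairness measure, the gap in false positive rates between demographic groups, for two hypothetical systems evaluated on the same test data. The system names, figures and acceptance threshold are invented; a real assessment would rely on several metrics, representative test data and independent audit.

```python
# Illustrative sketch only: comparing two hypothetical candidate systems on one
# simple measure of discriminatory bias, the gap in false positive rates between
# demographic groups. All figures, system names and the threshold are invented.

from typing import Dict, Tuple

# Per-group evaluation results: group -> (false positives, total negatives)
GroupResults = Dict[str, Tuple[int, int]]

candidate_results: Dict[str, GroupResults] = {
    "system_A": {"group_1": (2, 100), "group_2": (3, 100)},
    "system_B": {"group_1": (2, 100), "group_2": (30, 100)},
}

MAX_ACCEPTABLE_GAP = 0.05  # hypothetical procurement threshold

def false_positive_rate_gap(per_group: GroupResults) -> float:
    """Largest difference in false positive rate between any two groups."""
    rates = [fp / negatives for fp, negatives in per_group.values() if negatives > 0]
    return max(rates) - min(rates) if len(rates) > 1 else 0.0

for name, results in candidate_results.items():
    gap = false_positive_rate_gap(results)
    verdict = "within threshold" if gap <= MAX_ACCEPTABLE_GAP else "rectify or do not deploy"
    print(f"{name}: false positive rate gap = {gap:.2f} -> {verdict}")
```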
Issue addressed
Reliance on data-driven technologies has been shown to entrench inequalities, inaccuracies and injustice, without providing the ability to question the decisions they make or lead their users to make
Example(s) of abuse
- Palantir and vaccine distribution: a proprietary algorithm developed by Palantir has been used to distribute Covid-19 vaccines in the US, creating unexplainable disparities and inequalities in the allocation of doses between states.