
Photo by Nils Huenerfuerst on Unsplash
Read our statement below, which raises questions about the lack of adequate data protection safeguards, about algorithmic bias and discrimination, and about shortcomings in meaningful human control and judgment.
The open informal consultations on lethal autonomous weapons systems (AWS), held pursuant to General Assembly resolution 79/62 at the UN in New York on 12-13 May 2025, examined legal, humanitarian, security, technological, and ethical aspects of these weapons. The consultations aimed to broaden the scope of AWS discussions beyond those held by the Group of Governmental Experts (GGE) at the UN in Geneva. Find out more about what happened during the discussions in Reaching Critical Will's AWS Diplomacy Report, Vol. 2, No. 2.
PI's statement during the informal consultations in New York, 12-13 May 2025
Thank you, Chair and experts,
I speak on behalf of Privacy International. We are a non-governmental organisation that researches and advocates globally against government and corporate abuses of data and technology. We investigate how people's personal data is generated and exploited, and how it can be protected through legal and technological frameworks. Our questions and concerns relate to data safeguards and to the interoperability of these systems across different environments.
As underlined in yesterday's discussions, autonomous weapons systems are not a distinct category of technology or weapon. It is the integration of a data-intensive system, which adds critical functions to a weapon, that renders it autonomous.
Fundamental concerns exist around these technologies, not only in the context of armed conflicts but also in relation to border control, profiling, and surveillance. Among these concerns are data sources, data protection and privacy, machine learning processes, algorithmic biases, and transparency and explainability. In the context of autonomous weapons, these concerns are not merely relevant; they become lethal.
The data-intensive systems that enable autonomy in weapons rely on vast amounts of data, both for their initial development and for their continuous learning phases. There is currently little consideration of which data feeds into these systems and how it is acquired. On the one hand, private actors not directly involved in armed conflicts build data-intensive systems without the consent of data subjects and lend them to militaries. On the other hand, profiling and surveillance tools refined on the data of people in armed conflicts find their way back to civic spaces, where they threaten human rights and democracy. In either scenario, the incentives for mass surveillance, in peacetime and in armed conflict alike, grow stronger.
Our first question is whether adequate data safeguards exist to ensure that the personal and sensitive data of individuals that feeds an autonomous weapons system cannot be exploited by adversaries, protecting populations affected by armed conflict from further vulnerability.
Our second question relates to algorithmic bias and discrimination, on which significant and valid concerns have been raised across different sessions. Bias is embedded from the outset through the training data, but it is not limited to that stage. Emergent bias throughout the deployment and learning phases, as well as bias arising from deployment in contexts the system was not designed to function in, must be taken into consideration. Technologies transferred between civic spaces and battlefields risk exhibiting further biases. Datasets used in the profiling and identification of military targets cannot and should not feed into systems that aim to identify criminal threats, or even political activists, human rights defenders, and marginalized groups. What measures can be taken to limit the transfer of data-intensive technologies to different contexts?
Lastly, we would like to underline that meaningful human control and judgment is a crucial element of these discussions, and rightly so. However, human supervision cannot on its own address the many shortcomings of the technology, especially in the context of armed conflict and especially when the system in question is not transparent. An inability to intervene without understanding how a system works, or excessive trust in highly complex systems, may prevent a human supervisor from forming their own judgment. Transparency and explainability of the technology remain essential.
Thank you.