Export controls and the implications for security research tools


Update:

After an initial discussion with technical and government experts involved in drafting and negotiating the new controls on “intrusion software”, some of our initial questions have been clarified. To read what they had to say, see the update at the end of this piece.

One of the major dangers of imposing export controls on surveillance systems is the risk of overreach. While you want the scope of the controlled systems and the language to be wide enough to catch the targeted product and its variants, you also need the language to be specific and detailed enough to ensure that no other items are inadvertently caught at the same time.

Getting this right is acutely important for security researchers. Export controls can pose a problem for them because it is often difficult to differentiate between legitimate research, products used to test defences, and activities and products used to actually penetrate those defences without consent.

Security researchers need to be able to collaborate with one another across territorial boundaries, and they also need to be able to share their work and problems. The outcome of such research should not be penalized; responsibly disclosing vulnerabilities in hardware and software, for example, or the tools used to discover them, should never become subject to export controls.

Implications

Discussions between Privacy International and export control officials involved in drafting the new controls suggest that it was never the intention of these new controls to catch legitimate security research tools, and that efforts have been made to prevent them from being subject to controls. On the face of it, however, there are still areas to be worried about in the new agreement.

As is standard throughout the Wassenaar control list, it is not only the finished items themselves that are subject to control, but also any software and technology used to produce or operate them. The new controls on intrusion software therefore also include controls on:

"Technology"1 for the "development"2 of "intrusion software"
"Software" specially designed or modified for the "development" or "production" of equipment or "software" specified by 4.A. or 4.D.
"Technology" according to the General Technology Note, for the "development", "production" or "use" of equipment or "software" specified by 4.A. or 4.D.

Although unintended, these controls could also catch some legitimate security products.

There are of course exceptions; software and technology in the public domain is exempt (more on that later), as is technology for "Basic scientific research", defined as:
"Experimental or theoretical work undertaken principally to acquire new knowledge of the fundamental principles of phenomena or observable facts, not primarily directed towards a specific practical aim or objective."

There are specific technologies that are exempted from controls as well; DRM software is unsurprisingly included in this category, as are “Hypervisors, debuggers or Software Reverse Engineering (SRE) tools”, in addition to software to “be installed by manufacturers, administrators or users, for the purposes of asset tracking or recovery.” It is unclear at this stage what conversations led the expert group to explicitly exempt debuggers but not security research products.

The important issue now is how this category is interpreted and implemented. It is our understanding that export control authorities did not want to catch security research tools and may well explicitly or implicitly exempt security products at a national level. In the UK, for example, exporters can apply for a Control List Classification enquiry to determine whether or not a product is subject to control, a process that takes into consideration the original design purpose of a product. Export licensing authorities, and particularly enforcement officers within customs, do not want to create unnecessary work for themselves if it serves no legitimate purpose. It is also important to remember that while some products may be caught under this category, it is still up to prosecutors to decide whether or not to pursue a case if there has been any infringement.

It’s still early days following the publication of the agreement, and the scope of the consideration given to security research tools remains untested. What’s important now is to establish the extent of the safeguards put in place to prevent overreach. Privacy International and others will be doing a number of things before these controls are implemented:

We will pursue outreach with governments and the expert groups involved in the discussions to ascertain what thought was given to security research products throughout the process
We will consult licensing authorities to find out if they intend to control security research products within the new categories
We will campaign vigorously against the control of any such products and ensure that category 4 is implemented across member states in such a way as to not catch security products
We will initiate conversations with the security industry to ascertain their understanding of the new controls and how they affect them

We’ll keep you posted.

Update

After an initial discussion with technical and government experts involved in drafting and negotiating the new controls on “intrusion software”, some of our initial questions have been clarified. Given that the UK is currently in the midst of a considerable and costly bid to position itself as the world’s leading developer and exporter of “cyber security” products, it is clear that efforts were made to ensure that legitimate businesses, researchers and their activities were safeguarded. It would be extremely surprising if the Government had not considered the effects of these controls on business activity that it is investing £860 million over five years trying to promote.

The main point is that the new language doesn’t control “intrusion software” per se, but rather the software and technology used on servers to disseminate it. In other words, the controls aren’t aimed at the malware and rootkits that actually infect a device, but at the software used to create, deliver and instruct them.

So although the actual definition of “intrusion software” is fairly broad:

"Software" specially designed or modified to avoid detection by 'monitoring tools', or to defeat 'protective countermeasures', of a computer or network capable device, and performing any of the following:
a. The extraction of data or information, from a computer or network capable device, or the modification of system or user data; or
b. The modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.

What is being subjected to control is actually:

4.A.5. Systems, equipment, and components therefor, specially designed or modified for the generation, operation or delivery of, or communication with, "intrusion software".

4.D.4. "Software" specially designed or modified for the generation, operation or delivery of, or communication with, "intrusion software".

4.E.1.c. "Technology" for the "development" of "intrusion software".

"Software" specially designed or modified for the "development" or "production" of equipment or "software" specified by 4.A. or 4.D.

"Technology" according to the General Technology Note, for the "development", "production" or "use" of equipment or "software" specified by 4.A. or 4.D.

Essentially, it is the software and technology used to create and control malware and rootkits that is being targeted, not the malware itself. The rationale is that controlling the malware itself would inhibit the sharing of malware samples between researchers and antivirus companies.

If an item is to be controlled, it needs to be specially designed with the intention of actually developing, delivering or instructing “intrusion software”. This means that 0-day exploits, for example, would not be controlled under the new language, because merely having the potential to be useful in this regard does not place an item under control. Under similar reasoning, vulnerability reporting would not be caught, and neither would such things as software used to jailbreak iOS devices. Further, products such as Nessus would not be controlled.

One of the big outstanding areas of concern revolves around the implications of the new controls for products used in penetration testing. Such software and technology is used to develop applications such as Metasploit, which can be used to produce exploits. As noted above, software and technology in the public domain and software generally available to the public are exempted from control, along with open source and free software. It is this kind of product that smaller and independent researchers use, and it remains to be seen whether any large developer of penetration testing products will now be subject to the new controls.

This is, of course, the intention of the controls; how they are to be interpreted and implemented will rest with the participating states. The fact that an item has to be “specially designed” to carry out any of the above gives some flexibility to the interpretation of the new controls. There is no definition of what constitutes “specially designed” in most circumstances and countries, making it an issue that has caused considerable confusion among exporters and participating states (the US even has a whole online platform dedicated to working out what constitutes “specially designed”). The language does, however, appear to be well enough defined to avoid misinterpretation.

Need for engagement

Those that have been calling for effective export controls to be put in place to safeguard human rights against surveillance have also been campaigning against ineffective ones that harm legitimate industry and contribute to the violation of human rights. Now more than ever, it is clear that finding the right balance requires a comprehensive, multi-stakeholder approach that involves not just governments, but civil society, industry, academia and other interested parties as well.

We will continue to monitor this space and to pursue our engagement activities with other governments and the industry. While it’s somewhat reassuring to know the intention behind these controls, the lack of engagement on these issues from governments prior to now is clearly concerning.