What does it mean when Big Tech goes to war?

Governments are making a hard shift to military AI dominance. If the AI industry builds it, they’ll come for us all.

Key points
  • AI firms are increasingly contracting with military and government clients, but their ethical commitments to user privacy are proving impossible to honour.
  • These consumer-facing companies, whose products are used globally, treat non-American users as acceptable collateral. Mass surveillance enabled by AI threatens everyone, regardless of nationality.
  • Governments must stop using national security law to evade scrutiny, and the public deserves to know how these companies’ technologies are being deployed against people.
[Image: Skulls with a glitch effect. Kathryn Conrad / https://betterimagesofai.org / CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)]

AI firms are still struggling to work with the U.S. Department of War (formerly the Department of Defence). OpenAI is grappling with its own contract with the Department, amending it to say its systems “shall not be intentionally used for domestic surveillance of U.S. persons and nationals”.

The turmoil follows the government declaring Anthropic a supply chain risk, after the firm stated that it had resisted attempts to allow its services to be used for ‘surveillance of Americans and fully autonomous weapons’.

Now Anthropic is trying to reach a compromise - a bizarre turn of events, as the U.S. Central Command apparently used Anthropic’s Claude AI tool as part of a new offensive.

So what does it mean when Big Tech goes to war?

Since 2023, the AI industry has shifted to openly providing services for military applications and retracting its commitments not to involve users’ data in war. Silicon Valley also began its shift to patriotism, signing new contracts as some senior staff signed up to serve.

Then, last year, Microsoft and Google got into trouble over their contracts with the Israeli Ministry of Defence, and the way governments and Big Tech companies rely on each other to wage war (and commit crimes) was brought to greater attention.

Did these technology companies really imagine these relationships would go smoothly? Maybe they thought these services would only be about logistics and strategy, and that they could be patriotic in serving the US government while remaining neutral IT providers for other governments.

Or, as OpenAI indicates, they thought their systems could be built to prevent problematic uses. Or maybe, like the cloud providers, they hoped that by not really knowing how their services were used, they could avoid scrutiny for, say, war crimes.

As governments use AI for war, they need these firms’ services. We need to know more about their relationships - and this is what we’re asking:

1. Where does the data that feeds into these new war machines come from?

At the heart of the conflict between the AI firms and the U.S. Department of Defence is the provenance of data: the government has intelligence data on people, including U.S. persons, that it wants to analyse in bulk using AI, Anthropic has reportedly claimed.

Although in theory the government is regulated in how it can perform surveillance of U.S. persons, it can purchase data on U.S. persons from data brokers. And so can other governments.

As Anthropic states: “Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.”

It is indeed a threat to all people when governments and companies are able to gain access to bulk personal data on people – citizens or not. Rather than hold secret contractual negotiations around the collection and use of this data, or dream that somehow an AI can detect this legal problem and prevent processing, we must regulate the accumulation of data by data brokers and its onward sale. War machines should not indiscriminately hold civilian data.

2. Is domestic and foreign mass surveillance now an acceptable business service?

Civilian data from civilian sources ends up in war machines, and this is not limited to the U.S. government. The issue was studied by the Biden Administration’s Office of the Director of National Intelligence and led to an Executive Order on Americans’ bulk personal data transfers to ‘countries of concern’. The UK Government has started a consultation on this very issue as well.

Previously, the limits on government surveillance came in the form of rules regulating direct surveillance and the practical constraints on governments’ capacity to process this data. Now, AI firms and cloud providers make mass surveillance both possible and real. It is through their contracts with governments that vast data processing is possible at all.

Mass surveillance is wrong. It opens the door to unchecked state power and control over individuals, obstructs the separation of powers, and affects all our rights.

As the UN human rights chief has said, “while some States claim that such indiscriminate mass surveillance is necessary to protect national security, this practice is not permissible under international human rights law, as an individualized necessity and proportionality analysis would not be possible in the context of such measures”. We can’t let this corporate-government struggle over corporate ethical promises distract us from the very real threats of mass surveillance and how it impacts our lives.

Systems that enable vast data collection and processing must comply with existing human rights standards and must be made safe. Surveillance regulations urgently need updating for an era in which governments pre-emptively collect data on people, domestic and foreign, even in the gaps between conflicts.

Governments must stop using ‘national security law’ and related practices to shield themselves from the necessary and essential scrutiny of mass surveillance practices that put people across the world at greater risk.

3. When does a company's ethical commitment to its users end, and its national loyalty begin?

This question is particularly sticky, and intentionally so. First, so long as companies claim that they have ethical principles and will abide by human rights obligations, it’s highly desirable that we know what kind of contracts they have with governments and militaries.

If war departments across the world are able to collect bulk data on people, foreign and domestic, and use Big Tech firms to process this data, at what point do Big Tech’s nationalistic commitments to the ‘American people’ become impossible to live up to?

When Microsoft’s services were used by the Israeli Ministry of Defence to process intercept data, did Microsoft’s services analyse whether that included any U.S. persons? This was never addressed, and instead Microsoft severed components of its contract on ethical grounds. (Please don’t get us wrong: we think all people’s data should be equally protected. All we are trying to show is the absurdity of the argument.)

The preoccupation with protecting only American persons in this era is ridiculous and irresponsible. It is because these firms are American that they feel this pressure, but they should also be accountable for protecting everyone’s privacy.

These are consumer-facing companies, and if they had privacy policies stating they would not protect the privacy of non-American consumers, that would be considered a ridiculous and risky business decision.

Then why is it permissible for them to expose non-Americans to advanced surveillance that could even result in their being targeted by autonomous weapons?

As other governments accumulate data on people, including US persons, these now-patriotic Big Tech firms will have to prevent this processing in their contracts, or promise to design their AI to prevent it. We expect these firms to respect all people’s privacy in every government contract they hold. To verify this, all contracts need to be clear and transparent, particularly when the stakes are so high.

4. If all our lives are fair game, shouldn’t we all know it?

In our fight against mass surveillance laws around the world, we run up against the most powerful arms of governments using many tools, including the law, to shield themselves from scrutiny. We make progress, but it’s an intense uphill battle.

Because companies know more about these systems than government officials, there was some discussion that safeguards could be built into OpenAI’s technical solutions or Palantir’s implementation to prevent such abuses. This was rejected by Anthropic’s CEO as "safety theatre": 

"The basic issue is that whether a model is conducting applications like mass surveillance or fully autonomous weapons depends substantially on wider context: a model doesn’t “know” if there’s a human in the loop in the broad situation it is in (for autonomous weapons), and doesn’t know the provenance of the data is it analyzing (so doesn’t know if this is US domestic data vs foreign, doesn’t know if it’s enterprise data given by customers with consent or data bought in sketchier ways, etc)"

Investors, customers, and people everywhere deserve to know how these companies are selling, building, and deploying their technologies - including when that technology is being used against people. 

This is important as our funds (and our data) contribute to their development. Without these constraints, we suspect that industry will continue to play politics and compete for dominance - particularly as these firms build general-purpose tools like cloud compute and generative models.

Unless they can put guardrails into their contracts with governments - and we know they will struggle - maybe they should not be working with them at all.