The Data Arms Race Is No Excuse for Abandoning Privacy

Tech competition is being used to push a dangerous corporate agenda.

High-tech industries have become the new battlefield as the United States and China clash over tariffs and trade deficits. It’s a new truism that the two countries are locked in a race for dominance in artificial intelligence and that data could drive the outcome.

In this purported race for technological high ground, the argument often goes, China has an advantage because AI applications depend on large data sets—the new oil of today’s industrial revolution. China’s supposed advantage in data is often attributed to the country’s large internet user base, a perception that Chinese people care less about privacy, and the view that privacy protections are weak in China—not just against government surveillance but for company practices as well.

The implication is that U.S. competitors are comparatively limited in their access to data and hampered by privacy regulations, so much so that they stumble along in frustration while Chinese tech giants fly free. In this story, pesky ideals of personal privacy stand against U.S. leadership in the future world order.

It’s worth asking who is behind this narrative, and whose interests it serves.

Some U.S. companies have used the specter of an AI race to head off user protections they see as burdensome. Facebook CEO Mark Zuckerberg, for example, has portrayed a trade-off between a need for “special consent for sensitive features like facial recognition” and the risk that “we’re going to fall behind Chinese competitors and others around the world who have different regimes for different, new features like that.”

Investors have also advanced the idea. “AI is run on data as fuel, and China has so much more data than any other country,” venture capitalist and computer scientist Kai-Fu Lee has put it, adding that “Chinese users are willing to trade their personal privacy data for convenience or safety. It’s not an explicit process, but it’s a cultural element.” And at a recent Asian investment conference in Hong Kong, a Credit Suisse executive proclaimed, “What will make China be big in AI and big data is: China has no serious law protecting data privacy.”

Some in the U.S. policymaking community have also internalized this narrative. A recent Center for Strategic and International Studies report acknowledged “legitimate concerns around privacy and consumer protection” while calling for a “strategy to combat protectionism, data localization, and privacy policies that harm [U.S.] global tech companies.”

Notably missing among the proponents of this narrative are the people whose data often fuels AI technologies. Users are the biggest potential losers in this race, with their rights to privacy and personal data protection held up as a challenge to innovation.

The implied solution to this challenge is at best to limit user protections for the sake of national competitiveness, and at worst a race to the bottom in unlocking user data for exploitation.

This narrative is not just unprincipled; it’s based on a faulty understanding of the technology.

And privacy-protective practices, far from a hindrance, can be a source of international advantage for U.S. competitors—or for anyone else—as people worldwide demand better protections.

Some Chinese companies, especially Tencent and Alibaba, do have remarkably broad and deep data stores on Chinese internet users’ social and economic habits. Tencent’s WeChat platform, which sits at the crossroads of chat, social media, digital finance, and virtually every other type of online service, produces data conferring certain advantages, especially in delivering AI-driven services to Chinese users.

But large data sets are not a cure-all for AI developers. Research indicates that some applications see rapidly diminishing returns as data sets grow larger, and noisy or badly labeled data are of limited utility. Nor will data alone, without theoretical innovations and advanced hardware, drive breakthroughs. Still, even where Chinese companies have data sets uniquely suited to a particular AI application, they do not in fact have unrestricted freedom to collect and use people’s data.

One reason is user resistance. Over the last few years, and especially over the past nine months, Chinese internet users have repeatedly spoken out about perceived privacy abuses or misdeeds by Chinese companies. Search giant Baidu met loud criticism after its CEO said “Chinese people are more open, or not so sensitive, about the privacy issue,” and a consumer protection group in Jiangsu province sued the company after two of its mobile apps—a browser and a search engine—allegedly accessed user data without proper authorization.

These public controversies undercut the popular misconception, advanced by Chinese and foreign observers alike, that Chinese people are somehow culturally inclined to devalue privacy. It should also be clear that many Chinese internet users are keenly sensitive to the power dynamics between companies, users, and the government. When Li Shufu, chairman of the auto company Geely, suggested that Tencent’s chairman was reading users’ WeChat messages, Tencent issued a swift denial, but this didn’t satisfy skeptical online commenters.

Chinese authorities have also rolled out new regulations in recent months that limit corporate data use. For instance, China’s 2017 cybersecurity law outlines consent and processing requirements for personal information, and cyberspace regulators have developed a standard that officials have already used to publicly flog companies that don’t follow its provisions. When Alibaba affiliate Ant Financial was called out and fined in January for pre-checking a box opting users in to the company’s Sesame Credit scoring service, authorities cited the standard.

Chinese privacy protections are far from sufficient. The present upwelling of public discourse and regulatory activity on personal data protection focuses on companies and pointedly refrains from challenging the government’s ability to access people’s data if national or public security is invoked. The Chinese government’s abuse of surveillance powers is well-documented, and those powers are beginning to integrate AI technologies—helping, for example, to fuel successful facial-recognition companies. There is still huge uncertainty about the extent to which consumer services may now or in the future integrate with government surveillance systems, and there is little reason to believe these company-focused privacy protection measures can limit government intrusions.

Nevertheless, we cannot ignore that Chinese users have expressed increasing concerns about corporate data exploitation and that companies operating in China are subject to restrictions on their collection and use of personal data for business purposes.

China is not alone. People around the world are demanding greater responsibility from companies that profit off their data. Today, more than 100 countries have enacted comprehensive data protection legislation, and the European Union’s General Data Protection Regulation, effective since May, has set a prominent example.

The United States, meanwhile, continues to fall behind. U.S. privacy protections remain a gap-filled patchwork. American tech companies in large part get to set their own rules for how to handle user data domestically and have demonstrated, to be charitable, diverse levels of regard for personal data. At the same time, when U.S. tech companies operate globally and process the personal data of people outside the United States, they are increasingly subject to the data protection frameworks of other jurisdictions.

Developers, including in the United States and China, are meanwhile hard at work on approaches such as “differential privacy” and “federated learning” to provide data-driven applications while protecting privacy and security. Far from simply comparing data sets, the most competitive future developers and marketers of AI-driven technologies may be those who provide transformative products while protecting users’ rights to privacy and data protection.
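To make the idea concrete, here is a minimal sketch of one building block of differential privacy, the Laplace mechanism, which answers an aggregate query (here, a simple count) while adding calibrated noise so that no individual's presence in the data can be confidently inferred. This is an illustrative toy, not any company's implementation; the function names and the epsilon parameter choices are ours.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)


def dp_count(values, predicate, epsilon: float) -> float:
    """Return a differentially private count of items matching predicate.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy. Smaller epsilon
    means stronger privacy but noisier answers.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)


# Example: count users over 30 without exposing any single user's age.
ages = [21, 34, 45, 28, 39, 52, 19, 31]
noisy = dp_count(ages, lambda age: age > 30, epsilon=0.5)
```

The point of the sketch is the trade-off the article describes: the service still learns useful aggregates from user data, while individual records gain a mathematically quantified layer of protection.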

U.S. regulators and companies should embrace this challenge. As global consciousness of data privacy risks rises, companies and countries that credibly protect users from exploitation will enjoy an edge in the AI race.

Graham Webster is the coordinating editor of DigiChina at New America and a senior fellow at Yale Law School’s Paul Tsai China Center.

Scarlet Kim is a legal officer at Privacy International.