Generative AI won't take over the world, surveillance capitalism already has

A year and a half after OpenAI first released ChatGPT, generative AI still makes the headlines and gathers attention and capital. But what is the future of this technology in an online economy dominated by surveillance capitalism? And how can we expect it to impact our online lives and privacy?

News & Analysis
[Image: Times Square with gigantic screens showing ads. Photo by Anthony Rosset on Unsplash]

Is the AI hype fading? Consumer products with AI assistants are disappointing across the board, and tech CEOs are struggling to give examples of use cases that would justify spending billions on Graphics Processing Units (GPUs) and model training. Meanwhile, data protection concerns are still a far cry from being addressed.
Yet the believers remain. OpenAI's presentation of ChatGPT was reminiscent of the movie Her (with Scarlett Johansson's voice even being replicated à la the movie), Google managed to include AI in almost every announcement at Google I/O 2024, and Apple joined the crowd with Apple Intelligence. The companies leading the AI charge continue to assert that this technology will be a revolution and change everything.
And so people are left with two very different narratives: promise or waste.
On one side is the promise: AI is on its way to take over our jobs and massively increase GDP. It will take over Hollywood and produce full-length movies from a few prompts, increase our productivity tenfold and help us achieve the UN Sustainable Development Goals. Give it a few years and it will turn into omniscient agents that help us do everything while tracing the way towards Artificial General Intelligence (AGI).
On the other side is waste: it's just another piece of tech that attracted insane amounts of funding without delivering results. It's an iteration of things we've seen before (e.g. 'big data!', 'algorithms!') that cannot be trusted with anything because it hallucinates, and it comes with massive costs and environmental impacts that condemn it to disappear until a new scientific breakthrough in machine learning arrives.

[Screenshot: Google's AI Overview suggesting people eat at least one small rock a day]

So you're left to choose: either you buy the hype and hope to be on the right side of the genAI revolution or you wait for it to die and hope it doesn't make everything worse.
But the future, as is often the case, might turn out to be a muddy middle ground. With the amount of money poured into genAI by venture capitalists, sovereign funds, and Big Tech, it's hard to imagine it all faltering and disappearing without producing anything of value. You could argue that this already happened with Blockchain or Virtual Reality (and you would not be wrong), but even there some powerful niche use cases remain. GenAI, too, already has uses for people: teenagers have found the technology to provide useful companionship, 90% of US-based developers have adopted AI coding assistants, and its use in medical research is highly promising.
The problem is that a decently useful product (as opposed to a revolutionary one) with a really high delivery cost doesn't make for a good consumer-facing business model. Subscriptions might be an answer but, if the health of online news is any indication, they might not be the most solid option.
The industry could drop consumer-facing AI products and embrace B2B through API access models. But for consumer-facing giants like Google and Meta, this isn't really an option, particularly when they're promising not to train their models on businesses' data. Models might shrink in size along with computational cost, but costs will always remain, if only to find data to collect and train the models on.
So what happens once the free trials end? Once a business model becomes necessary to cover those costs? What's the solution?
It's going to be ads.
Advertising has been the business model of Big Tech companies offering services (think Google Search rather than the iPhone) for the last 10 years, and there is little reason to think this will change with the next generation of technology. Facebook used to be ad-free; so was Google, so were the app stores and our operating systems. But ads made their way into every consumer-facing product, with the promise from these companies that advertisers would be able to target people with uncanny accuracy, always at the right time and for the right price. Consumers could continue enjoying these products as long as they agreed to be targeted with ads.

[Image: a recent offense by Microsoft: Windows 11 displaying "Recommended" apps in the Start menu. Reported by The Verge]
[Image: YouTube displaying an unskippable ad before a first aid video. Reported by The Daily Mail]

The promise of a free and open internet was taken over by behavioural advertising. This model came with costs, including to our privacy. Behavioural advertising required a web of surveillance: our every online move and action had to be tracked, collected and consolidated into profiles aiming to describe what type of person we were and what we desired. Looking at this intelligence dossier on you is unsettling, whether you've looked at your Google history or, as we have done, exercised your right to access your data in Europe. This profiling is creepy, has harmed individuals and societies, and cannot be justified by the need to show us ads.
Just as surveillance capitalism took over the web, it will take over genAI. Despite multiple political and legal challenges around the world, surveillance capitalism continues to be the dominant business model. It entrenched disrespect for users' personal data so deeply into tech companies' ethos that the very foundation of genAI was the exploitation of this data: scraping and processing it to create the datasets used for training AI models.
Today, the promise of AI products becoming an omnipresent assistant opens the door to a new wealth of data, raising the threats to our privacy to new highs. Mental health websites sharing your data with unknown third parties were already scary enough; now we have to cope with the idea of a single piece of software with access to all our information. This will be the price of access to the key resources Big Tech has accumulated: the computing power, the expertise and scientific knowledge, the market position. Big Tech dominates those resources and ultimately decides who accesses them, competitors included. This means we will need to trust those actors to handle the information AI accesses and generates decently, a trust they've broken many times in the past.
GenAI doesn't need to take over the world; the companies that monopolise its development already have, and surveillance capitalism with them. Fortunately, we are still in the early stages and we all have a chance to intervene and do something about it. From regulation to legal action to individual acts of protest, the fight is on for a future where technology serves us rather than the interests of big corporations.