From chatbots to adbots: sharing your thoughts with advertisers

AI chatbots are now in everyday use both across different industries and recreationally. In this blog, we consider the growing possibility that these AI tools are the next home of the multi-billion-dollar advertising industry. We look at examples from some AI companies that have already announced demos of sponsored ads in their AI chatbot tools, and discuss why this is dangerous territory for users.


Visual mock-up of ad placement in an AI chatbot conversation with a human user.

Introduction

In early October this year, Google announced that its AI Overviews would now carry ads. AI companies have been exploring ways to monetise their AI tools to offset their eye-watering costs, and advertising seems to be part of many of these plans. Microsoft has even rolled out an entire Advertising API for its AI chat tools.

As AI becomes a focal point of consumer tech, the next frontier of AdTech expansion could well be the most popular of these AI tools: AI chatbots. AdTech is a multi-billion-dollar industry comprising all the tools and services that connect advertisers with target audiences and publishers, and it has grown into a profit-driven ecosystem powered by the lucrative currency of user personal data for personalising ads.

Companies like Microsoft have already begun deploying ads in their AI chatbot tools, and earlier this year OpenAI hired a former Google Search Ads lead. AI chatbots are a potentially lucrative arena for targeted advertising, given that chatlogs can contain a trove of personal information that a user provides over the course of a conversation, information that might prove valuable to advertisers looking to further personalise targeted ads. This can include highly personal data, for example where a user is asking questions about their health. It raises data privacy concerns about how user input data (also called UGC, or user-generated content) is stored and repurposed by chatbot developers for training and further fine-tuning.

Advocacy

PI responded to the ICO consultation on the legality of web scraping by AI developers when producing generative AI models such as LLMs. Developers are known to scrape enormous amounts of data from the web in order to train their models on different types of human-generated content. But data collection by AI web-scrapers can be indiscriminate and the outputs of generative AI models can be unpredictable and potentially harmful.

Advertising has been the leading business model of the Internet to date. While OpenAI has announced that it is not currently interested in embedding ads into ChatGPT's conversations, we should take that with a pinch of salt. Sponsored results have crept into search engines that were once ad-free, and platforms like Facebook and Instagram ran without ads for years before eventually turning to the lucrative ads industry.

The question for now is: what might chatbot ads look like, and what does this mean for our data privacy?

What sponsored outputs in AI chatbot conversations could look like

Perhaps the most straightforward way for AI chatbots to incorporate ads is by embedding sponsored results into chatbot conversations, either as links or as text responses. Here's what we know so far about what companies are planning for advertising in AI chatbots and what that might look like.

Microsoft

Microsoft has been testing a number of different ways of integrating sponsored content into its AI products, and has long included Search ads in Copilot (formerly known as Bing Chat). The company is now looking to roll out a Copilot feature called 'ad voice' that will draw on the whole session conversation (rather than just the most recent prompt). The company has hinted at a gradual roll-out over time, so it may be some time before we begin to see signs of this ourselves.

Screenshot of Microsoft's Copilot ads demo. Source: https://about.ads.microsoft.com/en/blog/post/october-2024/transforming-audience-engagement-with-generative-ai.

Microsoft has said that part of the push to expand ads into Bing AI Chat/Copilot in this way is due to user data showing a surge in people using it for search queries. Lacing ads into the chat can be seen as part of a wider push to capture ad market share, including in the Windows 11 start menu.

Microsoft is also exploring a number of different advertising formats through its ads for chat API, which is meant to "integrate advertising into the chat flow in a way that's helpful and natural". It is designed to let other companies explore a range of formats for ads to appear in, and can even be used to serve ads on non-Microsoft chat platforms.

One of these advertising formats it has rolled out for beta testing is Compare & Decide Ads, a "new ad format designed specifically for Chat". This feature pulls together all the relevant information from sponsors to display a table like the following in response to a user's specific query about comparing different products:

Screenshot of Microsoft's Compare & Decide Ads. Source: https://about.ads.microsoft.com/en/blog/post/september-2023/transforming-search-and-advertising-with-generative-ai

The Chat Ads API also offers businesses tools to customise their own chat environments with ads. The API has already attracted partnerships with big publishers like Axel Springer, which are customising their own Microsoft Chat (Azure OpenAI) experiences with ads, allowing them to "leverage Microsoft Advertising's Chat Ads API for generative AI monetisation".

Amazon

Amazon has also thrown its hat into the chatbot advertising ring with the latest prototype of its Rufus shopping chatbot, which aims to enable the chatbot to 'proactively recommend products based on what they know of your habits and interests'. Currently the prototype seems confined to the chatbot conversation itself, but a VP working on conversational AI shopping at the company has hinted that Amazon is thinking about how advertising can be incorporated into this recommendation algorithm. Based on what we've seen above with Microsoft's Chat Ads API for shopping, it's safe to say this will likely look like sponsored products being recommended in the chatbot conversation.

Snapchat

Earlier this year, social media giant Snapchat announced a partnership with Microsoft to deploy a Sponsored Links programme. Snapchat's My AI, powered by OpenAI's GPT, has seen users engaging with the chatbot to receive 'real-world recommendations'. Relevant sponsored content will be embedded into My AI so as to feel 'natural to the conversation flow'. While the programme is still experimental, we expect it to look something like this:

Screenshot of Snapchat's My AI demo. Source: https://about.ads.microsoft.com/en/blog/post/september-2023/microsoft-advertising-partners-with-snap-to-power-sponsored-links-within-snapchats-my-ai-chatbot.

Perplexity

Perplexity AI, like OpenAI, has denied any interest in direct on-platform advertising, but it is looking to roll out a sponsored follow-up questions programme called 'Perplexity Ads'. A spokesperson for Perplexity has said this strategy rests on the fact that 40% of all Perplexity user queries receive a follow-up question, rendering this method a 'very fertile ground for a native advertising unit'.

Adzedek

New entrants and startups may also seek to fill this space. One such company is Adzedek, an ad-integration service that aims to place clients' sponsored ads into GPT Store chatbots. Through Adzedek, paying clients' ads would be matched with and fed into relevant GPTs on the GPT Store based on 'contextual' information arising from the chatbot conversation. Here's what this might look like, according to a demo from Adzedek:

Screenshot of Adzedek's demo of integrating ads into GPTs. Source: https://www.axios.com/2024/03/01/ads-chatbots-search-advertising-sponsored-ai.

As far as we know, Adzedek's services have not yet been formally deployed, and this kind of service (embedding sponsored ads) is not yet permitted in the GPT Store. But it may only be a matter of time before companies turn to this kind of monetisation and we start to see integrations like the above in our chatbots.

Why this is dangerous territory

We are, evidently, already heading towards an ads-laced future in the AI chatbot industry. This is dangerous territory: online advertising depends on an approach of data extraction and maximisation that undermines our privacy and our autonomy. We do not want to see that repeated in conversational AI chatbots, especially when AI chatbots enable new, deeper levels of personalisation in targeted advertising.

Long Read

A study by Privacy International reveals how popular websites about depression in France, Germany and the UK share user data with advertisers, data brokers and large tech companies, while some depression test websites leak answers and test results with third parties. The findings raise serious concerns about compliance with European data protection and privacy laws.

Personalised ads, more intimate than ever before

When interacting with chatbots, users generate information about themselves in more intimate detail than ever before. AI chatbots create an unparalleled environment in which users provide deeply personal information beyond what they might share in traditional environments like a search engine.

In AI chatbot conversations, users might casually reveal deeply intimate information, as detailed as the names of their children when asking for gift ideas, tips for getting through their divorce, or the fact that they are currently feeling very depressed. Chatbots are equipped to:
1) store all of this information in one chatlog, and
2) facilitate personalised ads targeted to these deeply personal details.
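To make this concrete, here is a minimal, entirely hypothetical sketch of how a single chatlog could be mined for advertising segments. The chatlog, the keyword-to-segment map and the segment names are all invented for illustration; no real ad platform's logic is shown, and real systems would use far more sophisticated (and opaque) profiling than keyword matching.

```python
import re

# Hypothetical chatlog: the kind of intimate detail users share in one conversation.
chatlog = [
    "Can you suggest a gift for my daughter's 8th birthday?",
    "I've been feeling very low since my divorce started.",
    "What are early symptoms of type 2 diabetes?",
]

# Illustrative keyword-to-segment map (invented for this sketch).
SEGMENTS = {
    r"\bgift\b|\bbirthday\b": "gift shoppers",
    r"\bdivorce\b": "life-event: separation",
    r"\bdiabetes\b|\bsymptoms\b": "health: chronic conditions",
    r"\blow\b|\bdepressed\b": "mood: vulnerable",
}

def derive_segments(messages):
    """Collect every ad segment whose pattern matches any message in the chatlog."""
    found = set()
    for msg in messages:
        for pattern, segment in SEGMENTS.items():
            if re.search(pattern, msg, re.IGNORECASE):
                found.add(segment)
    return found

print(sorted(derive_segments(chatlog)))
```

Even this toy version shows the point: three casual messages in one chatlog are enough to tag a user with a life event, a health condition and an emotional state, all available to an ad-matching layer.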

Chatbot responses are even more exclusively and personally tailored to the user than search engine queries. They can even be based on contextual (remembered) information about the user from previous conversations. Individuals may also be more willing to reveal personal information in a conversational environment that can retain information than in a generic search engine query.

In fact, Microsoft capitalises on this data collection for its chatbot's sponsored outputs, emphasising that its Chat Ads API leverages 'the power of generative AI' to create 'more relevant, visually rich, and immersive content that changes dynamically to match the advertiser's goal and consumer intent.' Its 'ad voice' is an example of how Microsoft is thinking about "adapting more ads to the specific context of each user and each moment."

As a result, the sponsored results that chatbots output can be far more invasive, because they can be based on far more intimate information collected over time about the user and how they behave and react. AI chatbots have the potential to amplify the profit-driven harms of data exploitation by AdTech companies, as they create entirely new avenues for the collection of more detailed personal data (e.g. a user explaining their health symptoms in detailed context they would never type into a search engine query).

AI chatbots also streamline the collection of different types of personal data into one service, in contrast with the traditional AdTech practice of collecting user information across the multiple websites a person browses. In one single chatlog, a user could well have provided information about their health, their work and their favourite movies, all of which might feed into ads they'll see later in the chat or, if they are a frequent user of the service and have an account, even in chats days or weeks later.

Proper labeling and transparency disclosures

Another problematic aspect of advertising integration in AI chatbots is whether and how companies disclose to users that they are relying on user data to generate sponsored content. Cookie banners are hardly a popular or successful mechanism for notifying users of online tracking and gaining their consent; whatever comes next must do better.

So far, we have seen sponsored outputs being labeled (under a header 'Sponsored' in its own separate, distinguishable box underneath non-sponsored results). But these simplistic ways of labeling don't explain where the ads came from and how targeted they are: what user and/or training data are they based on?
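One could imagine labelling that goes further than a bare 'Sponsored' header. The sketch below is purely illustrative: the AdDisclosure fields and render_label function are our own invention, not any platform's API. It shows the kind of information a label could carry if it named the data behind the ad.

```python
from dataclasses import dataclass, field

# Hypothetical disclosure record: fields a more transparent ad label might carry,
# beyond a bare 'Sponsored' tag. Invented for illustration only.
@dataclass
class AdDisclosure:
    advertiser: str
    is_sponsored: bool
    targeting_signals: list = field(default_factory=list)  # which user data selected this ad
    data_sources: list = field(default_factory=list)       # where those signals came from

def render_label(d: AdDisclosure) -> str:
    """Render a user-facing label that names the data behind the ad."""
    if not d.is_sponsored:
        return ""
    signals = ", ".join(d.targeting_signals) or "none disclosed"
    sources = ", ".join(d.data_sources) or "none disclosed"
    return (f"Sponsored by {d.advertiser} | "
            f"targeted using: {signals} | drawn from: {sources}")

label = render_label(AdDisclosure(
    advertiser="ExampleCorp",
    is_sponsored=True,
    targeting_signals=["recent health questions"],
    data_sources=["this conversation"],
))
print(label)
```

Whether any platform would volunteer this level of disclosure without regulatory pressure is, of course, exactly the question.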

Existing advertising disclosures and rules for online platforms might help to mitigate this new frontier of ads seeping into AI chatbots, but that remains to be seen. What's clear is that people must be fully informed of, and have control over, how their data is used, especially in relation to potentially manipulative targeted advertising.

EU Digital Services Act (DSA)

In the EU, the Digital Services Act requires online platforms presenting advertising to provide meaningful information and logic explanations for the main parameters used to choose adverts. The EU declares that 'online platforms are obliged to disclose the advantages given to sponsored products in the ranking and to explain to you the parameters upon which the ranking is based.'

UK Competition & Markets Authority (CMA)

The UK's Competition & Markets Authority (CMA) has previously commented on 'hidden ads', pushing for the requirement of clear 'ad' labels to combat hidden ads infused into social media content, such as sponsored products advertised in a less traditional, more casual setting by influencers or content creators.

U.S. Federal Trade Commission (FTC)

Regulatory frameworks from the U.S. Federal Trade Commission (FTC) specifically concern search engines. The FTC requires that search engine companies ensure that any paid ranking search results are distinguished from non-paid results with clear and conspicuous disclosures, and it has even identified techniques for making these disclosures distinguishable to users, such as through 'visual cues' or 'text labels'.

GDPR Lawful Basis for Processing

The GDPR and other data protection laws specify that at least one lawful basis must be met for data processing to take place: consent, contract, legal obligation, vital interests, public task or legitimate interests. However, not all of these are available for online targeted ads.

Conclusion

The AdTech industry is at a crossroads, and AI chatbots may end up as a growth area for online ads. From first-party integrations of advertising into AI chatbots to third-party middleman startups emerging to service those integrations, we may be seeing the beginning of a future all too familiar in this surveillance economy.

It's not yet clear how existing business practices and regulatory frameworks might shape this area, especially given the complex question of granular datapoints generated by users themselves. Under the GDPR and other data protection law, regulators and courts must be swift and agile in updating their understanding of what the law requires here.

Advocacy

PI responded to the ICO consultation on engineering individual rights into generative AI models such as LLMs. Our overall assessment is that the major generative AI models are unable to uphold individuals’ rights under the UK GDPR. New technologies designed in a way that cannot uphold people’s rights cannot be permitted just for the sake of innovation.

We're already seeing a lot of noise around AI companies falling back on the advertising industry to monetise their 'free' for-profit products. The question is no longer whether advertising will be integrated into AI chatbots, but how it will look, and how the regulatory landscape should prepare. Technology can be exciting and empowering, and AI chatbots are no exception, but business interests should never be a reason to weaken or infringe our rights. With products like AI chatbots that invite us to share intimate information through a familiar messaging interface, developers and deployers have a responsibility to ensure that the trust users put in their services is not betrayed.
