Content type: News & Analysis
We’ve been warning for a while now about the risks of AI Assistants. Are these assistants designed for us or to exploit us? The answer to that question hinges on whether the firms building these tools are considering security and privacy from the outset. The initial launches over the last couple of years were not promising. Now with OpenAI’s agent launch, users deserve to know whether these firms are considering these risks and designing their service for people in the real world. The OpenAI…
Content type: Long Read
“Hey [enter AI assistant name here], can you book me a table at the nearest good tapas restaurant next week, and invite everyone from the book club?” Billions of dollars are invested in companies to deliver on this. While this is a dream that their marketing departments want to sell, it is also a potential nightmare in the making. Major tech companies have all announced flavours of such assistants: Amazon’s Alexa+, Google’s Gemini inspired by Project Astra, Microsoft’s Copilot AI companion and…
Content type: Advocacy
Generative AI models cannot rely on untested technology to uphold people's rights. The development of generative AI has been dependent on secretive scraping and processing of publicly available data, including personal data. However, AI companies have to date had an unacceptably poor approach towards transparency and have sought to rely on unproven ways to fulfill people's rights, such as to access, rectify, and request deletion of their data. Our view is that the ICO should adopt a stronger…