PI response to ICO consultation on data subject rights and generative AI
PI responded to the ICO consultation on engineering individual rights into generative AI models such as LLMs. Our overall assessment is that the major generative AI models are unable to uphold individuals’ rights under the UK GDPR. New technologies designed in a way that cannot uphold people’s rights cannot be permitted just for the sake of innovation.
Generative AI models cannot rely on untested technology to uphold people's rights
The development of generative AI has depended on the secretive scraping and processing of publicly available data, including personal data. Yet AI companies have to date taken an unacceptably poor approach to transparency and have sought to rely on unproven methods to fulfil people's rights, such as the rights to access, rectify, and request deletion of their data.
Our view is that the ICO should adopt a stronger position than it has to date. Until companies developing generative AI offer adequate means for people to exercise their rights, they should not be allowed to distribute their models or offer services relying on those models.
In the submission, we provide further information on the following points. Together, they demonstrate how people's rights are undermined by generative AI and why a firm approach is justified:
- A high bar must be set for transparency, with significant improvement on past and current practice necessary;
- Input and output filters are inherently unreliable because they cannot exhaustively cover every use case. An illustrative example is the constant discovery of ways to “jailbreak” LLMs;
- Measures to protect personal data must be at least as strong as any measure devised to protect copyrighted materials (such as opt-outs and filters);
- Privacy by design and default should be implemented so as not to place the onus on individuals to take action to prevent invasive practices.
Download our full response to the consultation below.