Studying under Surveillance: the securitisation of learning

EdTech systems are often less about teaching than about monitoring. Here, we explore the growing web of surveillance that children experience in schools.

[Image: Pupils with barcodes for faces sit in a classroom]

Increasingly, EdTech systems are less about teaching than about monitoring, security and ‘safety’ – although those aims are often mixed with wider educational claims.

For instance, one company offering “high quality surveillance systems and CCTV for schools including sophisticated infra-red cameras which record in the darkest areas” claimed that these both deter “bad or antisocial behaviour from pupils, parents and visitors” and improve the concentration, productivity and attainment of the students.

These two claims would seem to clash, in our view: studies have shown that surveillance, far from supporting education, can undermine students’ learning. And yet this is far from the only surveillance tool that claims to improve behaviour, outcomes and more.

Tools that claim to manage children – to convert the chaotic business of growing up into an orderly, containable process – but that in practice surveil what children do physically, digitally and emotionally have proliferated. Here are some of the tools we have found being used or tested in schools:

Tools for managing cheating and institutional risk

AI-based technology has been used in the invigilation (US: proctoring) of exams, to watch learners taking an exam and determine whether they are cheating. In online exams, this can be done through the mandatory use of software that demands access to the student’s computer and camera, and that claims to be able to “detect whenever different software is opened, or even if there’s another person in the room”. The detrimental effects of this invasive technology on students are widely documented, including discriminatory harms based on skin colour, neurodivergence and other forms of disability, as well as infringements of privacy, agency and human dignity.
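
To give a sense of how crude such checks can be, here is a minimal, purely illustrative sketch of the kind of webcam analysis these tools describe: it counts face-like regions in each frame and raises a flag whenever the count is not exactly one. It is not any vendor’s actual code; the choice of detector, the thresholds and the flagging rules are all our assumptions.

# Illustrative sketch only: a simplified stand-in for the kind of webcam check
# e-proctoring tools describe. Not any vendor's actual code; detector choice,
# thresholds and flagging behaviour are assumptions made for this example.
import cv2  # opencv-python

# Haar cascade face detector bundled with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(frame) -> int:
    """Return the number of face-like regions detected in one frame."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

def monitor(camera_index: int = 0) -> None:
    """Read frames from the webcam and print a 'flag' for anything unexpected."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            n = count_faces(frame)
            if n == 0:
                print("FLAG: candidate not visible")       # absence treated as suspicious
            elif n > 1:
                print("FLAG: additional person detected")   # a crude, error-prone inference
    finally:
        cap.release()

if __name__ == "__main__":
    monitor()

Even this toy version makes the problem visible: off-the-shelf face detectors are known to perform worse on darker skin tones, poor lighting and atypical setups, so the flags reflect the detector’s blind spots as much as anything the student actually does.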

Trust in these kinds of e-proctoring systems is already under considerable pressure. Indeed, this kind of proctoring has already faced, and started losing, legal challenges. For example, in a preliminary ruling, the French administrative court of Montreuil suspended the use of the algorithmic e-proctoring software ‘TestWe’ after students at the Institute of Remote Learning of the University of Paris 8 brought a legal case, assisted by the activist organisation La Quadrature du Net. The plaintiffs argued that the software breached the European Union’s General Data Protection Regulation (GDPR) because it failed to comply with data minimisation standards and because its visual and audio surveillance was disproportionate to the intended purpose. A final ruling is expected to follow, but the initial decision found the system was likely to be unlawful and suspended its use.

Similarly, a US federal court ruling, Ogletree v Cleveland State University, found that the room scan required by the e-proctoring service used by Cleveland State University violated the US Constitution’s Fourth Amendment, which protects against “unreasonable searches and seizures”.

Tools for managing ‘personalised’ risk

Intrusive surveillance is also increasingly framed around student “wellness”, “safeguarding” and protection from predicted risks. However, the companies offering the tools rarely define these aims consistently, if at all. Often they claim in broad terms to focus on child “protection” and, increasingly, on ensuring that students are “attentive” in class.

In fact, these systems result in the securitisation of education both in the physical world – by means of CCTV, biometric access systems (using facial recognition technology (FRT) in particular) and physical location tracking via RFID chips – and in the digital environment, through the tracking of internet use and similar measures. (For more examples, see PI’s EdTech Surveillance Tracker.)

Monitoring companies make big claims. One company popular in the US, Bark, claims that “monitoring can help save students’ lives”.

Another, GoGuardian, similarly claims its monitoring software can “prevent students from physical harm” – after all, “When it comes to student mental health, every second counts.”

Netsweeper, a leading web-filtering company used in UK schools, claims its software “secures devices on and off campus”, allowing schools to “extend remote web filtering and policy enforcement to every device and operating system, anywhere.”

Yet a survey by the Center for Democracy & Technology, which looked at the adoption of student monitoring software across the US starting in kindergarten (around 5 years old), found that monitoring is used for discipline more often than for student safety; is often not limited to school hours; raises the risk of increased interactions between students and law enforcement; risks detrimental effects on students’ mental health; and poses heightened risks for LGBTQ+ students, students from low-income families, Black students, and Hispanic students.

An exemplar of this trend is Gaggle, an American tracking company that claims to be “the most comprehensive student safety solution on the market” and says it “helps K-12 districts see the early warning signs so they can take action to protect students from harming themselves or others—before it’s too late.”

Yet in 2022 American Senators Elizabeth Warren and Ed Markey called for “urgent federal action to protect students” from platforms including Gaggle, accusing them of “surveilling students inappropriately, compounding racial disparities in school discipline, and draining resources from more effective student supports.”

Moreover, an investigation by the website The74 found that even extremely personal and private content – including diary entries and nude selfies – was being passed to poorly paid and poorly trained contractors as the first line of review.

One of those contractors told The74 that reading students’ conversations was an instructive experience, demonstrating the chilling effects of surveillance: students warned each other not to talk in certain ways because they were being watched.

Bodily analysis

On top of the personal data that students knowingly provide to these systems, AI may also process data in ways students do not see, through attention, mood and emotion recognition. Under the EU AI Act, an ‘emotion recognition system’ means an AI system aimed at ‘identifying or inferring emotions or intentions of natural persons on the basis of their biometric data’. Such systems have been banned from use in EU classrooms, unless strictly for medical or safety reasons, because the EU has recognised that they are seriously invasive and may lead to discriminatory outcomes.

EdTech systems are already in use that claim to track eye movements in order to detect students who are not paying attention. Others claim to be able to “identify emotions and classify learner involvement and interest in the topic [being taught]” by detecting eye and head movement, with the results “plotted as feedback to the instructor to improve learner experience.” Still others market brain-scanning headbands, sell neuro-technology for STEM promotion in the classroom, or produce classroom sensors that claim to identify mood and emotions using “pose estimation” from faces.
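
To illustrate how thin the inference behind such ‘attention detection’ can be, here is a deliberately simplified, hypothetical sketch: it takes head-pose angles of the kind pose-estimation software typically outputs and maps them to an ‘attention’ label using arbitrary cut-offs. No vendor publishes its rules, so every threshold and label below is our own invention.

# Hypothetical caricature of "attention detection" from head pose.
# Real products do not disclose their rules; these thresholds and labels
# are invented purely to show how crude such classification can be.
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw: float    # degrees of left/right head rotation
    pitch: float  # degrees of up/down head rotation

def classify_attention(pose: HeadPose) -> str:
    """Map a head pose to an 'attention' label using arbitrary cut-offs."""
    if abs(pose.yaw) <= 20 and abs(pose.pitch) <= 15:
        return "attentive"      # facing roughly towards the front of the room
    if pose.pitch < -25:
        return "distracted"     # looking down: could be a phone, or note-taking
    return "inattentive"        # everything else, including turning to a classmate

if __name__ == "__main__":
    # A student glancing down to take notes gets the same label as one on a phone.
    print(classify_attention(HeadPose(yaw=5.0, pitch=-30.0)))  # -> "distracted"

The point of the caricature is that the label depends entirely on thresholds someone chose: looking down to take notes and looking down at a phone produce the same flag, yet that flag may be presented to teachers as an objective measurement of engagement.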

Other AI-based tools go even further, analysing a large range of factors - not all of them physical or visible - to identify and track the emotional health of their students.

Moreover, some claim to be able to use the pupil data they process not only to analyse mental health, but also to make recommendations for predictive interventions based on it.

“Imagine that, this term, you could quickly identify which of the 1,000 students had hidden mental health risks, even if they were not visible.” – STEER Education Ltd website

One AI company operating in UK schools with thousands of children, STEER Education Ltd, claims to offer an “evidence-based tool to measure, track and improve how each young person self-regulates four factors which are fundamental to wellbeing and good mental health…[identifying] hidden social-emotional risks that might otherwise go undetected, equipping teachers to proactively target their support, and measure impact.”

Sounds promising. But in 2019, parents contacted Defend Digital Me (DDM), concerned that the company’s algorithms could influence their child’s mental health, or make some sort of assessment of it without consent and without parents being able to understand it fully, and that the company retains this data and their children’s profiles.

The data collected, according to a Data Protection Impact Assessment completed by STEER and provided to DDM by Academies Enterprise Trust, could include highly sensitive information such as “recently bereaved”, with a welfare plan, “heavily committed”, “gifted”, and “passport nationality”.

Parents felt that there was no transparent way for children, staff, or parents to independently validate any of the company’s claims, and that it is excessive for a school to “curate a unique 10 year record of a child’s social-emotional development, monitoring their wellbeing through adolescence.”

In subsequent communications, the Operations Director at STEER told Defend Digital Me:

“We curate students’ tracking data over several years in order to pass it to them as an asset they own at the age of 16. Indeed, this is the very core of STEER’s mission to empower young people. Psychological literature has shown that resilience in later life is strongly dependent on understanding your own personal psychological developmental journey.”

In February 2020, Defend Digital Me was told that the UK’s data protection authority, the Information Commissioner’s Office (ICO), had:

"made enquiries with STEER, and…found that it is likely that STEER and the schools using their services are in contravention of the UK General Data Protection Regulation (UK GDPR) or [UK] Data Protection Act 2018 (DPA18). I can confirm that this finding means that I partially uphold the concerns you raised around Article 5(1)(a), Article 5(1)(b), Article 9, and Article 35 of the UK GDPR.”

However, there was no public reporting of any enforcement action by the ICO.

Our Concerns

As you can see, the creation of scores (such as rating the likelihood that a student is cheating), profiles (like those created to monitor students’ mental health), and the drawing of inferences (such as inferring a student’s interest from their facial expressions) is routine in EdTech, and raises two major concerns.

First of all, staff in educational settings may come to rely on such automated scores, profiles and inferences, uncritically accepting the conclusions and following the predictions and recommendations of the systems – so-called ‘automation bias’, a phenomenon that has been studied since the 1990s in fields such as healthcare, aviation, and the military. This may lead to interventions based on deeply flawed technologies that have the potential to seriously damage a child’s education and, with it, their future.

Moreover, in the UK (and some other countries) school data are accessed by many bodies outside the direct school environment, such as local authorities and the national Department for Education. This is partly for statistical purposes, but also to underpin other decisions, such as those involved in the UK’s Prevent system, which is intended to stop young people from becoming “extremist”.

These functions entail the processing of a child’s personal data by dozens of companies in a single day, every day, across their entire education. Reliance on the outputs of EdTech systems can therefore be widespread, and the data suffer a collapse of context from their original point of collection. Individual teachers have context that comes from interacting with their students outside these data-collection systems; these third parties do not. And yet those third parties may make equally significant decisions about those students on the basis of those data.

Secondly, given the opacity of the technologies – and the tendency of EdTech companies to refuse to share details of their software and algorithms with schools or parents (or anyone), on the basis that they constitute “commercial secrets” – schools and teachers rarely have the information or skills to check the accuracy of the predictions or the appropriateness of the recommendations on which they are being asked to rely.

This ‘one-way mirror’ effect – constant monitoring of learners by companies with no transparency over their business models, inadequate oversight of the procurement process, and little accountability for rights or redress when things go wrong – has much in common with the issues Privacy International has identified in its work on public-private partnerships. It is a disturbing trend throughout all of the forms of EdTech mentioned in this article.

One key outcome of this opacity, for learners, families and indeed educational settings themselves, is that nobody can find out whether these systems suffer from excessive false positives or false negatives, or are tainted by bias that leads to discrimination against certain groups of students.
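
False positives matter more than they might appear, because the outcomes these systems try to predict are rare. The figures below are entirely hypothetical, invented to illustrate the base-rate arithmetic rather than taken from any product’s documentation: even a flagging system with seemingly good accuracy will, in a typical school population, produce far more false alarms than genuine detections.

# Hypothetical base-rate illustration: every number here is invented for the example.
students = 1000          # size of the cohort
prevalence = 0.01        # assume 1% of students genuinely at risk
sensitivity = 0.90       # assume the system flags 90% of genuinely at-risk students
specificity = 0.95       # assume it correctly clears 95% of everyone else

at_risk = students * prevalence                              # 10 students
true_positives = at_risk * sensitivity                       # 9 correct flags
false_positives = (students - at_risk) * (1 - specificity)   # 49.5 false alarms

precision = true_positives / (true_positives + false_positives)
print(f"Flags raised: {true_positives + false_positives:.0f}")
print(f"Share of flags that are correct: {precision:.0%}")   # roughly 15%

Under these invented but not implausible assumptions, roughly five out of every six students flagged would not in fact be at risk, and neither the school nor the family has any way of knowing whether the real system performs better or worse than this, because the error rates are never published.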

Moreover, the intensive surveillance we describe here may actively damage children’s education. One paper from the eQuality Project suggests that:

the “continual policing” that students experience…makes it more difficult to develop the relationships of trust that are at the heart of education.

And that surveillance:

“makes it more difficult for students to get help when they need it, even from a teacher they trust, because they feel that they will be judged on the basis of their digital footprint instead of their own experiences and perceptions of the event.”

Rather than enhancing children’s education or protecting them, schools may be substituting order for safety and surveillance for learning, accompanied by an opacity that makes this difficult to disentangle or challenge.

This refocusing on order and control above all is one of the – perhaps the – most pernicious aspects of the latest forms of EdTech.

What now?

If you’ve read this article and want to know what you can do next, remember: we are always looking for new examples of inappropriate surveillance technologies being deployed in schools.

Find our tracker

But tracking will not be enough. Remember, if you attend a school or university that uses education technologies and you’re based in the UK, the EU, or another jurisdiction with a data protection framework, you can find out what information it is collecting about you by making a Subject Access Request. Outside these areas, your local data protection law may provide a similar right.

If you’re a teacher and want to learn more about how to discuss privacy with your students, you can find our resource for educators.

Article published with thanks to Defend Digital Me and Douwe Korff.
