Content type: Examples
Almost half of all job seekers are using AI tools such as ChatGPT and Gemini to help them write CVs and cover letters and complete assessments, flooding employers and recruiters with applications in an already-tight market. Managers have said they can spot giveaways that applicants used AI, such as US-style grammar and bland, impersonal language. The AI-padded applications that perform best come from candidates who have paid for ChatGPT - overwhelmingly those from higher socio-economic backgrounds, male, and…
Content type: Examples
The UK's Department for Education intends to appoint a project team to test edtech against set criteria in order to choose the highest-quality and most useful products. Extra training will be offered to help teachers develop enhanced skills. Critics suggest it would be better to run a consultation first to work out what schools and teachers want. Link to article. Publication: Schools Week. Writer: Lucas Cumiskey
Content type: Examples
The UK's new Labour government is giving AI models special access to the Department for Education's bank of resources in order to encourage technology companies to create better AI tools to reduce teachers' workloads. A competition for the best ideas will award an additional £1 million in development funds. Link to article. Publication: Guardian. Writer: Richard Adams
Content type: Examples
The Utah State Board of Education has approved a $3 million contract with Utah-based AEGIX Global that will let K-12 schools in the state apply for funding for AI gun detection software from ZeroEyes for up to four cameras per school. The software will work with the schools' existing camera systems and notify police when the detection of a firearm is verified at the ZeroEyes control centre. The legislature will consider additional funding if the early implementation is successful. The…
Content type: Examples
France has been testing AI tools with security cameras supplied by the French technology company Datakalab in the Paris Metro system and on buses in Cannes to detect the percentage of passengers who are wearing face masks. The system does not store or disseminate images and is intended to help authorities anticipate future outbreaks.
https://www.theguardian.com/world/2020/jun/18/coronavirus-mass-surveillance-could-be-here-to-stay-tracking
Writer: Oliver Holmes, Justin McCurry, and Michael Safi…
Content type: Examples
A growing number of companies - for example, San Mateo start-up Camio and AI startup Actuate, which uses machine learning to identify objects and events in surveillance footage - are repositioning themselves as providers of AI software that can track workplace compliance with covid safety rules such as social distancing and wearing masks. Amazon developed its own social distancing tracking technology for internal use in its warehouses and other buildings, and is offering it as a free tool to…
Content type: Examples
After governments in many parts of the world began mandating wearing masks when out in public, researchers in China and the US published datasets of images of masked faces scraped from social media sites to use as training data for AI facial recognition models. Researchers from the startup Workaround, who published the COVID19 Mask Image Dataset to GitHub in April 2020, claimed the images were not private because they were posted on Instagram and therefore permission from the posters was not…
Content type: Examples
Researchers are scraping social media posts for images of mask-covered faces to use to improve facial recognition algorithms. In April, researchers published to GitHub the COVID19 Mask Image Dataset, which contains more than 1,200 images taken from Instagram; in March, Wuhan researchers compiled the Real World Masked Face Dataset, a database of more than 5,000 photos of 525 people they found online. The researchers have justified the appropriation by saying images posted to Instagram are public…
Content type: Examples
Many of the steps suggested in a draft programme for China-style mass surveillance in the US are being promoted and implemented as part of the government’s response to the pandemic, perhaps due to the overlap of membership between the National Security Commission on Artificial Intelligence, the body that drafted the programme, and the advisory task forces charged with guiding the government’s plans to reopen the economy. The draft, obtained by EPIC in a FOIA request, is aimed at ensuring that…
Content type: Examples
The AI firm Faculty, which worked on the Vote Leave campaign, was given a £400,000 UK government contract to analyse social media data, utility bills, and credit ratings, as well as government data, to help in the fight against the coronavirus. This is at least the ninth contract awarded to Faculty since 2018, for a total of at least £1.6 million. No other firm was asked to bid on the contract, as the normal requirement for public bodies to use competitive procurement has been waived in the interests…
Content type: Examples
Russia has set up a coronavirus information centre to monitor social media for misinformation about the coronavirus and spot empty supermarket shelves using a combination of surveillance cameras and AI. The centre also has a database of contacts and places of work for 95% of those under mandatory quarantine after returning from countries where the virus is active. Sberbank, Russia's biggest bank, has agreed to pay for a free app that will provide free telemedicine consultations.
Source:…
Content type: Examples
The US Department of Homeland Security awarded a $113 million contract to General Dynamics to carry out the Visa Lifecycle Vetting Initiative (VLVI), a renamed version of the Extreme Vetting Initiative and part of a larger effort called the National Vetting Enterprise. In May 2018, public outrage led the DHS to back away from a machine learning system that would monitor immigrants continuously; however, the reason it gave was that the technology to automate vetting did not yet exist. These…
Content type: Examples
VeriPol, a system developed at the UK's Cardiff University, analyses the wording of victim statements in order to help police identify fake reports. By January 2019, VeriPol was in use by Spanish police, who said it had helped them identify 64 false reports in one week and was successful in more than 80% of cases. The basic claim is that AI can find patterns that are common to false statements; among the giveaways, experts say, false statements are likely to be shorter than genuine ones, focus…
Content type: Examples
In October 2018, the Singapore-based startup LenddoEFL was one of a group of microfinance startups aimed at the developing world that used non-traditional types of data such as behavioural traits and smartphone habits for credit scoring. Lenddo's algorithm uses numerous data points, including the number of words a person uses in email subject lines, the percentage of photos in a smartphone's library that were taken with a front-facing camera, and whether they regularly use financial apps on…
Content type: Examples
In November 2018, tests began of the €4.5 million iBorderCtrl project, which saw AI-powered lie detectors installed at airports in Hungary, Latvia, and Greece to question passengers travelling from outside the EU. The AI questioner was set to ask each passenger to confirm their name, age, and date of birth, and then query them about the purpose of their trip and who is paying for it. If the AI believes the person is lying, it is designed to change its tone of voice to become "more skeptical"…
Content type: Examples
In November 2018, worried American parents wishing to check out prospective babysitters and dissatisfied with criminal background checks began paying $24.99 for a scan from the online service Predictim, which claimed to use "advanced artificial intelligence" to offer an automated risk rating. Predictim based its scores in part on Facebook, Twitter, and Instagram posts - applicants were required to share broad access to their accounts - and offered no explanation of how it reached its risk…
Content type: Examples
In November 2018, researchers at Sweden's University of Lund, the US's Worcester Polytechnic Institute, and the UK's Oxford University announced that in August the US State Department had begun using a software program they had designed that uses AI to find the best match for a refugee's needs, including access to jobs, medical facilities, schools, and nearby migrants who speak the same language. Refugees matched by the program, known as "Annie MOORE", were finding jobs within 90 days about a…
Content type: Examples
In 2017, US Immigration & Customs Enforcement (ICE) announced that it would seek to use artificial intelligence to automatically evaluate the probability of a prospective immigrant “becoming a positively contributing member of society.” In a letter to acting Department of Homeland Security Secretary Elaine Duke, a group of 54 concerned computer scientists, engineers, mathematicians, and researchers objected to ICE’s proposal and demanded that ICE abandon this approach because it would be…
Content type: Examples
Computer programs that perform risk assessments of crime suspects are increasingly common in American courtrooms, and are used at every stage of the criminal justice system to determine who may be set free or granted parole, and the size of the bond they must pay. By 2016, the results of these assessments were given to judges during criminal sentencing, and a sentencing reform bill was proposed in Congress to mandate the use of such assessments in federal prisons. In a study of the risk scores…
Content type: Examples
In 2015, London's Royal Free, Barnet, and Chase Farm hospitals agreed to provide Google's DeepMind subsidiary with access to an estimated 1.6 million NHS patient records, including full names and medical histories. The company claimed the information, which would remain encrypted so that employees could not identify individual patients, would be used to develop a system for flagging patients at risk of acute kidney injury, a major reason why people need emergency care. Privacy campaigners…
Content type: Examples
Few people realise how many databases may include images of their face; these may be owned by data brokers, social media companies such as Facebook and Snapchat, and governments. The systems in use by Snap and the Chinese start-up Face++ don't save facial images, but instead map detailed points on faces and store that data. The FBI's latest system, as of 2017, gave it the ability to scan the images of millions of ordinary Americans collected from millions of mugshots and the driver's licence…
Content type: Examples
By 2017, facial recognition was developing quickly in China and was beginning to become embedded in payment and other systems. The Chinese startup Face++, valued at roughly $1 billion, supplies facial recognition software to Alipay, a mobile payment app used by more than 120 million people; the dominant Chinese ride-hailing service, Didi; and several other popular apps. The Chinese search engine Baidu is working with the government of popular tourist destination Wuzhen to enable visitors to…