News article by Cath Everett.
Published by Diginomica.
SUMMARY: The UK government ignored its own guidance and refused external help, resulting in an A-level algorithm disaster that disproportionately impacted students from disadvantaged backgrounds. But what exactly went wrong, and what are the implications?
The reputation of UK government technology projects – and by extension that of the government itself – has been hard hit lately, not least due to last week’s A-level exam results disaster.
First, there have been the seemingly endless development U-turns and delays to the launch of the Covid-19 contact-tracing app.
Then there was the Home Office’s decision earlier this month to scrap a controversial artificial intelligence (AI) system that had been in use since 2015 to process visa applications, ahead of a judicial review requested by the charity the Joint Council for the Welfare of Immigrants. Campaigners claimed the software created a “hostile environment” for migrants and was biased in favour of white applicants.
But the third and most embarrassing debacle of all is undoubtedly the A-level exam results fiasco. Even though exam regulator Ofqual had been warned by external advisors, including the United Learning schools trust, that the mathematical (rather than AI) algorithm being used to assess results was “volatile” and risked producing erratic outcomes, it pressed ahead anyway under ministerial pressure to prevent grade inflation. [ . . . ]