There is still a week to go before GCSE results are published, so there is still time to avoid replaying last week’s car crash, writes Dr Kevin Stannard, Director of Innovation and Learning, GDST. (Originally published by TES on 17.08.2020.)
So far this has been a syllabus of errors. The first mistake was made almost at the outset. While students of all ages were stranded by lockdown, relying on as much remote support as their schools could offer, those in Years 11 and 13 were told explicitly that with summer exams cancelled, there was no need to do any further study. These cohorts were not just told to stay away from school, they were summarily dismissed.
The DfE’s decision was made in good faith, reflecting concerns that under lockdown, with exams cancelled and reliance placed on teacher estimates, further study would have given a huge advantage to those with the support systems to maintain meaningful study. But little thought appears to have been given to the unintended consequences of the decision. As we saw in Scotland, the result of the methodology put in place to award grades may well have been to disadvantage those very students that the initial demobilisation of two whole cohorts was intended to protect.
In England, the original safety-net plan had been an autumn series of exam sittings, for students to use if they were unhappy with their calculated grades. Yet again, the plan seems like an untested initial idea. Students who followed the DfE’s guidance and furloughed themselves in March will, if they decide to enter the autumn exams, have just a few weeks to get up to full speed from a standing start. Schools will differ in their ability to support them in this endeavour, and students will be distracted from the start of their A level courses, hobbling them for the next set of exams.
The methodology for calculating grades was put in place in a manner reminiscent of Gromit laying down track just ahead of the train. There seems to have been a widespread assumption, not vigorously corrected by the DfE, that grades would be based on teacher estimates. This was never the intention, as Ofqual’s ‘consultation’ made clear. The main driver was always a school’s exam performance in recent years, adjusted to take account of anything known about the ability range of the current cohort. This was always going to risk disadvantaging able students in historically poor-performing schools, and especially those with a steep upward trajectory.
This methodology has produced a grade distribution not massively out of line with previous years (indeed, pass rates are a bit higher), and for many individual schools the percentage of top grades may well be close to a three-year average. But for a number of reasons the grades have been allocated unfairly, so that many individuals have not got the grades they deserved.
In small-entry subjects, teacher estimates have carried more weight, because the inevitable swings in grade distribution at a school from year to year meant that statistical modelling was inadequate. Yet in the larger-entry subjects the statistical model predominated, and significant downward moderations occurred, without any clear reason why. So a student’s grades at A level this year have depended on their choice of subject, as well as the school they attended.
Ofqual insists that the vast majority of students this year got within a grade of their teachers’ estimates. This sounds OK at system level, but it means it wouldn’t be at all unusual for a student expecting AAA to receive BBB, with some outliers (a few in each school, but amounting to a sizeable total number) ending up with BBC. Hidden by the overall statistics, the outcome for some individuals has been devastating.
With confidence collapsing, should England have followed Scotland and reverted to the grades estimated by schools? It is useful to think of qualifications as currencies. The effect of restoring estimated grades would have been to markedly increase the number of top grades. This inflation would have taken the immediate heat out of the political storm, but would effectively devalue the ‘currency’. With an inflated number of top grades, many students in the cohort stand to suffer (universities swamped with ‘too many’ students who met their conditional offers; employers not trusting the 2020 awards), while students in subsequent year groups might understandably feel aggrieved. There is a wider jeopardy too, because A level has a very large entry and is ‘pegged’ to international qualifications. A loss of confidence in GCE would trigger a wider crisis. There are analogies here with the gold standard, or the status of the dollar under Bretton Woods.
So reverting to the schools’ estimates isn’t the silver bullet. Neither is the very odd idea of using mocks for something they were never designed to do. That simply adds another layer of randomness.
It may be too late to avoid problems with A level, given the urgency of decisions around university places. But there is still a week to go before GCSE results are published. So time to avoid replaying last week’s car crash. If the argument against using teacher predictions at A level is that grade inflation would fatally damage the qualification’s validity, that doesn’t apply at GCSE. It really doesn’t matter if grades are inflated in one particular year.
Lance the boil: give everyone the GCSE grades their schools predicted. In two years’ time, universities will have to find a way of distinguishing among the larger cohort of well-qualified applicants. Better than those students hobbling themselves by spending next term on retakes rather than getting a good start on their A level subjects.