The 2021 A-Level ‘Teacher Awarded Grades’ – Incomparable with 2019’s but more Valid?

Nearly double the number of students received top grades in 2021 compared to 2019:

While a politician might try to convince you these two sets of results are measuring the same thing, it’s obvious to anyone that they are not.

The 2021 results are ‘Teacher Awarded Grades’; they are not the same thing as the 2019 exam results (NB this doesn’t necessarily mean the 2021 results are ‘worse’ or ‘less valid’ than 2019’s – it might be the former, and all previous years’ results, which lacked validity).

The 2019 results measured the actual performance of students under exam conditions; we can call those ‘exam results’.

The 2021 results were ‘teacher awarded grades’ based on some kind of in-house assessment, and marked in-house.

And this difference in assessment and marking procedures seems to be the most likely explanation for the huge increase in top grades.

NB – this means there is no reliability between the 2020 and 2021 results and those of all previous years; there is a ‘reliability break’, if you like, so no meaningful comparison can be made.

This is quite a nice example of that key research methods concept of (lack of) reliability.

The 2019 exam procedure

The 2019 results measured what students actually achieved in standardised A-level examinations –

  1. ALL students sat the same set of exams prepared by an exam-board at the same time and under broadly similar conditions.
  2. It is guaranteed that students would have sat these exams blind, i.e. without seeing the questions in advance.
  3. All exam work was assessed independently by professional examiners.
  4. The work was moderated by ‘team leaders’.

What this means is that you’ve got students all over England and Wales being subjected to standardised procedures, everyone assessed in the same way.

The 2021 Teacher Awarded Grade procedure

  1. Schools and teachers set their own series of in-house assessments, no standardisation across centres.
  2. There is no guarantee about how blind these assessments were or any knowledge about the conditions, no standardisation across centres.
  3. Teachers marked their own in-house assessments themselves – in small centres (such as private schools) this may well have been literally the same teacher who taught the students; in larger centres the marking was more likely shared across several teachers in the same department, but not necessarily – we don’t know.
  4. There was no external moderation of teacher assessed work, at least not in the case of regular exam based A-levels.

You have to be a politician to be able to claim the above two procedures are in the remotest bit comparable!

They are clearly so different that you can’t compare 2019’s results with 2021’s; there has been a radical shift in the means of assessment, and the result is a socially constructed process of grade inflation.

So which is the more valid set of results – 2019’s or 2021’s?

IF the purpose of grades is to give an indication of students’ ability in a subject, then maybe this year’s results are more valid than 2019’s?

I’m no fan of formal examinations, and the one big advantage of 2021 is that there were none, allowing more time for teaching and learning, less time worrying about exam technique, and probably a lot less stress all round (the latter was not the case in 2020).

This year’s assessment procedures would probably have been more natural (had more ecological validity) than a formal examination – it’s hard to get more artificial than an exam after all.

And of course the students are the big winners: more of them have higher grades, and no doubt those that have them are chuffed – and I’ve nothing against more young people having something good happen to them; lord knows they have enough problems in their lives, now and going forwards, as it is!

The problem with the 2021 model is the lack of objectivity and standardisation – we simply don’t know which of those students would actually have got an A or A* under standardised conditions (certainly not all of them), so we don’t know who is best at exams.

But does the latter matter? Do we really need to know who is marginally better at performing under the artificiality of exams anyway?

When it comes to the job market further down the line, it’s unlikely that A-level exam performance will have much bearing on someone’s ability to do a job, so maybe it’s better that more students won’t have a string of Cs held against them, as would have been the case for the 2019 and previous cohorts.

And someone’s ability to do a job can be determined with a rigorous interview procedure, after all.

The difficult decision is going to be what we do with next year’s results, assuming that exams are reinstated – IF the class of 2022 come out with a spread of results similar to 2019’s rather than 2021’s, that doesn’t seem like a fair outcome to me.

Find out More

The Education Policy Institute has an objective analysis of the 2021 A-level results.
