‘Results’ Day

Last Updated on August 14, 2020 by Karl Thompson

Students like to think that their exam results are primarily down to their own individual effort and ability (their 'merit', if you like), and these are indeed two of the factors which influence their exam results.

However, the results statistics show us that social factors such as parental income, wider social class background, gender and ethnicity clearly impact the results.

To put it in stark terms: being born to middle-class Indian parents gives you a much better chance of getting three A grades at A-level than being born to white working-class parents.

Granted, within your 'cultural' grouping, individual factors such as raw intelligence and ability are going to affect results; in some cases that ability and effort will be so outstanding that some white working-class kids will do better than some middle-class Indian kids. But on average, social factors affect the results too.

Thus, you could say that we end up with skewed, unfair results every year, because the exam results are at least partially measuring class, gender and ethnic background.

The school that pupils attend also has an 'effect', on average: some schools get persistently good results (mainly the independent schools), a few schools seem doomed to failure, and most schools chug along somewhere in the middle.

However, that said, at least when individual students sit exams, they are assessed by the same standards and ranked against each other according to those standards, and they can move up or down from their 'class/gender/ethnicity' base-average depending on their individual effort and ability, or lack of either. So in that sense, exams are fair.

What usually happens once all the exams have been marked, according to the same standards, is that the chief examiners look at the spread of results and then decide what raw mark translates to a pass grade (an E grade), and what raw mark counts as an A* grade.

Generally speaking, the two boundaries – U/E and A* – yield similar percentages each year. In Sociology it's around a 98% pass rate and a 5% A* rate (NB that is from memory, so excuse any inaccuracy), and within that, students receive A-E grades relative to other people, with everyone having sat the same exam.
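To make that logic concrete, here is a minimal sketch of percentile-based boundary setting in Python. To be clear, this is not any exam board's actual procedure: the function name, the toy cohort and the rates (taken from the rough figures above) are all illustrative assumptions.

```python
# A minimal sketch of percentile-based grade boundary setting.
# The rates are the rough figures mentioned above (~98% pass, ~5% A*),
# not official ones, and this is NOT any exam board's actual procedure.

def grade_boundaries(raw_marks, pass_rate=0.98, a_star_rate=0.05):
    """Return (pass_mark, a_star_mark) such that roughly `pass_rate`
    of the cohort pass and roughly `a_star_rate` get an A*."""
    marks = sorted(raw_marks)
    n = len(marks)
    # Lowest mark that still leaves ~98% of the cohort at or above it.
    pass_mark = marks[int(n * (1 - pass_rate))]
    # Lowest mark held by roughly the top ~5% of the cohort.
    a_star_mark = marks[int(n * (1 - a_star_rate))]
    return pass_mark, a_star_mark

cohort = [34, 41, 48, 52, 55, 58, 60, 63, 66, 70, 72, 75, 78, 81, 90]
print(grade_boundaries(cohort))  # (34, 90) for this toy cohort
```

The point of doing it this way round – fixing the percentages first, then reading off the raw marks – is that the boundaries float with the difficulty of that year's paper, while the share of students at each grade stays roughly stable.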

The 2020 Results Fiasco

This 'standardisation' – students sitting the same exam and those exams being marked according to the same standards – didn't happen this year, because students did not sit exams.

Instead, exam results were based on teacher-predicted grades, which were then modified according to a black-box algorithm which, as I understand it, took account of factors such as the track record of the school.

The problem with results being based on teacher predictions

On the face of it, teachers are the ones best placed to decide what grades their students would have got had they sat the exams: they know their students, and they have evidence from at least a year's worth of work.

The problem is that teachers don't all use the same standards to mark work – some are harsh, some are soft, with different theories about the best way to motivate students – so if mark-book grades are to be used as evidence, students are not being assessed in the same way.

A second problem is that teachers will inflate the predicted grades, or at least most of them will – it's a competitive system, so of course you're going to game the results up as far as you can without the grades looking like a complete fantasy.

Different teachers and schools will have different comfort levels about how far to push these grades, while some will actually have been professional and given accurate grades – so that's another reason why teacher and institution grades aren't a great way of awarding results.

However, the strength of this system is that even if teachers have exaggerated results, they should have exaggerated them in line with the perceived effort and ability of their pupils, so at least it takes these individual-level factors into account.

Enter the algorithm

Hence the exams authority moderated the results – they know there is bias between institutions. And at the end of the day, we've ended up with overall results which are slightly better than in previous years, which seems, on average, a fair way to do it.

By the logic of an algorithm which works on averages, that is fair – for this year’s students, on average, to come out with slightly better results.

Assuming the algorithm has tweaked all the students' results in one institution, across all subjects, to the same degree, we should have fair individual-level results too.
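To show the general idea – and only the general idea, since Ofqual's actual model was far more complex than this – here is a toy sketch in Python of institution-level moderation as described above: every student's predicted grade in a centre gets shifted by the same amount, anchored to that centre's historical average. All the names and numbers are illustrative assumptions, not the real algorithm.

```python
# Toy sketch of institution-level moderation, NOT Ofqual's actual model.
# Idea from the text: shift every teacher-predicted grade in a centre by
# the same amount, anchored to the centre's historical average grade.

GRADES = ["U", "E", "D", "C", "B", "A", "A*"]  # low -> high

def moderate(predicted, historical):
    """Shift all predicted grades by the gap between the centre's
    historical mean grade and the mean of this year's predictions."""
    to_num = {g: i for i, g in enumerate(GRADES)}
    pred_mean = sum(to_num[g] for g in predicted) / len(predicted)
    hist_mean = sum(to_num[g] for g in historical) / len(historical)
    shift = hist_mean - pred_mean  # negative if teachers over-predicted
    moderated = []
    for g in predicted:
        adjusted = round(to_num[g] + shift)
        adjusted = max(0, min(len(GRADES) - 1, adjusted))  # clamp U..A*
        moderated.append(GRADES[adjusted])
    return moderated

teacher_predictions = ["A", "A", "B", "C", "B"]
last_years_results = ["B", "C", "C", "D", "B", "C"]
print(moderate(teacher_predictions, last_years_results))
# ['B', 'B', 'C', 'D', 'C'] -- everyone pulled down by about one grade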

The problem

In a nutshell, it’s cases like these….

As I understand it, the problem is that some schools have been penalised more than others – especially rapidly improving schools – and in any school where the teachers were stupid enough to be honest about predicted grades, the pupils will have lost out massively too.

I'm not sure how representative these case studies are – TBH I think they're in a minority – but honestly, it's not great for those students involved!
