The 2021 A-Level ‘Teacher Awarded Grades’ – Incomparable with 2019’s but more Valid?

Nearly double the number of students received top grades in 2021 compared to 2019.

While a politician might try to convince you these two sets of results are measuring the same thing, it’s obvious to anyone that they are not.

The 2021 results are 'Teacher Awarded Grades'; they are not the same thing as the 2019 exam results. (NB this doesn't necessarily mean the 2021 results are 'worse' or 'less valid' than 2019's – it might be the former, and all previous years' results, which lacked validity.)

The 2019 results measured the actual performance of students under exam conditions; we can call those 'exam results'.

The 2021 results were ‘teacher awarded grades’ based on some kind of in-house assessment, and marked in-house.

And this difference in assessment and marking procedures seems the most likely explanation for the huge increase in top grades.

NB – this means there is no reliability between the 2020 and 2021 results and those of all previous years; there is a 'reliability break', if you like, so no meaningful comparison can be made.

This is quite a nice example of that key research methods concept of (lack of) reliability.

The 2019 exam procedure

The 2019 results measured what students actually achieved in standardised A-level examinations:

  1. ALL students sat the same set of exams, prepared by an exam board, at the same time and under broadly similar conditions.
  2. Students were guaranteed to have sat these exams 'blind': they did not know the questions in advance.
  3. All exam work was assessed independently by professional examiners.
  4. The work was moderated by 'team leaders'.

What this means is that you've got students all over England and Wales being subjected to standardised procedures: everyone was assessed in the same way.

The 2021 Teacher Awarded Grade procedure

  1. Schools and teachers set their own series of in-house assessments; there was no standardisation across centres.
  2. There is no guarantee about how 'blind' these assessments were, and no knowledge of the conditions under which they were sat; again, no standardisation across centres.
  3. Teachers marked their own in-house assessments. In small centres (private schools) the marking may well have been done by the same teacher who taught the students; in larger centres it was more likely shared across several teachers in the same department, but not necessarily – we don't know.
  4. There was no external moderation of teacher-assessed work, at least not in the case of regular exam-based A-levels.

You have to be a politician to be able to claim that the above two procedures are in the remotest bit comparable!

They are clearly so different that you can't compare 2019's results with 2021's: there has been a radical shift in the means of assessment, and what we are seeing is a socially constructed process of grade inflation.

So which is the more valid set of results – 2019's or 2021's?

IF the purpose of grades is to give an indication of a student's ability in a subject, then maybe this year's results are more valid than 2019's?

I'm no fan of formal examinations, and the one big advantage of 2021 is that there were none, allowing more time for teaching and learning, less time worrying about exam technique, and probably a lot less stress all round (the latter was not the case in 2020).

This year’s assessment procedures would probably have been more natural (had more ecological validity) than a formal examination – it’s hard to get more artificial than an exam after all.

And of course the students are the big winners: more of them have higher grades, and no doubt those who have them are chuffed. I've nothing against more young people having something good happen to them; lord knows they have enough problems in their lives, now and going forwards, as it is!

The problem with the 2021 model is the lack of objectivity and standardisation – we simply don't know which of those students would actually have got an A or A* under standardised conditions (certainly not all of them would have), so we don't know who is best at exams.

But does the latter matter? Do we really need to know who is marginally better at performing under the artificiality of exams anyway?

When it comes to the job market further down the line, it's unlikely that A-level exam performance will have much bearing on someone's ability to do a job, so maybe it's better that more students won't have a string of Cs held against them, as would have been the case for the 2019 and previous cohorts.

And someone’s ability to do a job can be determined with a rigorous interview procedure, after all.

The difficult decision is going to be what we do with next year's results, assuming that exams are reinstated – IF the class of 2022 comes out with a spread of results similar to 2019's rather than 2021's, that doesn't seem like a fair outcome to me.

Find out More

The Education Policy Institute has an objective analysis of the 2021 A-level results.

Sociological Perspectives on the 2020 Exam Results Fiasco

What a mess this year's exam results were!

First of all, students were awarded their results based primarily on an algorithm, which adjusted centre-predicted grades up or down depending on each centre's historical results record.

Then those results were scrapped in favour of the original teacher-predicted grades, submitted several months earlier – unless the algorithm grade was better!

And finally, amidst all that chaos, BTEC students were simply forgotten about, with the publication of their results being delayed.

Unfortunately there isn't a 'total balls-up' perspective in Sociology, which would most definitely be the best fit to explain what occurred, and I'm not sure that any one perspective can really explain what's going on, but there are some concepts we can apply….

Marxism

A basic tenet of Marxism applied to education is that the education system tends to benefit the middle classes more than the working classes, and especially the 7% of privately schooled kids compared to the other 93% who are educated in state schools.

The algorithm which was used to adjust teacher-predicted grades benefitted those from higher-class backgrounds more than those from lower-class backgrounds.

You’ll need to follow the Twitter threads below for the evidence…

The Power of Popular Protest

However, students protested…..

And as we all know, the algorithm was overturned, and we ended up with teacher predicted grades being the basis for results (unless the algorithm gave students a better result of course!).

So in this case the system did try to screw the working classes, but popular protest managed a small victory.

NB – it's worth pointing out that independently schooled kids probably still have better results on average than working-class kids, so while this may feel like a victory, maybe it's no big deal really?

Labelling Theory

I think there's an interesting application here in relation to teacher-predicted grades. Teachers have clearly exaggerated these as much as they can, because the results are on average nearly a grade up compared to last year – a great example of teachers positively labelling their students by giving them the highest grades they might have achieved.

It kind of shows you that, at the end of the day, teachers are more positive about their students than negative.

For one year only, we’ve got results based on labels, the projections in teachers’ heads rather than being based on objectively measured performance. In some cases over the next year we are going to see the limitations of labelling theory – just because a teacher says someone is capable of getting 5 good GCSEs doesn’t mean they are going to be able to cope with A levels rather than BTECs at college.

Keep in mind that some of the teacher-predicted grades are going to be utter fantasy, and not every case is going to end in a self-fulfilling prophecy – there are going to be a lot of failures at A-level as thousands of over-predicted students fail to cope.

Probably less so at universities – they need the money from tuition fees, so they’ll probably just lower their standards for this cohort.

Functionalism

You may think that this has no relevance, HOWEVER, the system hasn’t collapsed, has it?

There was a bit of a blip, a few people got upset and protested, and now this year's students have ended up with much better results than last year's, based on teacher-predicted grades which are clearly about as exaggerated as teachers could get away with.

And now we’re all heading back to college and university and things are going to go back to the ‘new normal’, without anything very much changing, despite the fact that so many flaws have been revealed in how the exam system works.

I’d say this whole fiasco has been a pretty good example of a system coping well with a crisis and coming out the other side relatively unscathed.

Postmodernism and Late Modernism

The extent to which these apply is a bit of a mixed bag….

The government certainly showed a high degree of uncertainty about how to award results, resulting in widespread chaos, which certainly seems to fit with the postmodern perspective.

However, that's about as far as it goes, I think…. students and parents alike showed utter contempt for being ruled by an algorithm, which is one of the primary mechanisms of social control in post/late modern societies (via actuarialism) – and yet when its workings were brought to light, people resisted; they wanted justice and meritocracy rather than this bizarre way of managing selves.

Also the fact that people actually seem to care about their results and want a sense of justice isn’t really postmodern – it’s a very modernist concern, to be interested in one’s education and future career, and I get the feeling that rather than kicking back and enjoying their postmodern leisure time, students have just been generally worried about their results and their future.

So there’s been a high level of uncertainty and fear/ worry, that’s quite postmodern, but the fact that people actually care about education, that’s more modernist….

‘Results’ Day

Students like to think that their exam results are primarily down to their own individual effort and ability (their 'merit', if you like), and these are indeed two of the factors which influence their exam results.

However, the results statistics show us that social factors such as parental income, wider social class background, gender and ethnicity clearly impact the results.

To put it in stark terms: being born to middle class Indian parents gives you a much better chance of getting 3 A grades at A-level compared to being born to white working-class parents.

Granted, within your 'cultural' grouping, individual factors such as raw intelligence and ability are going to affect results; in some cases that ability and effort will be so outstanding that some white working-class kids will do better than some middle-class Indian kids. But on average, social factors affect the results too.

Thus, you could say that we end up with skewed, unfair results every year, because the exam results are at least partially measuring class, gender and ethnic background.

The school that pupils attend also has an ‘effect’, on average, with some schools getting persistently good results, mainly the independent schools, a few schools seemingly doomed to failure, and most schools chugging along somewhere in the middle.

However, that said, at least when individual students sit exams they are assessed by the same standards and ranked against each other according to those standards, and they can move up or down from their 'class/gender/ethnicity' base-average depending on their individual effort and ability, or lack of either; so in that sense, exams are fair.

What usually happens, once all the exams have been marked according to the same standards, is that the chief examiners look at the spread of results and then decide what raw mark translates to a pass grade (an E grade), and what raw mark counts as an A* grade.

Generally speaking, the two boundaries – U/E and A* – yield similar percentages each year: in Sociology it's around a 98% pass rate and a 5% A* rate (NB that is from memory, so excuse any inaccuracy), and within that students receive A-E grades relative to other people, with everyone having sat the same exam.
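In code terms, this kind of norm-referencing amounts to reading grade boundaries off the percentiles of the raw-mark distribution. Here's a minimal sketch: the scores, the target percentages and the boundary logic are all illustrative assumptions, not the exam boards' actual procedure.

```python
import numpy as np

# Illustrative only: made-up raw scores, and assumed cumulative targets
# loosely based on the sociology figures quoted above (from memory).
rng = np.random.default_rng(42)
raw_scores = rng.normal(loc=50, scale=15, size=10_000).clip(0, 100)

targets = {"E": 0.98, "D": 0.80, "C": 0.60, "B": 0.40, "A": 0.18, "A*": 0.05}

# Each boundary is the raw mark at which the target fraction of candidates
# sits on or above it, i.e. a percentile of this year's spread of results.
boundaries = {
    grade: np.percentile(raw_scores, 100 * (1 - frac))
    for grade, frac in targets.items()
}

for grade, mark in boundaries.items():
    print(f"{grade}: raw mark {mark:.0f} or above")
```

Run this on two different years' score distributions and the raw-mark boundaries shift, but the percentage of students in each grade band stays roughly constant – which is exactly why year-on-year results normally look so stable.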

The 2020 Results Fiasco

This 'standardisation' – students sitting the same exam, and those exams being marked according to the same standards – didn't happen this year, because students did not sit exams.

Instead, exam results were based on teacher-predicted grades, and then modified according to a black-box algorithm which, as I understand it, took account of factors such as the track record of the school.

The problem with results being based on teacher predictions

On the face of it, teachers are the ones best placed to decide what grades their students would have got had they sat the exams: they know their students, and they have evidence from at least a year's worth of work.

The problem is that teachers don't use the same standards to mark work – some are harsh, some are soft, with different theories about the best way to motivate students – so if mark-book grades are to be used as evidence, students are not being assessed in the same way.

A second problem is that teachers will inflate the predicted grades, at least most of them will – it’s a competitive system, so of course you’re going to game the results up as far as you can without the grades looking like a complete fantasy.

Different teachers and schools will have different comfort levels about how far to push these grades, while some will actually have been professional and given accurate ones – so the inflation itself isn't standardised, which is another reason why teacher- and institution-awarded grades aren't a great way of awarding results.

However, the strength of this system is that even if teachers have exaggerated results, they should have exaggerated them in line with their perception of each pupil's effort and ability, so at least it takes these individual-level factors into account.

Enter the algorithm

Hence the exams authority moderated the results – they know there is bias between institutions. And at the end of the day, we've ended up with overall results which are slightly better than in previous years, which seems, on average, a fair way to do it.

By the logic of an algorithm which works on averages, that is fair – for this year’s students, on average, to come out with slightly better results.

Assuming the algorithm has tweaked all the students' results in one institution, across all subjects, to the same degree, we should have fair individual-level results too.
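To make that logic concrete, here's a minimal sketch of what centre-level moderation of this kind might look like. Everything in it is an assumption for illustration – the grade scale, the uniform shift, the hypothetical centre data – it is emphatically not Ofqual's actual 2020 model, which was far more complex.

```python
# Purely illustrative: shift every student's predicted grade by the gap
# between the centre's predictions this year and its historical average.
GRADES = ["U", "E", "D", "C", "B", "A", "A*"]  # low to high

def moderate(predicted: list[str], historical: list[str]) -> list[str]:
    to_num = {g: i for i, g in enumerate(GRADES)}
    pred_mean = sum(to_num[g] for g in predicted) / len(predicted)
    hist_mean = sum(to_num[g] for g in historical) / len(historical)
    shift = round(hist_mean - pred_mean)  # one uniform adjustment per centre
    return [GRADES[max(0, min(len(GRADES) - 1, to_num[g] + shift))]
            for g in predicted]

# Hypothetical centre: optimistic predictions vs a weaker track record.
print(moderate(predicted=["A", "B", "B", "C"],
               historical=["C", "C", "D", "B"]))
# -> ['B', 'C', 'C', 'D']: everyone moved down one grade
```

Note how a uniform shift preserves the teacher's rank ordering of pupils within the centre – the 'fair individual-level results' assumption above – whereas the real controversy arose where adjustments did not fall so evenly.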

The problem

In a nutshell, it’s cases like these….

As I understand it, the problem is that some schools have been penalised more than others – especially rapidly improving schools; and in any school where the teachers were 'stupid' enough to be honest about predicted grades, the pupils will have lost out massively too.

I'm not sure how representative these case studies are – TBH I think they're in a minority – but honestly, it's not great for the students involved!

The Scottish Exam Results: The real losers are last year’s cohort, and the next!

Now they've had a day to do some basic analysis of the Scottish exam results, the newspapers have had a chance to put their spin on the story – and the narrative runs something like this:

First narrative – ‘Scottish pupils have had their teacher predicted grades lowered by the qualifications authority’.

Second narrative – 'Poor Scottish pupils have had their teacher predicted grades lowered more than rich pupils'.

Sources

Links to both the above are at the end of this article

This makes for a great story, but I think both narratives might be misleading. As far as I can see, this year's National 5 Scottish students have done better, on average, than they would have had they sat the exams.

If you compare the previous years’ results with the teacher predicted grades you get to see how exaggerated those predictions were…..

A comparison of previous year’s results with teacher predicted grades and the actual downward-adjusted grades

All of the data above is from the articles linked below – NB the blue column for the least and most deprived clusters is 2019 data only (A-C pass rate), and the exam results I'm looking at are the National 5s, equivalent to the English GCSE.

What’s really going on?

  1. Teachers in Scotland grossly inflated the predicted grades of their pupils – by around 10 percentage points on average compared to previous years.
  2. They exaggerated the results of the poorest students more than those of the richest (bloody left-wing teachers, that is!).
  3. The exam authorities modified the results downwards, but the results received are still much better than in previous years, showing an improvement.
  4. The poorest students have improved dramatically.

Analysis

It's highly unlikely that this cohort of students is hyper-successful compared to previous years, so it's unlikely we would have seen a 10-percentage-point increase in the pass rate had they sat the exams.

I think the real thing to keep in mind here is what really goes on in exams – pupils sit them, the papers are marked, and then statistical magic is done on the marks so that we end up with a similar number of passes and a similar grade distribution to previous years. It's hard-wired into exams that little is going to change year on year.

That's what we're seeing here – the exam board adjusting to fit the results in with business as usual, but they've had to compromise with those optimistic teachers trying to game the system, and as a result, excuse the pun, this year's Scottish students have done very well, especially the poor.

The students who should be angry are last year's – they've lost out relative to this year's cohort (next year's probably will too), and those poor mugs actually had to sit their exams, and didn't get four months off school!

This probably won't be the way it's spun in the media – it's easy enough to find a few students and parents with individual axes to grind, against the overall trend of the 2020 cohort doing very nicely. Thank you, teachers!

Sources

The Scottish Sun

BBC News

Sociology Statistics

How many students study A-level sociology? And what kinds of results do sociology students achieve?

How many students study A-level sociology in England?

In 2017, there were 32,269 entries to the A-level sociology exam, up from 26,321 entries in 2008.

How does this compare to A-level entries overall?

Sociology has grown in popularity relative to A-levels as a whole: total A-level entries increased less rapidly during the same period, from 760,881 in 2008 to 828,355 in 2017.
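To put numbers on that comparison, here's a quick sketch computing the two growth rates implied by the entry figures above:

```python
# Entry figures as quoted above; the percentages are derived from them.
sociology_2008, sociology_2017 = 26_321, 32_269
all_2008, all_2017 = 760_881, 828_355

soc_growth = (sociology_2017 - sociology_2008) / sociology_2008
all_growth = (all_2017 - all_2008) / all_2008

print(f"Sociology entries grew {soc_growth:.1%}")    # ~22.6%
print(f"All A-level entries grew {all_growth:.1%}")  # ~8.9%
```

So sociology entries grew at roughly two and a half times the rate of A-level entries overall.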

It’s probably worth noting that these recent trends actually have a longer history, and are shared by other ‘critical humanities subjects’ such as politics and psychology. Please see this post for a brief summary of some recent research findings from the British Sociological Association on this topic.

What kind of people study A-level sociology?

Research suggests that sociology students are significantly more awesome than students who mistakenly choose not to study sociology. There’s no actual data to back this up, but that’s what the available evidence suggests.

Girls (sorry, 'young women') are also more likely to study sociology than boys…. approximately 77% of students studying sociology in 2017 were female, and the proportion of girls to boys has actually increased over the last decade.

This means that if you’re a straight lad, and you’re relatively nice and mature, then you’ve got more chance of picking up a girlfriend in a sociology class than in pretty much any other subject!

What are my chances of getting an A* in Sociology?

Not great. 

Overall, 4.7% of students achieved an A* in A-level sociology in 2017, 18% got an A or above, 43.8% got a B or above, and 72.9% got a C or above. The pass rate was 97.5%.

How does this compare to other subjects?

It’s much harder to get an A* or an A in sociology compared to other subjects, a little bit harder to get a B, but your chances of ending up with a C-E grade are about the same as for other subjects.

Boys seem to do much worse than girls in sociology relative to other subjects, perhaps because they’re more distracted (by the girls?)

Where does this data come from?

The Joint Council for Qualifications (JCQ) publishes all A and AS level results for all subjects. It shows the results by cumulative and actual percentage per grade, broken down by gender.

Just in case you came here looking for information on 'statistics' you might like to check out my material on research methods – there's some pretty good material if you follow the links, even if I do say so myself!

Why boys aren't really catching up with girls at A-level

The 2017 A-level results revealed that boys beat girls to the top grades, with 26.6% of boys achieving an A or A* compared to 26.1% of girls. This is the first time in years that boys have done better than girls at A-level, and suggests that they may be starting to close the 'gender gap' in education.

However, such headline analysis may actually be misleading, at least according to some recent number-crunching carried out by statisticians on behalf of Radio Four's More or Less.

Firstly, girls are outperforming boys at all other levels (all other grades) at A Level.

Secondly, a lot more girls do A levels than boys, and it’s problematic to talk about how well boys are doing without taking into account the seemingly higher proportion of boys who have been judged, by virtue of their GCSE results, not to be competent to do ‘A’ levels in the first place.

Finally, if you analyse the results on a subject by subject basis, you basically find that the above data is skewed by the A level maths results.

Maths is the subject with the highest proportion of A-A* grades, with nearly 18% of grades being A or A*, and 60% of exam entries are by boys. Contrast this with English Literature, where 75% of entrants are girls and only 9% get an A*, and you can pretty much explain the 0.5 percentage point difference in top grades by these two subjects alone.
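This is a classic composition effect (Simpson's paradox). Here's a toy calculation – the entry numbers are invented, and the rates only loosely echo the figures above – showing how girls can outperform boys within every subject while boys still come out ahead overall, simply because boys are concentrated in the subject with the most top grades:

```python
# Toy illustration with made-up entry numbers: girls beat boys *within*
# both subjects, yet boys come out ahead overall because they are
# concentrated in the high-top-grade subject (maths).
subjects = {
    # subject: (boy entries, boys' top-grade rate, girl entries, girls' rate)
    "Maths":       (60_000, 0.180, 40_000, 0.190),
    "English Lit": (20_000, 0.085, 60_000, 0.090),
}

boy_tops = sum(n * r for n, r, _, _ in subjects.values())
boy_entries = sum(n for n, _, _, _ in subjects.values())
girl_tops = sum(n * r for _, _, n, r in subjects.values())
girl_entries = sum(n for _, _, n, _ in subjects.values())

print(f"Boys overall:  {boy_tops / boy_entries:.1%}")    # 15.6%
print(f"Girls overall: {girl_tops / girl_entries:.1%}")  # 13.0%
```

In this sketch the girls have the higher top-grade rate in both subjects, yet the aggregate figures show boys ahead by more than two percentage points – which is why the subject-by-subject analysis matters.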

Overall, girls got more As and A*s in 26 of the 39 A level subjects.

Maybe pulling all of these 39 subjects together and just presenting the overall percentages is not helpful?

Do bad exam results matter?

Results day tomorrow, and I predict that Social Media will be full of comments by celebrities telling students that exam results don’t matter that much because ‘I failed my exams, but I still found success’.

This happened last year during The Guardian's live chat following the release of the 2016 GCSE results. The chat even supplied a link to a list of 'famous school flops', which included the big three examples of 'success despite educational failure': Alan Sugar, Richard Branson and Simon Cowell. But I can't really see the relevance of these examples to today's youth – all they demonstrate is that white men born before 1960 had a chance of being successful if they failed their exams, which is hardly representative.

There are a few comments from younger celebrities who claim that getting bad exam results is not the end of the world, because despite their own bad results, they have managed to build successful careers.

From radio presenter Darryl Morris (no, I’d never heard of him either, although I do recognise him):

Darryl Morris – with 10 years of hobby-experience, a cheeky-chappy personality and a lot of luck, you too can be successful, even if you failed your exams!

I missed out on my desired GCSE results because I spent most of my revision time practising at the school radio station. I have no English qualifications and dropped out of a college that reluctantly accepted me to pursue a radio career – now I am a presenter and writer….You don’t need anybody’s permission to be successful – it comes from your passion, commitment and ambition.

From Ben Fogle, presenter of every outdoor program the BBC has made this century:

‘Exams left me feeling worthless and lacking in confidence. The worse I did in each test, the more pressure I felt to deliver results that never came. When I failed half my A-levels, and was rejected by my university choices, I spiralled into a depression.

The wilderness rescued me. I have been shaped by my experiences in the great outdoors. Feeling comfortable in the wild gave me the confidence to be who I am, not who others want me to be… it strengthened my character and set me back on track.’

Ben Fogle – If you're independently schooled, screamingly middle-class and very lucky, then you could also network your way into a TV presenting career, even if you fail your exams!

Finally, Jeremy Clarkson tweeted: “If your A-level results are disappointing, don’t worry. I got a C and two Us, and I’m currently on a superyacht in the Med.”

The problem with the above is that every single one of these people may well be talented and passionate about what they do, as well as hard-working, but IN ADDITION they either exploited what you might call 'alternative opportunity structures', networked their way to success, or were just plain lucky, in the sense of being in the right place at the right time:

Morris was presenting radio from a very young age, so he already had lots of experience by the time he was snapped up by the BBC at 16 – his 'alternative opportunity structure' was school and local community radio, a very niche route to success.

TBH I don’t know whether Clarkson networked himself onto Top Gear – but he went to the same fee-paying private school (Repton School) as the executive producer of the program, so even if the old-school tie wasn’t part of it, he would’ve oozed cultural and social capital because of his class background.

As for Fogle, not only was he independently schooled (so culturally well prepared for his future at the BBC, which is chock-full of the privately schooled), he was also lucky enough to be the right age for, and fit the profile of, the BBC's Castaway 2000 series, which catapulted him to fame. He's also quite charming, which no doubt helps!

So all these case studies show us is that if you want to be successful, then exam results don't matter IF you have alternative opportunity structures to exploit, AND/OR you have sufficient social and cultural capital to be able to network your way into a job.

This important qualification (excuse the pun) to the 'exam results don't matter' argument is backed up by Frances Ryan, who points out that such comments tend to come from upper-middle-class adults, for whom, as teenagers, poor exam results mattered less because their parents' wealth and their higher levels of cultural and social capital opened up other opportunities for them.

However, Ryan argues that for teenagers from poorer backgrounds, getting good exam results may well be the only realistic opportunity they have of getting into university and getting a graduate job, which, on average, will still pay more over the course of a lifetime than a non-graduate job.

A classic way in which this inequality of opportunity manifests itself is that wealthy parents are able to support their 19- to 20-year-olds in doing another year of A-levels, an access course, or an unpaid internship for a few months or a year, giving them a second chance; poorer kids don't have these options, not unless they want to take on crippling levels of debt.

So – do bad exam results matter? Judging by the analysis above, they matter more if you're from a working-class background, because education and qualifications provide the most likely pathway to social mobility….. but less so if you're from an upper-middle-class background.

Having said all of that, if you’ve woken up to the idea that a normal life is basically just a bit shit, then exam results don’t really matter at all. Trust me, jobs aren’t all that! Why not try one of the following alternatives instead:

  • Do voluntary work
  • Become an eco-anarchist
  • Become an artist
  • Go travelling
  • Go homeless
  • Become a monk
  • Live with your parents for the rest of your life.
  • Learn to live without money.

For more ideas about alternative career paths, you might like this post: alternative careers: or how to avoid working for a living.