For the past quarter century I have watched with interest the annual ritual of the examination results season. Politicians questioned about the outcomes tend to adopt one of a small number of basic approaches. All start by congratulating candidates on their hard work and the results they have achieved. They then either express concern about the level of the outcomes or complain that too many have achieved the top grades, in either case harking back to some previous ‘golden age’. Either way, the present is always seen as in need of reform to meet the standards of the past. In recent years the past has to some extent been replaced by reference to other education systems, with our system often described as ‘falling behind’ the best in the world.
One by-product of this political imperative for ‘improvement’, in whatever guise it takes, is a desire among some politicians to re-introduce a norm referencing system. Under such a system, a set proportion of entrants to an exam receive the top grade each year, and most candidates are clustered around the middle grades. At its crudest, half are above average and half below. In practice, rather more than half usually fall below, as it is not normally possible to control exactly for the numbers of those who are ill on the day or fail to turn up for some other reason.
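To make the mechanics concrete, here is a minimal sketch of norm referencing (an illustration only, not any exam board’s actual procedure): candidates are ranked by raw score, and fixed shares of the cohort are assigned to each grade from the top down, so the grade distribution comes out the same every year by construction.

```python
def norm_reference(scores, shares):
    """Assign grades so that fixed shares of the cohort receive each grade.

    scores: raw marks, one per candidate.
    shares: ordered list of (grade, proportion) pairs from the top grade
            down; proportions should sum to 1.
    Returns a list of grades aligned with the input scores.
    """
    n = len(scores)
    order = sorted(range(n), key=lambda i: -scores[i])  # best candidate first
    grades = [None] * n
    start, cum = 0, 0.0
    for k, (grade, share) in enumerate(shares):
        cum += share
        # The last grade absorbs everyone remaining, so rounding error
        # in the cumulative shares cannot leave a candidate ungraded.
        end = n if k == len(shares) - 1 else round(cum * n)
        for i in order[start:end]:
            grades[i] = grade
        start = end
    return grades
```

Note that under this scheme a candidate’s grade depends on how everyone else performed that year, not on reaching any absolute standard.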
The alternative system used in recent years is based upon the achievement of candidates against expected outcomes. Under this system, familiar to most adults through the driving test, anyone can pass if they achieve the appropriate level. So, theoretically, the top grade is open to all. However, by setting the standard of the questions, examiners can make that outcome unlikely. Indeed, standards can be raised by making the test harder, as has happened with the driving test through the addition of the theory test and a wider range of practical manoeuvres for challenging road conditions. Such changes make comparison between years difficult, if not impossible.
In reality, only in English and Mathematics are any forms of comparison really possible as it is only these two subjects that are studied by all pupils. In other subjects, the decisions about who studies them, and who is entered for an examination, can influence the outcomes.
Take two GCSE subjects for England in the provisional results for 2013. The cumulative outcomes were:
Now decide which set of results is for Physics and which for Media Studies. To help you, there were 152,152 entries in subject A, and 55,005 in subject B. Another possible clue is that there is probably more of a shortage of Physics teachers than of Media Studies teachers. So, that’s clear then: subject A is Media Studies, and subject B is Physics. Well no, actually it is the other way around. 90% of entries in Physics received an A*-C grade, compared with just two thirds in Media Studies. It is worth reflecting that under a norm referencing system far fewer would have received the top grade in Physics, but more would probably have done so in Media Studies.
Do we now make Physics GCSE harder, even if it means fewer study it to GCSE, or do we make Media Studies easier, or is there a good reason why the outcomes are so different? I don’t know the answer to that question. Despite there being nearly three times as many entrants in Physics as in Media Studies, perhaps only those likely to succeed are entered for the subject, whereas anyone studying Media Studies takes the examination. That may explain why only 0.1% of those who took Physics received an unclassified grade, compared with 1.3% of the entrants in Media Studies.
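As a rough back-of-envelope check on the figures quoted above (entry counts as given; the percentages are rounded in the text, so the resulting candidate counts are approximate):

```python
physics_entries = 152152  # GCSE Physics entries, provisional 2013 figures
media_entries = 55005     # GCSE Media Studies entries

# Entry ratio: Physics has nearly three times as many entrants
ratio = physics_entries / media_entries          # ≈ 2.77

# Unclassified grades: 0.1% of Physics entries vs 1.3% of Media Studies
physics_unclassified = physics_entries * 0.001   # ≈ 152 candidates
media_unclassified = media_entries * 0.013       # ≈ 715 candidates
```

So despite the far smaller entry, Media Studies produced several times as many unclassified results in absolute terms, which is consistent with the entry-policy explanation suggested above.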
In the end, an examination system has to be fit for purpose. What that purpose is must be clear to all. With the participation age for education now increasing to 18 over the next few years, it might be worthwhile asking what purpose is served by an expensive external examination at 16.