Each year we prepare our students for the public examinations which will shape their futures. Each year, it seems, we feel let down by the exam boards which create the mechanisms by which our children are tested.
Exams are not a fair and level playing field, and whilst we have a choice of exam boards and of set texts, they never will be. If this seems harsh, let’s consider how this works. We all choose the board ‘best suited to our students’ – we are all aware of the nuances of the questions, the balance of analysis to comment and so on. Once the board is chosen, we then choose our set texts. Of Mice and Men was hugely popular pre-Gove largely because it is so short. Brevity does not equate with a lack of depth, however, and genuinely great work could be undertaken with this text. When the new model 19th Century texts were introduced, many centres jumped towards Jekyll and Hyde, again on account of its brevity – yet its dense and complex prose presented a real barrier once the teaching began. All centres need to balance the time available for teaching, the relative strength of the cohort, the availability of texts and the specialisms of the teachers against the level of intellectual challenge sought. There have always been short and long texts. Years ago I could never understand why centres taught Pride and Prejudice when they could have taught Hardy short stories, for example.
Yet the real issue is not size, but the range of questions asked each year. It seems obvious that, in an ideal world, the questions would all be of the same complexity and offer the same chance of intellectual engagement across the range of texts available. After all, the same mark scheme applies to each text. Thus, if a question is nuanced in such a way as to make it harder to hit the upper levels of the mark scheme, there is inherent unfairness even within a single exam board’s version of an examination. This issue was neatly clarified for me by Daisy Christodoulou in her session on Improving Assessment at ResearchEd London 2017. The simple proof, using fractions, was enough to convince me that the generalised descriptors we all know and love are inherently unreliable when considered across a range of texts and a range of questions, even within a single examination.
The issue arose again this year in two examinations of which I am aware: OCR English Literature A level Paper 1 and AQA GCSE English Literature Paper 1.
The OCR Shakespeare questions cover a good range of texts: Hamlet, Richard III, The Tempest, Twelfth Night, Coriolanus and Measure for Measure. How hard it must be to devise testing questions for each which reflect near enough the same level of difficulty, given the variety in length and content of each play. Already there is the possibility of disadvantage in the system simply because of the chosen text and the question asked. Yes, Hamlet is a monster of a text, but the pay-off might be the sheer weight of material written about it which can be used in the teaching. How does Coriolanus fare in this area, I wonder? Probably rather under-resourced in comparison.
The issue here lies in the first question on the paper, the close analysis of a passage from the play. There is no problem with five of the passages – though the use of the term ‘Clown’ rather than ‘Feste’ might have unsettled some students, depending on their editions. The difficulty lies with the choice of Act 4 Scene 4 for the Hamlet extract. This has nothing to do with the inherent quality of the passage, but rather with the fact that the passage is not printed in several editions of the play which might be used in schools – notably the Oxford and the RSC editions. The scene is often cut in performance, mainly to give the actor playing Hamlet a well-earned rest, and is missing from several filmed versions. That in itself is no reason not to use it, I would say.
The issue is that of the missing text. OCR have chosen to use a passage to which a number of students will never have had access. There are no recommended editions, and schools will tend to use the copies on the shelves (to save the budget) or those favoured by the teachers for a number of reasons. Thus some students were immediately disadvantaged by what I assume is an oversight: a failure to check, or to consider, the impact of editorial decisions on the students sitting the exam. No doubt many coped well and used their knowledge and understanding to write sensible analysis of the ‘unseen’ passage, but the degree to which the paper unsettled them and affected their writing across the whole examination cannot be quantified.
A Twitter poll which has returned a relatively small response – around 35 teachers – currently suggests that 51% of respondents found their students unable to access the set scene prior to the examination. Even if only one centre has been thus disadvantaged, it is not good enough, and I would suggest the number of students affected is far greater than this. The issue affects a minority of the whole – only those reading Hamlet this year, and only a percentage of those – yet they are disadvantaged. Hamlet is often chosen by schools with a strong and highly academic cohort. Should they miss their A* or their A grade, and should a university place be lost as a result, how will the board respond? It is a nonsense to pretend that the examination has been a fair test for all.
The AQA exam presented another interesting case in which some students may well have been disadvantaged relative to others sitting the same examination. In this exam, students answer one question on their novel of choice: they are required to write in detail about an extract from the novel and then about the novel as a whole. All well and good. In this case the issue is not with the extract itself, but with the description of it provided to assist with the thinking process.
Here AQA referred to the passage as follows: “at this point in the novel, the monster has killed Frankenstein.” This confused many students, and many teachers once the comment became known. AQA have suggested that this is a plausible reading, though some will not agree with it, or words to that effect. In other words, they have prefaced the passage with a highly subjective comment which will have unsettled many students and therefore possibly affected their responses. How irresponsible. Again Edutwitter has buzzed, and a poll with over 200 respondents suggests that 90% do not recognise this as an accurate comment on the novel at all. Many are discussing this on Twitter, exploring the metaphorical sense in which the statement could be viewed as accurate, or commenting that it is an A level discussion, perhaps. What is obvious is that in this carelessly assertive statement – known by the board to be open to disagreement – the board have set a question which might unsettle many students who have studied the text. It is hard to see why they did this. Perhaps an examiner is working out a pet theory; perhaps it is an accident. What is clear is that a public exam should never unsettle a student in this way. AQA offer a choice of seven texts in this examination, and the effect of this statement is potentially to disadvantage those studying Frankenstein relative to their peers. Not only is the text longer than some, and in its epistolary framing rather more complex, but the board have now sent down this curveball as well. I wonder how many sat looking at this thinking ‘Mr Chips never told us that’ or ‘WTF, I have misunderstood the whole text…’ before trying to respond in a state of anxiety. Rather more, I imagine, than those who thought ‘Ah! What an interesting reading of the text. What an exciting challenge to my intellectual perception of Mary Shelley’s creation.’
So my point, with no axe to grind (we use the Arden edition of Hamlet and do not enter students for AQA GCSE), is this: if exams can so clearly disadvantage students, through the vagaries of wording or of text choice, against their peers reading different texts in the same exam – examined by the same board against the same mark scheme – what hope is there of fair examinations across boards, or of reliable data collated from exams of previous years?
It seems that exam boards must work harder to ensure that this potential disadvantage is removed. It will be interesting to see what responses they make to the inevitable appeals against marking later this summer.
Regarding the OCR choice of passage from Hamlet, which was printed in some editions of the play but not in others: this became a test not across the board but for SOME students (and there lies the inequality) of how they coped with the unseen – and, moreover, with the absolutely unexpected. Ironically, those students who knew the text of the play best will have been dismayed to find they could not place it. And the mark scheme included points for context.
For some, the dismay turned to absolute panic and upset. One spent precious time writing a note to the examiner. The choice of passage, and therefore the question itself, was unfair, disadvantaging some students simply because of the well-meaning and experienced choices made by their English departments.
Meanwhile the effects of this action have continued. Some students went into the rest of their exams feeling undermined, fearing that the high grades needed for universities, including Oxbridge, had now been lost.
I feel for you and I would be interested in hearing the outcome of any complaints in this area. It is ridiculous.