NEAs at A level: What’s the big secret?

Enough time has passed since August for me to write this post objectively.

I am concerned at the lack of transparency around the moderation of A level NEAs. I won’t name my board – I assume all boards are much the same – but I think this is a serious issue, and one which is potentially harming our students.

The Background:

My recent cohort of Upper 6th students included some very strong candidates hoping for A*s to secure their places at top universities. It also had a typical spread of students of less intellectual weight for whom marks were equally important, if not so headline-grabbing. On results day we found their NEA marks reduced by 25% across the cohort. The subsequent hit on grades was severe, with one student who scored a whopping 99% in the exams failing to get the A* he had so obviously earned. Ditto one at 97%.

The moderator’s report lacked any detail, offering only generalised comments along the ‘several essays…’, ‘some candidates…’ lines. No help at all. No sense of where or why individual papers had been marked down. Nothing.

We appealed for a review and the marks went up – not fully, but the markdown was now only around 8% at the top end. Two A*s were restored. The new moderator was much better: candidate numbers were cited to evidence issues, with more precise comment over two sides. But still nothing specific. Nothing we could look at and say we needed to address.

This is my issue.

Let’s assume this happens to a department. We are an honest group of teachers who marked as we saw fit – fair, maybe slightly positive in outlook – certainly rewarding what was there rather than penalising what was not. Our wish, this year and moving forward, is therefore to learn from this and ensure it does not happen again. But the utter lack of transparency in this 20% of the examination denies us the chance to do so.

The general issues were not unknown to us: this portfolio was long, this one lacked good AO5, this one had no bibliography despite numerous attempts to get the student to write one. We knew this and felt we had considered these issues in our marking. Our students get one draft and one final copy, with sections seen between the two and no written feedback. Once the final version is in, we do not give an open-ended row of further chances to alter and improve the work – I believe this maintains the spirit of the exam board’s rules whilst giving the student as much support as we can. It will leave occasional rough edges, but rarely so as to detract from the line of argument or to radically alter the quality of the essay. I checked with the board, and the English adviser agreed with me, confirming the approach of marking to a holistic overview to define the band, then moving the essay around within the band depending on the AO coverage. We thought we had done this.

I can’t explain how stressed and personally unsettled this made me – let alone how the students must have felt. I am desperate not to let it happen again.


The portfolio contains two essays, marked against different AOs, by different teachers, and using a wide range of different texts. They do not relate neatly to each other and should not be treated as though they do. Yet the board applies an algorithm to work out a blanket deduction based on the marking of a sample. Thus no student receives considered marking or remarking, but a one-size-fits-all mark, despite the massive variation in content and in original marker.
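To make the objection concrete, here is a minimal sketch of how such a sample-based blanket deduction might work. The board does not publish its actual formula; the function names, the linear-scaling assumption and all of the marks below are mine, purely for illustration.

```python
# Hypothetical sketch of a sample-based "blanket deduction".
# ASSUMPTION: the moderator re-marks only a sample, derives a single
# scaling factor, and applies it to every candidate in the centre.
# The board's real algorithm is not published; this is illustrative only.

def blanket_adjust(centre_marks, sample_centre, sample_moderator):
    """Scale every candidate's mark by the ratio observed in the sample."""
    factor = sum(sample_moderator) / sum(sample_centre)  # e.g. 0.75 = a 25% cut
    return [round(mark * factor) for mark in centre_marks]

# Invented marks for a sample of three portfolios the moderator re-marked:
sample_centre = [48, 40, 32]       # the centre's marks
sample_moderator = [36, 30, 24]    # the moderator's marks -> factor of 0.75

# The same factor is then applied to everyone, sampled or not:
cohort = [50, 45, 38, 30, 22]
print(blanket_adjust(cohort, sample_centre, sample_moderator))
```

The point of the sketch is that the adjustment is a property of the sample, not of any individual essay: a candidate whose work the moderator never read still loses a quarter of their mark.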

We receive our essays back from the moderator without written comment. In the examination, the examiner marks the essay and we can see on the recalled script an outline of where they think the marks have been awarded – we can begin to make deductions about where the student has erred and succeeded. On the NEA scripts, nothing at all. This means my scripts cannot be used in any way as teaching aids this year – I simply do not know where the marks have been taken from in any individual essay. Did the four marks go from one essay, or was it three from one and one from the other? I do not know, and thus cannot accurately present an essay to a class with confidence when discussing the marks awarded.

This is ridiculous.

Even after the appeal, which partially sustained my contention that the original moderator was far too savage in their treatment of the work, I still do not know whence the penalties came. I need to be able to quantify the specific issues relating to specific elements of the writing in individual essays, yet the system does not let me do this. Instead it offers me generalised comment, some specific criticism and a new mark, but no breakdown or annotations. Utterly useless for any teaching purpose.

What is the big secret? Why do moderators not ‘mark’ the essays to show where they disagree with the original marking? That is what we need. Without it, the moderator is offering a critique of our marking without any clear evidence for their decisions, and without any sense of responsibility when in error. Our first moderator cost my students two A* grades. I have no idea why. I also have no idea why the second moderator disagreed with that assessment of our marking and came back towards us so strongly. This is simply unacceptable, in my view, if we are to assist our next cohort in their preparation.

I am aware that marks can go up as well as down – I have done the dance of the NEA since 2005 for AQA, WJEC, Edexcel, OCR and the IBDP. I get it. If my marks have to go down, I accept it (there is a first time for everything). What is so hard to stomach is the lack of transparency. Even after the appeal I am no clearer precisely where we (I) overmarked. This means I am likely to do so again – not because I am cheating, but because I genuinely do not know why we were penalised, or at what point in the essay.

Please, let’s have some clarity. Let us see what the process is rewarding and penalising. Tell moderators to annotate the scripts and, in the event of a change, to re-mark the whole set – no A level student should have their destiny decided by a one-size-fits-all algorithm.