Are ‘staff predicted grades’ an art?

17th May 2017

Staff are regularly asked to predict grades for students both at KS4 and KS5. Intervention and mid-year school improvement plans are all based around these grades. But how confident are you that the grades predicted by staff are accurate – in fact how do you define ‘accuracy’ in this context?

‘What should the predicted grades be based on?’ is a regular subject of our training, balancing the needs of objective and subjective evidence. Whatever your methodology, how can SLT be confident of the grade profile submitted by staff?

Many schools now request an Alps Monitoring report towards the end of the year based on staff predicted grades. These are not intended to identify students for intervention but are a ‘final chance’ to capture the likely outcomes of students to compare to the actual results. This data capture must be seen as a CPD exercise rather than a punitive assessment of staff’s ability to predict student outcomes.

I have had many discussions over the years about what student progress grades should be based on. Some would argue that grades should be based on a robust assessment. But what does that mean? The selection of questions for the test is subjective, both in terms of what material has been taught and the time in the year the test is set. We all know students do not revise as much for a test or mock as they do for the actual exams. Certainly, when schools or colleges request an Alps report based purely on mock grades, it looks very different from the monitoring points either side of it and from the previous year's results. Mocks are important as a key marker to help students understand where they are, but they are only one part of the picture in predicting the likely grade a student will achieve at the end of the course.

My preferred methodology is:

  1. Take the class list and any evidence (including the tracking document grades) you have.
  2. Rearrange the list in rank order from best to worst.
  3. Apply grade boundaries, informed by your results over the last three years.
  4. Consider each student individually, and adjust any grades from the above starting point accordingly.
  5. Further adjust the grade if necessary, during a discussion with the student.
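The first three steps above can be sketched in code. This is only an illustration of the rank-then-boundary idea: the student names, evidence scores, and boundary cut-offs are invented for the example, and steps four and five remain a matter of professional judgement.

```python
# A minimal sketch of the rank-order approach, with assumed data throughout.

# 1. The class list with whatever evidence you hold, converted to a single
#    score per student (e.g. points from the tracking document grades).
evidence = {
    "Student A": 78,
    "Student B": 64,
    "Student C": 71,
    "Student D": 55,
}

# 2. Rearrange the list in rank order from best to worst.
ranked = sorted(evidence, key=evidence.get, reverse=True)

# 3. Grade boundaries informed by the last three years of results, expressed
#    here as "students ranked up to N get this grade" (assumed cut-offs).
boundaries = [(1, "A"), (3, "B"), (4, "C")]

predictions = {}
for position, student in enumerate(ranked, start=1):
    for cutoff, grade in boundaries:
        if position <= cutoff:
            predictions[student] = grade
            break

# 4 and 5: adjust individual grades after considering each student
# and discussing the grade with them — judgement, not code.
print(predictions)
```

The output of the sketch is a starting profile, not a final set of grades; the individual adjustments in steps four and five are where the professional expertise comes in.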

Curriculum leaders should then quality assure the grades. Are these realistic across the group? Are some staff more optimistic or pessimistic than others? How do shared teaching groups compare?


There seem to be two schools of thought on timing:

1) When students are going on study leave – i.e. all the content has been covered, final internal assessments completed alongside lots of past paper exercises.

2) After the students have sat their exams – often called ‘Post Exam Predictions’ (PEPs). By this point staff have seen the paper, have talked to some of the students, have a feel for how much support the students asked for during the final weeks, and no further evidence is available.

So, once we have a set of these final predicted grades and the student actual results, how do we ‘measure’ the accuracy? Is it the average difference in grades? If so, what level of difference is acceptable?

One option is to ask each teacher to compare their two Alps subject pages (remember this is a CPD exercise). A score of 0.98 on the final monitoring point subject page compared to 0.93 on the results subject page gives a difference of 0.05, which means that the teacher has, on average, overestimated each student by a quarter of a grade (0.2 on any Alps A level subject thermometer is equivalent to one grade, so 0.05 is a quarter of a grade).
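The arithmetic behind that comparison can be made explicit. The 0.98 and 0.93 scores come from the example above, and the 0.2-per-grade conversion is the Alps A level thermometer convention already stated in the text.

```python
# Worked example of the monitoring-vs-results comparison described above.
monitoring_score = 0.98  # final monitoring point subject page
results_score = 0.93     # results subject page
GRADE_WIDTH = 0.2        # 0.2 on an Alps A level thermometer = one grade

# Positive difference means the predictions were optimistic.
difference = round(monitoring_score - results_score, 2)  # 0.05
grades_over = difference / GRADE_WIDTH                   # 0.25 of a grade

print(f"Average overestimate per student: {grades_over} of a grade")
```

The same calculation works in reverse: a results score above the monitoring score would give a negative difference, indicating pessimistic predictions.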

From this, staff can look at individual student names and decide which were possibly harder to predict – after all, they are teenagers, who do behave unpredictably at times!

The aim is to ask: how can I get better at predicting outcomes for the next cohort of students?

We will never get them all correct, but can we improve our own techniques for assessing the likely outcomes of our students? This is the challenge – knowing which students to cajole and which to give space to because they are best ‘left alone’ – and it is a key part of all of our professional development.


Need more information?

If you would like any further information, please contact one of our expert advisers.
