How clear is your crystal ball?
The accuracy of teachers’ predicted grades matters most to the student. Unless teachers use their professional judgement to help students understand where they currently are and, more importantly, what they need to do to get to where they want to be, students will not be able to fulfil their ambitions in their post-16 studies.
But what do we mean by accurate? And how early in a student’s academic career should staff be asked to ‘predict’ results? Since the abolition of Levels for monitoring progress at KS3, schools have spent an extraordinary amount of time devising flight paths and alternative systems. Considering that, even in spring 2017, nobody has any idea how Ofqual will ‘manage’ raw paper results into a comparable set of 9-1 grades in three months’ time, how can schools possibly map out a flight path for a Year 7 student sitting GCSEs in five years’ time? There is about as much certainty there as there is about how many Education Secretaries we will have over the same period!
So, putting students at the centre of the process, how accurate should we be? There is a clear crunch point around a grade 5 in maths and English (will a 4 for current Year 11s be considered ‘second class’ by the time they reach 18?), but do we need to penalise staff if a 2 becomes a 3? Let’s stay focused on the student: intervene where we think they can do better, make sure assessments are robust and fair and, above all, inspire and motivate the young people in our care for the final push over the next three months.
Staff performance management at both KS4 and KS5 this summer will need a greater tolerance, as there are many unknowns with reformed subjects. But this doesn’t mean we should abandon clear and objective scrutiny of the overall progress students make in the summer. So what is an acceptable level of accuracy for staff predicted grades compared with final results? Is it 95% or 80% – per class, or per teacher? Whatever the threshold, this methodology encourages teachers to be unambitious with student target grades.
Let us change the focus: let us devise Minimum Expected Grades (MEGs), based on a national benchmark, and encourage staff (and students) to review these regularly, personalise them and revise them upwards, safe in the knowledge that any staff performance management will be based on the original benchmark. The published national MEGs are a starting point for students, and they soon become less relevant as students work towards their personalised subject goals. Because staff performance management is kept separate from student targets and anchored to the original benchmark, staff are free – and encouraged – to raise student aspirations and target grades as the course progresses.
If we do our best by our students, the results and performance tables will look after themselves – after all, wasn’t the reason we came into teaching to inspire and watch young people grow?