Welcome to the Brighter Thinking Pod from Cambridge – the podcast that brings you advice and conversation from authors, teachers and academics. Today, we’re going to be taking a closer look at predicted grades – dispelling misconceptions and giving advice for how your school can create an effective predicted grades policy.
Our host is Laura Kahwati, Education Manager for Thought Leadership at Cambridge. She is joined by two special guests. Simon Child is the Head of Assessment Training at the Cambridge Assessment Network and Kevin Ebenezer is Head of Global Recognition at Cambridge.
As well as the audio below, which you can play from the page, you can listen to this and other episodes by going to the website, Spotify, or Apple Podcasts.
Please like, subscribe and review if you like what you hear; it really helps us to reach more teachers with our podcast.
Ep 46: How Schools Can Create an Effective Predicted Grades Policy
Laura Kahwati: Hello. Welcome to the Brighter Thinking Pod from the International Education Group of Cambridge University Press and Assessment. I’m Laura Kahwati, Education Manager for Thought Leadership at Cambridge International, and I’ll be your host today. We created our Brighter Thinking Pod to support teachers around the world. Each episode brings you helpful advice and interesting conversation from some of our authors, teachers, and academics.
Today, we’re going to be taking a closer look at predicted grades, dispelling misconceptions and giving advice for how your school can create an effective predicted grades policy.
For this episode, we are joined by two special guests. Returning to the podcast, we have Simon Child, who featured in the last series on our assessment episodes. Simon is the Head of Assessment Training at the Cambridge Assessment Network, and Co-Course Director for the Postgraduate Advanced Certificate in Educational Assessment at the University of Cambridge. He’s the author of the book, The What, Why, and How of Assessment: A Guide for Teachers and School Leaders. Hello, Simon.
Simon Child: Hi, Laura.
Laura Kahwati: He is joined by Kevin Ebenezer. Kevin is head of Global Recognition at Cambridge. In his role, he collaborates extensively with ministries, universities, governments, and various stakeholders to secure recognition for Cambridge qualifications. So, hello Kevin.
Kevin Ebenezer: Hi, Laura.
Laura Kahwati: Remember, all the links and the info that we discuss are available in the show notes for your ease. And if you want to get your voice heard on the show, you can get in touch on X (Twitter) or Instagram at CambridgeINT. We begin each show with an icebreaker to help our listeners get to know our guests more. For this episode, keeping with our predictions theme, our question is please tell us about your worst ever prediction. And Simon, if it’s okay, I’d like to start with you.
What is your worst ever prediction?
Simon Child: Thanks, Laura. I'd like to caveat this by saying I was very young at the time when I made this bad prediction. Bad predictions often emerge from some form of naivety or blind faith in something, and mine is no different, to be honest. It's football-based.
So when I was very young, I predicted that my football team, Blackburn Rovers, who had just won the league in 1995, fired to victory by Alan Shearer amongst some other players, would go on to retain their title for the next five years. Not only did that spectacularly fail to happen, the team were actually relegated from the Premier League within that five-year period.
Laura Kahwati: Oh, no!
Simon Child: So it went spectacularly badly, my prediction on that front.
Laura Kahwati: My goodness. Well, I’m sorry to hear that, but thank you for such a confessional icebreaker answer.
Simon Child: Thank you.
Laura Kahwati: Kevin, please, can we move on to you? Could you tell us about your worst ever prediction?
Kevin Ebenezer: Well, Simon spoke about football. The other classic prediction is the weather, and mine is about the weather. Again, blind faith, because I trusted the BBC more than I trusted the Belgian news. I was living in Belgium at the time, at a conference, staying in a hotel. I listened to the news in the morning and saw the weather forecast. The Belgian news predicted hurricanes; the BBC predicted strong winds. So I chose to drive home, and that night there was a hurricane. I had to stop the car and sleep in a layby on the road.
Simon Child: Oh, no.
Kevin Ebenezer: So, yes, bad prediction from me!
Simon Child: I think it was the worst storm in UK history at the time, wasn’t it?
Kevin Ebenezer: Yeah, I think it was. And yeah, the weatherman was Michael Fish, and he became an iconic figure.
Simon Child: For making bad predictions. Yeah.
Kevin Ebenezer: For making bad predictions. Yes!
Laura Kahwati: That was really unfortunately linked to the word icebreaker, actually. I’m so sorry to hear that. That’s quite a disastrous bad prediction. However, I think you can forgive yourself that it was sort of indirect. I mean, you were following a trusted source.
Kevin Ebenezer: Yeah, that’s true.
Laura Kahwati: But I guess it doesn’t always go right. Do you keep pillows and a blanket in your car from now on?
Kevin Ebenezer: No, I don’t trust the BBC!
Laura Kahwati: Oh, dear. Thank you both very much for those icebreaker answers. Moving on to more serious business of predicted grades, please can I start with you, Kevin, with my first question? Can you explain to our audiences what we mean when we talk about predicted grades?
What do we mean when we talk about ‘predicted grades’?
Kevin Ebenezer: Well, from a recognition point of view, predicted grades refer to the estimated academic performance of a student in a particular subject, as forecast by teachers, departments, or schools. These grades are used for various purposes, mainly for admission into universities, but also for scholarship applications, or even to guide students on their educational journeys.
Because if you have low predicted grades, you may not want to apply to a certain university where you don't have a chance of getting in. So it is really important that we try to get the predictions right.
Laura Kahwati: So you are always thinking about that next step beyond predicted grades and how much they can influence those steps.
Kevin Ebenezer: Yeah, and they influence the outcomes for those students as well.
Simon Child: Yeah, I agree with Kevin's definition. It's sometimes called a predicted grade, sometimes a forecast grade, and there are lots of different reasons why they are useful, or potentially useful, to students, teachers, universities, and so on. In some ways it's often a requirement as part of the application process, as Kevin mentioned, for a student applying for further learning.
But I also think that there’s the idea of the information is requested by boards such as Cambridge for the purpose of maybe dealing with unusual circumstances for students who are taking examinations. Maybe they’re ill on a particular day. Predicted grades can help with that support that Cambridge offers. But also it can be used in some cases as a kind of motivation or inspiration for the student as well.
So thinking about how predicted grades function in that respect is something that hopefully we'll explore later on in the podcast as well. Largely, that's part of the idea of reflecting on a student's current progress. So that can often be a reason too, for example school reporting or accountability, seeing the progression of a student between one stage of learning and another.
Laura Kahwati: That's really interesting, because obviously there's the importance of accuracy around what kind of progress we might assume a child to make, but there's also the more emotional and psychological side of things, like the idea that a student might be inspired to do something, like Kevin said, in terms of the next step for the future of their education.
Simon Child: Yeah, sometimes the predicted grade is actually presented to the student as an aspirational grade, and that's an interesting label to give it. It's the idea that if you do well, if you work hard, you can achieve this thing. Students who are what we would call extrinsically motivated need that push and that idea of what they could achieve, and predicted grades do serve that purpose in many cases.
Laura Kahwati: So some children may be intrinsically motivated, whereas others might actually have such a difference made to them by having that external aspiration.
Simon Child: Exactly. Yeah.
Laura Kahwati: Can I ask you both, and I will start with you for this one, Simon, how important is it to make sure that predicted grades are fair and equal, given all of those things that we’ve just talked about that have to be taken into account? In other words, why not just predict everyone an A star?
Simon Child: Well, in some ways, the importance of the predicted grades generally depends on the reasons for why they’re being generated in the first place.
So when I was a student, way back in 2004, when I was doing my A Levels, we submitted our predicted grades to UCAS, and that was actually part of the application process. So that was giving the universities information about where I sat within the cohort in some way. And the high-stakes nature of those predicted grades meant that the grades themselves were often overestimated.
And some research from Cambridge suggests that up to 65% of predicted grades made by teachers are an overestimation of the student's actual performance at the end of their studies. And this might not actually be too much of a problem in terms of fairness and so on, as long as we're aware of that particular trend of why teachers may be over-predicting their students.
But what’s perhaps more problematic is when we delve into how those predicted grades are made, and that’s where the fairness point comes in a little bit.
So Ofqual did some research and they found… it was around 2021 that they found a bit of evidence of biases in terms of gender, in terms of people with special educational needs, or for people from disadvantaged groups. So if it’s a high-stakes type prediction, like A Level, those biases may actually be disadvantaging some subsets of students. So making sure that they’re as fair as possible and that the processes in place are as fair as they can be is really important for those different groups.
Laura Kahwati: Absolutely, yes. No groups should be treated any differently to any others. And I presume that highlights even more why it’s important for schools to know who the groups are that might be disadvantaged so that you can think ahead before you get too late in the process of predicting their grades.
Simon Child: Exactly. Yeah.
Laura Kahwati: Kevin?
Kevin Ebenezer: Yeah, I mean, Simon, just to let you know, UCAS still uses predicted grades, and universities make offers on them. These are conditional offers, and if a student doesn't make those grades, in some cases they will not get that place. This is really important, especially for our students overseas, because they need to secure the place so they can work on their visas, et cetera.
So going into clearing at a later stage makes it a lot harder for students. Getting those predictions right is very important, especially for students coming from overseas. What we also find is that if students' grades are over-predicted on some courses, then the reputation of Cambridge students may suffer with universities, because if everybody over-predicts, universities will say, "Oh, this is from Cambridge," and that can actually damage the credibility of Cambridge grades. I think fairness in predicted grades is essential for fostering a competitive educational environment. And if everyone were predicted an A star, it would diminish the credibility of that A star.
Laura Kahwati: I see. So actually, the credibility of a predicted grade matters well beyond the process of predicting itself, not just in the lead-up to it.
Kevin Ebenezer: Yeah.
Laura Kahwati: Simon, you just picked up on bias, and that really highlights the importance of getting predicted grades right and making them credible. What information can teachers use in order to make sure that their predicted grades and the processes that lead up to them are as robust as possible?
What information can teachers use in order to make accurate predicted grades?
Simon Child: In most cases, Laura, there’s a mix of what we would call data-based and professional judgments in making predicted grades for students. So the first general area would be some kind of statistical information that we’ve generated, and that kind of data may come from things like benchmarking-type assessments or more recent assessments administered by teachers.
And one important thing to consider when making predicted grades using that kind of information is the weighting of these different sources of statistical evidence that may be produced or used by schools. So for example, you may be tempted to use a large-scale benchmarking assessment such as the ones offered by the Cambridge Centre for Evaluation and Monitoring to make a prediction for a student. So let’s call them student X, just for instance. Those types of benchmarking assessments are very effective at giving a sense of how students like student X are likely to perform in a later assessment, but there are individual differences that need to be considered specific to that particular student.
You could also use assessment performance. So for example, previous grades in GCSE or other formalised assessments could be useful information, and generally, giving more weight to recent performances and to student progression is key when using that kind of information. For example, some research from the research team in Cambridge found that most teachers use some form of mock examination to confirm predictions that may be coming out of assessment performance data.
And then finally, it’s the kind of sense of making some kind of in-class judgment. So for example, as a teacher, you may be getting a perception or an understanding of the motivation of students and their interest in the subject and area that they’re studying, and also the day-to-day quality of their work. So those are really the three main areas that I can think of that teachers can use to make predicted grades.
Laura Kahwati: It really reminds us how all of those assessment processes that go on all the time at various school stages are so important, whether it be mock examinations or class tests. And not only that, but that teachers are consistent as well. Kevin, you have a lot of correspondence with schools globally in this area. What have you found based on what Simon just said?
Kevin Ebenezer: Yeah, I think Simon listed quite a few things that I see from schools. Mainly schools will use class performance, mock exam results, homework, and IGCSE results for A Levels. But there's also that more holistic idea: when they know the student, they can actually look at things like the student's work ethic and their engagement. All these sorts of things will contribute to the teacher making that professional judgment to provide a good predicted grade for the student.
Simon Child: And just to add to that, Kevin, the research indicates that teachers are often thinking about students and how they'd perform on their best day. That's an interesting part of making predicted grades, and it was part of the explanation for why they're often over-predicted: we're optimistic about our students and how they will perform in the moment. So all that information about how they do in things like classroom-based assessments is useful in giving a sense of how they would perform on any given day, which is really interesting.
Laura Kahwati: It’s such a difficult call to make, I guess, unless you have enough information there to be really confident, because otherwise you’re having to weigh up so much in terms of what is accurate, what is aspirational. Of course, we all want to be optimistic, but we also have to be reliable as well. So such a careful balance.
Kevin Ebenezer: Yes. It’s not an exact science. It is professional judgment.
Laura Kahwati: Yes. Yeah. And we need as much information as we can to fuel that professional judgment too. When we are trying to navigate things like this, what are some of the common pitfalls that we do come across? I know you’ve mentioned some already, Simon. Is there anything else you’ve noticed that can be a problem in this process?
What are the common pitfalls when predicting student grades?
Simon Child: Yes, there’s a few common pitfalls that I thought about. So the first one is really the idea of not actually being clear as a school or as an individual teacher as to why the predicted grades are actually being created in the first place and what their purpose may be.
So we've talked a lot about it being related to some later high-stakes assessment, and to motivational purposes and so on. But defining that purpose inefficiently can actually be an issue, because it takes time as a teacher to build up the evidence and the confidence in the predicted grade, and there's an opportunity cost to that in terms of other teaching that could be done. So that's something to be aware of.
I'd say as well, there's the idea of safeguards and specific processes to try and remove the potential for biases. So having some kind of standardization or moderation policy or process within a school is a way of guarding against those particular issues. And finally, there's the idea of class-to-class variability in the evidence used. Again, you could have something at a school level to determine which are the best sources of evidence for a teacher to base their judgments on, and which ones are most reliable, perhaps externally sourced, and so on. That can actually make for a much stronger prediction and reduce that class-to-class variability in the quality of the predictions.
Laura Kahwati: So really, in order to help that professional judgment of everybody involved that Kevin talked about, we need a school-wide policy and culture around this so that there’s consistency.
Kevin Ebenezer: Yeah, I agree with that. I think sometimes limited data is also used, so a school may just predict it on one assessment, like a mock exam. And in those cases, that’s going to be a major pitfall because it all depends on the student on that day and their performance.
Laura Kahwati: With all the opportunity that we have for summative assessments throughout the years that a student’s in education, we can’t afford to just rely on one single mock examination.
Kevin Ebenezer: Yeah.
Laura Kahwati: So in terms of creating a clear and effective policy for predicted grades in schools, Kevin, how can schools go about this?
How can schools create a clear and effective policy for predicting grades?
Kevin Ebenezer: Well, I think, again, they have to create that clear and effective policy, and there's a lot of guidance we have provided on that. But they also have to regularly review their processes. For me, that is really important, because what they can do is collect data longitudinally and actually ask, "Okay, how accurate are our predictions? Are we getting them right?" That, for me, would be the best way for schools to look at it and reflect on their policy.
Simon Child: Kevin, I agree. And just to add to that a little bit, that kind of post-hoc or post-prediction analysis is, I think, a really important part of the overall policy structure within a school. Even in cases where the predicted grade was aspirational, intended to motivate the student to do as well as they can, if that really is the intention of the predicted grades, then you should see some success in terms of how the predicted grade correlates with the actual performance of the students. So that's a really important thing to consider.
But in terms of the things that I would also think about in terms of creating an effective policy, as I mentioned before, the idea of thinking deeply about the opportunity cost for teacher time and developing the data and the assessments themselves for making good predicted grades. It may be in some circumstances that they’re not necessarily required and time could be spent on something else.
So sometimes that analysis needs to be done as part of creating the policy. But also, whilst you are making the predicted grades, generating the different sources of evidence, and analysing them, there's the idea of trying to create and maintain a standardized approach within departments and across schools as well.
So in many ways, those approaches may resemble the sorts of things you'd observe in a standardization meeting: getting groups of teachers together to look at the different sources of evidence and the different grades. There's also almost a kind of inter-teacher reliability element, where one teacher makes the predicted grade for a student, another teacher makes a prediction for the same student, and you see whether the two agree. That can give an indication of any potential biases or disagreements, and addressing those can actually strengthen the overall processes in place as part of the school's policy.
Laura Kahwati: It sounds to me like the best thing to do, as with many things in education, is to make sure that when you’ve got something in place that you do always come back and re-evaluate it before you go round again. And also, like you say, having a culture of dialogue amongst teachers and amongst staff members, I think not only does that help the pupils who are receiving the predicted grades, but it also helps the teachers with their judgment.
Kevin Ebenezer: I think another thing we have to look at is transparency. So transparency in the policy, and that’s really important for students and for parents. And if schools are not transparent about the policy of predicted grades, there could be problems for the schools down the line. So making your policy well known to students and parents, I think, is really important.
Laura Kahwati: Absolutely, keeping everyone informed along the way and remembering that this isn’t just about teachers and students, but it is also about parents as well and other stakeholders. Simon, where can teachers and school leaders go to find out more information?
Where can you find out more information about creating predicted grades?
Simon Child: Well, last year, Cambridge created a really useful fact sheet on predicted grades, which you can access freely on the Cambridge website. It's called Predicted Grades: A Guide for Schools, so you can hopefully find it relatively easily.
But one thing to add is that if you are using some form of assessment that you are creating yourselves, it's really important to establish the validity and the reliability of those individual assessments. So if you want to develop general and translatable expertise in assessment design, we do offer a range of courses at the Cambridge Assessment Network that can help with this process. And overall, the aim of those kinds of courses is to help people make better predicted grades for their students, and to give students the best chance of success in their later careers and lives.
Laura Kahwati: And of course, on the assessment process, we can also remember your book, The What, Why, and How of Assessment, for teachers too. Kevin, where would you say teachers and school leaders should go to find out more information?
Kevin Ebenezer: Well, our website, as Simon mentioned. I also wrote a blog to accompany that, so that’s on our Cambridge blog pages. And also, I think schools can look at webinars. There’s lots of webinars on how to predict grades.
There's also guidance available on the UCAS website. That's really important, because UCAS does use predicted grades, and the guidance they give is really valuable. Schools can also attend workshops, webinars, and conferences run by organizations such as the Community for International Schools. IACAC and UCAS actually run conferences about this, because predicted grades are really important for universities. So if schools attend these conferences, do go to one of those sessions.
Laura Kahwati: That's really helpful advice. Thank you. And I do remember reading your blog on predicted grades, and how interesting it was to consider the international landscape of predicted grades, as well as whatever region you happen to be in.
Well, that is all that we’ve got time for today on our predicted grades episode. Thank you to Kevin and Simon for being such fantastic guests and sharing some really useful insights. I hope I can predict that everyone will find it informative and helpful and as enjoyable as I have.
Don’t forget to tell your friends and colleagues about us and rate our show on whatever platform you are listening on. Our show notes have lots of useful links that we’ve discussed throughout the episode, so be sure to take a look at them, and you can also follow and contact us on Twitter or Instagram at CambridgeINT. Thanks for listening. We hope you join us again soon.