In this blog we’ll outline how GCSE and A Level grading will work this year. We’ll discuss some potential pitfalls as well as benefits of this approach, and outline what data analysis schools, colleges, local authorities, and academy trusts may find valuable. This year, in spite of the lack of formal accountability measures, having access to understandable and accurate data will be as important as ever.
How will grades be calculated?
Simply put, grades will be calculated using judgements by teachers rather than exams (full Ofqual guidance here). Schools and colleges are being asked to judge, based on all available evidence, what grade each pupil would have achieved in each subject if they had been able to sit exams and submit all coursework.
Teachers are then asked to work with colleagues to rank all pupils within each grade in each subject. For example, if 15 pupils in one school are graded to get a 5 in History, those 15 should be ranked from highest to lowest. These assessments will then be submitted to exam boards, with a deadline no earlier than May 29th.
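For schools managing this in a spreadsheet or data tool, the within-grade ranking step can be sketched as below. This is purely illustrative: the pupil names and the `judged_strength` column are hypothetical stand-ins, since in practice the order reflects teachers' holistic judgement across all available evidence rather than a single score.

```python
import pandas as pd

# Illustrative: three pupils all assessed at grade 5 in History.
# "judged_strength" is a hypothetical ordering value, not a real measure.
history_grade5 = pd.DataFrame({
    "pupil": ["Pupil A", "Pupil B", "Pupil C"],
    "judged_strength": [0.62, 0.81, 0.45],
})

# Rank from highest to lowest within the grade (1 = most secure).
history_grade5 = history_grade5.sort_values("judged_strength", ascending=False)
history_grade5["rank_within_grade"] = range(1, len(history_grade5) + 1)
print(history_grade5[["pupil", "rank_within_grade"]])
```

The key point the sketch captures is that every pupil at the same grade in the same subject gets a distinct rank position before submission.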
Exam boards, using a model currently being consulted on by Ofqual, will moderate grades to ensure consistency across schools, and fairness compared to previous years. Although this model is yet to be defined, it is likely to take into account expected national outcomes, pupil prior attainment, and historic performance of the school/college (although Ofqual’s own analysis shows that a centre’s prior year performance can be a poor predictor of results).
While the number of pupils at each grade per subject at a school or college may be moderated up or down, the rank order of pupils will not be changed. The ranks of pupils within each subject, as determined by schools and colleges, will be final. Neither exam boards nor Ofqual will adjust these rankings.
GCSE and A Level results days will then go ahead as planned on Thursday 20 August and Thursday 13 August respectively.
Benefits and drawbacks of teacher assessments
Just as exams can be an imperfect judge of a pupil, so can these assessments. However, while this system has been brought in because it is not possible for pupils to sit exams, there are reasons to think that teacher assessments may be a more reliable judge of a pupil’s level of performance than a one-off exam. As laid out in Ofqual’s guidance, these assessments will be based on the full range of ‘available evidence’, including bookwork, classwork, and mock exams. In fact, Ofqual’s literature review found some evidence that teachers’ estimates:
‘have potentially greater validity than formal tests’.
Predicting a pupil’s grade from such a range of evidence gathered over a long period of time may, for many, seem fairer than the results of a highly pressured one-off exam.
On the other hand, Ofqual’s review stresses that:
‘there is also a range of evidence that highlights issues of low reliability and potential bias in teacher assessments (reviewed in Harlen, 2005) relating to a range of student characteristics, including gender and special educational needs as well as ethnicity and age’
Concerns have been raised about the unconscious human bias involved in making teacher assessments. There may be some evidence that, at a cohort level, some groups of pupils could be assessed more harshly, or indeed more favourably. Specifically, there is concern that unconscious biases and stereotypes relating to gender, ethnicity, disadvantage and SEN status may influence assessments. Others have worried about how assessments will vary between state-funded and independent schools.
We believe that such concerns should act as an impetus for schools, colleges, local authorities, and academy trusts to use data analysis to ensure teacher assessments are reliable and robust. We know that using data to support decision making can help to remove such biases.
The role of data in making assessments
It is crucial that these teacher assessments are as fair and accurate as possible. As discussed, moderation will not change the rankings of pupils determined by the teacher assessments. Therefore, while moderation will try to ensure fairness at a school/college level and between regions/LAs, it will do nothing to ensure fairness and reliability of each school’s or college’s internal ranking of pupils. Getting this right is clearly important to the pupils themselves and is also vital for schools and colleges to ensure accurate benchmarking and reliable evidence with which to make school/college improvement decisions.
Schools, colleges, local authorities and academy trusts should use data and modelling to:
- Understand how their teacher assessments compare to prior year results, including their implications for headline measures such as Attainment 8 when aggregated to the whole school level
- Allow scrutiny of assessments that are very different to the grades expected based on prior attainment and the school’s historic performance in each subject
- Analyse teacher assessments by pupil group (e.g. disadvantaged status, ethnicity and gender) to explore potential unconscious biases
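The pupil-group analysis in the last bullet can be sketched in a few lines of pandas. Everything here is hypothetical: the column names (`pupil_group`, `assessed_grade`, `expected_grade`) and the grades are illustrative placeholders, not a real school export or a specific expected-grade model.

```python
import pandas as pd

# Illustrative data: assessed grades alongside grades expected from a
# prior-attainment model (both hypothetical).
data = pd.DataFrame({
    "pupil_group": ["A", "A", "B", "B", "A", "B"],
    "assessed_grade": [6, 5, 4, 5, 7, 4],
    "expected_grade": [6, 5, 5, 5, 7, 5],
})

# Mean gap between assessed and expected grade, per pupil group.
# A consistent negative gap for one group does not prove bias, but it
# is exactly the kind of pattern worth a second look during moderation.
data["gap"] = data["assessed_grade"] - data["expected_grade"]
group_gaps = data.groupby("pupil_group")["gap"].mean()
print(group_gaps)
```

In this toy example group B sits, on average, below its expected grades while group A does not; at a real cohort level, such a gap would prompt a review of those assessments rather than an automatic adjustment.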
How can data help you produce, scrutinise and understand your assessments?
As ever, we’ll be working with our clients to provide them with detailed analysis of Key Stage 4 and Key Stage 5 attainment and progress. We have developed teacher assessment moderation and analysis tools for GCSEs and A Levels to help schools with their assessments, and then to analyse the effect of these assessments on aggregate attainment and progress scores. These tools facilitate teacher assessment moderation by flagging up how assessments differ from:
- the grades expected based on the pupil’s prior attainment
- the grades expected when allowing for the school’s historical progress scores
- the grades expected based on pupils’ scores in recent internal testing
- the school/college prior year performance.
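The flagging logic behind a list like this can be sketched as follows. The threshold of one grade, the column names, and the data are all assumptions for illustration; a real moderation tool would compare against several expected-grade baselines, as in the bullets above.

```python
import pandas as pd

# Illustrative assessments alongside a single hypothetical expected grade
# (e.g. derived from prior attainment).
assessments = pd.DataFrame({
    "pupil": ["P1", "P2", "P3", "P4"],
    "subject": ["History"] * 4,
    "assessed_grade": [5, 7, 3, 6],
    "expected_grade": [5, 5, 5, 6],
})

# Flag any assessment more than one grade away from expectation,
# in either direction, for discussion during moderation.
assessments["difference"] = (
    assessments["assessed_grade"] - assessments["expected_grade"]
)
flagged = assessments[assessments["difference"].abs() > 1]
print(flagged[["pupil", "assessed_grade", "expected_grade", "difference"]])
```

A flag is a prompt for professional discussion, not a correction: a pupil genuinely may be assessed well above or below expectation, but such cases should be looked at deliberately rather than slip through unnoticed.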
Our moderation analysis report then aggregates assessments up to provide in-depth pupil group and subject analysis. The report allows schools and colleges to explore how scores for different groups of pupils, or for each subject, differ from the prior year and/or from expected scores based on pupils’ prior attainment.
Alongside this, we will provide our on-the-day results service to help schools, colleges, local authorities and academy trusts understand and benchmark their results as soon as the final grades are issued. This will add valuable context by showing how awarding body moderation has affected their grades compared to other schools.
Further down the line, as data is shared by schools and the DfE, we’ll be exploring how best to analyse and use this new data alongside previous and future years’ data. We’ll work to identify trends and differences from previous years that might be explained by this year’s system of grading. This will help our clients understand how best to interpret this year’s data when making decisions now, and when looking back at trends.
The changes to the GCSE and A Level grading system this year will, in some ways, increase the need for schools and colleges to have reliable and accurate data. We will be working with schools, colleges, local authorities and academy trusts to provide them with this support. We have already developed tools to help schools and colleges moderate assessments and we will provide analysis to help them understand what these assessments mean at a cohort and subject level.
At the local authority and national level, Key Stage 4 and Key Stage 5 attainment and progress data published by the DfE this year will need to be well understood in the context of the grading system. Analysis of this data will therefore be hugely important.
We look forward to working together with our clients and partners to support accurate and fair teacher assessments, and enable robust decision making based on a clear understanding of this year’s attainment and progress data.
Click here for a full summary of the secondary and post-16 data analysis services we will be providing this summer.
If you are interested in discussing any of this analysis with us or have any other ideas about how we can support you then please do get in touch.