RAISEonline: Closing the Gaps and the Introduction of Shading

2014 RAISEonline has introduced changes to the presentation of data on the performance of the group of children for whom the pupil premium provides support. This group, previously known as FSM/CLA, is now referred to as ‘disadvantaged’. The group of pupils who are not disadvantaged is described as ‘other pupils’. Other changes include:

  • All reports now present three-year trends, provided for expected progress, value added, average point score (APS) and threshold measures
  • Greater detail in pupil progress tables
  • A new approach to shading

There is a new approach to shading in the 2014 Closing the Gaps reports which highlights ‘noteworthy numbers’. Ofsted explain that shading is used to indicate ‘educationally important differences’ between the progress of the school’s disadvantaged group and the progress of other pupils nationally from the same starting point.

An explanation of how shading has been applied is provided in the document Guide to the Closing the Gaps section – 2014 onwards, which is available in the RAISEonline Document Library: https://www.raiseonline.org/OpenDocument.aspx?document=298

How useful is the new approach to shading?

The latest School inspection handbook (September 2014) indicates Ofsted’s continued focus on disadvantaged pupils.

‘Inspectors must take particular account of the progress made by disadvantaged pupils compared with that made nationally by other pupils with similar starting points, and the extent to which any gaps in this progress, and consequently in attainment, are closing’.

How useful is the new approach likely to be in helping Ofsted inspectors make valid judgements about the achievement of disadvantaged pupils? I feel that it is not at all useful and has the potential to create unjustifiably negative impressions or to seriously mislead. There is a danger that the new red and yellow shading might imply that the numbers have statistical significance (similar to the continuing green and blue shading). Of course, even where shading does highlight statistical significance, it does not follow that this implies any educational significance.

I can immediately see three reasons why shading is neither appropriate nor useful, and why the assertion that the new shading highlights ‘noteworthy numbers’ and indicates ‘educationally important differences’ is not justified:

  1. There are too few disadvantaged pupils in individual primary schools
  2. The range and spread of some data sets is too narrow
  3. The data sets used for shading are less precise than they need be

Each of these reasons is explained in detail below.

  1. There are too few disadvantaged pupils in individual primary schools

Given the generally small numbers of disadvantaged pupils in a primary school Year 2 or Year 6 cohort, Ofsted’s claim that the use of shading draws attention to ‘noteworthy numbers’ and ‘educationally important differences’ is highly questionable.

2014 RAISEonline reports the national averages for maintained mainstream primary schools of 263 pupils on roll and 26.6% FSM. Assuming seven year groups, that gives an average year-group cohort of roughly 37.6 pupils and thus, roughly speaking, 9.99 FSM pupils per cohort. Adding CLA pupils to this number, it is reasonable to assume approximately 10 disadvantaged pupils per cohort in an average-sized school. Of course, it is in the nature of these average measures (the arithmetic mean) that there may not be a single real school that is ‘average’. Many schools will have more than 10 disadvantaged pupils in a cohort; many will have fewer. Using an average of 10 pupils in a cohort, the group size for any of the four KS1 starting points is unlikely to be more than a maximum of 5 or 6, even assuming that the majority have a starting point of L2.
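For transparency, here is that arithmetic as a minimal Python sketch. The national figures are those quoted above; the assumption of seven year groups (Reception to Year 6) is mine.

```python
# National averages for maintained mainstream primaries, as reported
# in 2014 RAISEonline (quoted above).
pupils_on_roll = 263
fsm_rate = 0.266

# Assumption for this sketch: seven year groups (Reception to Year 6).
year_groups = 7

cohort_size = pupils_on_roll / year_groups   # ~37.6 pupils per year group
fsm_per_cohort = cohort_size * fsm_rate      # ~9.99 FSM pupils per cohort

print(f"{cohort_size:.1f} pupils per year group")
print(f"{fsm_per_cohort:.2f} FSM pupils per cohort")
```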

Indeed, similar assumptions about the average size of cohorts appear to have been made when Ofsted devised the worked example of shading in their Guide to Closing the Gaps, reproduced below, where the group sizes are 0, 3, 4 and 1 for starting points of W, L1, L2 and L3 respectively.

Figure 2

Personally, I prefer to categorise one pupil as an individual, not a cohort or a group.

Many schools perceive the Ofsted inspection process as a ‘deficit’ model: inspection teams all too often focus predominantly on areas of relative weakness and give insufficient emphasis to areas of strength. They come to find fault rather than to identify and celebrate success.

The extract from the Ofsted guidance above does nothing to counter this perception. Of the 12 differences illustrated in the example, 11 are negative. This immediately creates an adverse impression and reinforces the perception of a focus on deficit. Most schools will have similarly small numbers of pupils, which do not constitute significant groups, and thus any extrapolation or generalisation would be extremely unreliable.

  2. The range and spread of some data sets is too narrow

This is closely linked to the small-cohort issue above and applies particularly to the use of shading for expected progress (EP). The 2014 national average EP outcomes for ‘other pupils’ with L2 starting points are 93%, 95% and 96% in maths, reading and writing respectively.

In the Ofsted example above there is a cohort of 3 pupils with L1 prior attainment in mathematics. The national averages for ‘other pupils’ in this group are 84% for EP and 42% for more than expected progress (MEP). 67% (2 out of 3) of the pupils made EP and 33% (1 out of 3) made MEP. Thus the school is reported as having differences from the national averages for ‘other pupils’ of -17% for EP and -9% for MEP. Neither of these percentages meets the criteria for shading, and they are therefore not regarded as ‘noteworthy numbers’ by Ofsted.

This is just as well, as both the -17% EP and -9% MEP differences are meaningless. The only four possible outcomes for 3 pupils are 0%, 33%, 67% or 100%, and thus the only possible differences for EP are -84%, -51%, -17% or +16%. The only way a negative difference could be avoided would be if all 3 pupils achieved the standard. -84% and -51% would both meet the criteria for red shading, -17% does not meet the criteria for either red or yellow shading, and +16% would meet the criteria for yellow shading. Thus, with 3 pupils, there are two ways to get a red shading and only one way each for yellow or no shading. Although the Ofsted example does not use real school-level data, the same principles apply to any real set of data for 3 pupils, and they apply in the same way to reading and writing outcomes.
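The four attainable results can be enumerated directly. Here is a minimal Python sketch, using the 2014 national L1 maths EP figure of 84% quoted above; the shading thresholds themselves are defined in Ofsted’s guide, so only the attainable percentages and differences are computed.

```python
# Every attainable EP percentage for a cohort of 3 pupils, and the
# resulting difference from the 2014 national L1 maths figure for
# 'other pupils' (84%, quoted in the text above).
NATIONAL_L1_MATHS_EP = 84  # per cent

for achieving in range(4):  # 0, 1, 2 or 3 pupils making EP
    pct = round(100 * achieving / 3)
    diff = pct - NATIONAL_L1_MATHS_EP
    print(f"{achieving} of 3 make EP -> {pct:3d}% (difference {diff:+d} pp)")
```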

Similarly, there is a ‘cohort’ of 1 pupil with L3 prior attainment. This pupil made EP but not MEP, and differences of +8% and -37% are reported. As the EP outcome is 100%, it meets the criteria for being a ‘noteworthy number’ and is therefore shaded yellow. This number is noteworthy only in its profound meaninglessness.

As outlined above, with 3 pupils in a disadvantaged cohort there are two ways to get a red shading and only one way each for yellow or no shading, and this is the same for all three subjects. But it is not just cohorts of 3 where the ways of getting a red shading greatly outnumber those for yellow or no shading. Even with 10 pupils in the L2 prior attainment cohort, it is still only possible to get a yellow shading if all 10 pupils make EP. There is one possible outcome which would get no shading and 8 outcomes that would meet the criteria for red, as the table below illustrates.

Table: possible EP outcomes and differences for a cohort of 10 pupils with L2 starting points (mathematics)
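The table can be regenerated with the same approach: a sketch, assuming the 2014 national L2 maths EP figure of 93% quoted earlier (again, the shading thresholds are Ofsted’s and are not reproduced here).

```python
# Every attainable EP percentage for a 10-pupil L2 cohort in maths,
# and the difference from the 2014 national figure for 'other pupils'
# (93%, quoted in the text). Only the final row gives a non-negative
# difference.
NATIONAL_L2_MATHS_EP = 93  # per cent

cohort = 10
for achieving in range(cohort + 1):
    pct = round(100 * achieving / cohort)
    diff = pct - NATIONAL_L2_MATHS_EP
    print(f"{achieving:2d} of {cohort} make EP -> {pct:3d}% "
          f"(difference {diff:+d} pp)")
```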

The same patterns also apply to reading and writing data, and are even more pronounced given that EP outcomes for L2 starting points are higher than in maths (95% and 96% respectively). Indeed, in writing, a school would need a cohort of 23 L2 disadvantaged pupils before any outcome less than 100% would be shaded yellow. In this case two outcomes, 96% (22 out of 23) and 100% (all 23), would meet the criteria for yellow, and all other possible outcomes would be shaded red.
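The figure of 23 falls out of the rounding. Here is a short search (my sketch) for the smallest cohort in which one pupil missing EP still rounds up to the national writing figure of 96% quoted above.

```python
# Smallest cohort for which all-but-one pupils making EP still rounds
# to at least the 2014 national L2 writing figure of 96% -- i.e. the
# smallest cohort in which an outcome below 100% avoids a negative
# difference.
NATIONAL_L2_WRITING_EP = 96  # per cent

n = 1
while round(100 * (n - 1) / n) < NATIONAL_L2_WRITING_EP:
    n += 1
print(n)  # 23 (22 of 23 = 95.65%, which rounds to 96%)
```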

As an aside, it is interesting that the whole cohort in the Ofsted example is only 25 pupils, and the percentage of disadvantaged pupils in this fictional cohort is therefore above the national average, at 32%. I also like the note ‘All school level data are fictional’. Ofsted should adopt this as their guiding principle in inspections.

  3. The data sets used for shading are less precise than they need be

In the Ofsted example above, the school-level data may be fictional, but the national expected progress percentages for ‘other pupils’ of 51%, 84%, 93% and 92% for starting points of W, L1, L2 and L3 respectively are the actual 2014 data for mathematics.

Figure 2 extract

This extract from the example shows that 75% (3 of the 4) of pupils with a starting point of L2 made expected progress. This is then reported as 18% below the average for other pupils nationally. However, as a breakdown of L2 into sub-levels is available elsewhere in RAISEonline, it is difficult to understand why this more precise data was not used for the shading exercise. It could potentially make a substantial difference to the presentation of a school’s performance.

The sub-level breakdown illustrates the great difference in progress made by pupils nationally depending on their precise starting point. 99% of children nationally who started at L2A made expected progress in 2014, but only 76% of those who started at L2C. Thus, in the example above, had the 4 pupils in the cohort started at L2C, their 75% could have been compared with the outcome for the same group nationally and a difference of only -1% reported.
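To illustrate how much the benchmark matters, here is a sketch comparing the same school outcome (3 of 4, i.e. 75%) against the whole-L2 figure and against the sub-level figures. The national values are the 2014 figures quoted above; the L2B figure is not quoted in this post, so it is omitted.

```python
# The same school outcome (3 of 4 pupils = 75%) compared against the
# whole-L2 benchmark and the L2A/L2C sub-level benchmarks. National
# values are the 2014 figures quoted in the text; L2B is omitted as
# it is not quoted here.
NATIONAL_EP = {"L2 (all sub-levels)": 93, "L2A": 99, "L2C": 76}

school_pct = round(100 * 3 / 4)  # 75%
for group, national in NATIONAL_EP.items():
    print(f"vs {group}: {school_pct}% gives a difference of "
          f"{school_pct - national:+d} pp")
```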

Conclusion

The use of yellow and red shading is neither appropriate nor useful. Any possible benefits will be outweighed by the danger that casual or uninformed reading of the report will lead to unjustified conclusions and judgements. Inspection teams looking to make a judgement about how effectively a school provides for disadvantaged pupils, or how well disadvantaged pupils achieve, will inevitably need to look at much more sophisticated evidence and analysis than the new reports provide. Schools will have this evidence and insight. They will know each of their pupils individually and be able to tell the story of each pupil’s performance in its full complexity. They will know how other variables such as gender, ethnicity, first language and special educational needs impinge on the performance of disadvantaged pupils. They will also know about the performance of other pupils in their school who are economically disadvantaged but not eligible for the pupil premium, because they are not registered as eligible for free school meals, and who are thus not included in the disadvantaged cohorts reported on by RAISEonline. In this context the introduction of shading is at best a distraction from meaningful analysis and from appropriate and valid judgements.


4 Responses to RAISEonline: Closing the Gaps and the Introduction of Shading

  1. Hi. Great stuff! Thanks for doing this. I was thinking of doing the same thing but you’ve done a more thorough job than I would have done. One suggestion: seems obvious but in your worked example above, could you state that the comparator for ‘other pupils’ is 84%. Perhaps just put it in brackets somewhere. I know it seems obvious (and it’s in the table) but it would help clarify the example.

    Thanks again for doing this. I’ll retweet.

  2. Pingback: RAISE in your allowance | Education_Researcher
