I’m not a big fan of the Education Endowment Foundation, partly because they have allowed some pretty poor research in the past, and partly because they have been biased on certain issues. However, they fund RCTs that test particular educational initiatives, and that should at least allow them to identify some popular initiatives that have no effect, or even a negative one.
The latest emperor without clothes is Achievement for All, which, according to the EEF website:
…is a whole-school improvement programme that aims to improve the academic and social outcomes of primary school pupils. Trained Achievement for All coaches offer schools a bespoke two-year programme through monthly coaching sessions that focus on leadership, teaching and learning, parental engagement, and wider outcomes, with a focus on improving outcomes for a target group of children (mostly drawn from the lowest 20% of attainers). The programme has reached a total of over 4,000 English schools.
The EEF’s evaluation of the programme found that:
In this trial, Achievement for All had a negative impact on the academic outcomes of pupils who received the programme over five terms in Years 5 and 6 (ages 9–11). Children in treatment schools made 2 months less progress in reading and maths at Key Stage 2 than children in control schools, where normal practice continued. The same negative effects were found for children eligible for free school meals. Target children (those the intervention was meant to give extra support to) also made 2 months less progress in reading and 3 months less progress in maths. The result for the co-primary outcomes (reading for the whole group and reading for target children) had a very high security rating: 5 out of 5 on the EEF padlock scale.
Given the magnitude of the effects and the consistency of the negative results, these findings are striking. Of particular concern is the programme’s impact on target children and on children eligible for free school meals.
A report in Schools Week gave further details:
The results rank AfA as the worst-performing of more than 100 projects evaluated by the EEF since 2011, with only three other projects recording a negative impact of two months.
Of these, it is the only one with the highest possible evidence rating of five – indicating that the EEF “has very high confidence in its results”.
They also reported the ridiculous reaction of AfA’s founder, Professor Sonia Blandford:
Blandford pointed out that disadvantaged pupils in the AfA pilot schools still “exceeded national expectations, which was our main goal in the intervention”.
She added: “It was a mistake to agree to a study that attempted to assess the effectiveness of our broad yet tailored approach through the narrow lens of two school improvement measures.”
Does this matter? I suppose so. Since it started in 2011, it’s quite possible that 4,000 schools have harmed their pupils’ learning, or at least wasted resources on something that did more harm than good. And it’s worth asking how this happened. Probably the biggest reason this disaster lasted so long is that the DfE endorsed it on the basis of a report claiming positive effects on SEN children, using data collected through:
- Teacher surveys
- Academic sources
- Interviews with strategic leads
- Longitudinal case studies of 20 AfA schools
- Mini case studies of 100 pupils and their families
- AfA events
In other words, the kind of “research” that costs money but that nobody could reasonably believe is a fair way to evaluate an initiative of this sort. The first thing we can learn from this is that the DfE should not endorse projects on this basis. Especially when, chances are, some teachers, like Ms. S. below, could have given a more accurate assessment.
Another point is the extent to which the people who run such organisations become a vested interest, eager to tell politicians and the public that schools are doing something wrong. There is a great deal of expertise in the system among teachers and school leaders, yet it is astonishing how often AfA’s Professor Blandford was a voice in important debates. I have particularly noticed that the debate about exclusions seems to be dominated by people whose positions mean they never have to deal with the consequences of dangerous, out-of-control schools. Professor Blandford was a particularly loud voice on the subject:
Each of these would have been a far better use of an opportunity for a successful headteacher to explain why exclusions are necessary. Yet our entire education system promotes the voices of “experts” ahead of the voices of practitioners with a proven track record.
And I will never forget, as I reported here, that in the days when the Chartered College of Teaching was claiming to be teacher-led, Professor Blandford was one of the first non-teachers to be given a leadership role that, had promises been kept, would have gone to a teacher.
I have always defended the right of non-teachers to help and advise schools, but we need a system in which schools pay attention first to a) the expertise of practitioners and b) what has been shown to work. Not a system in which we only realise, after 4,000 schools and 9 years, that we have been listening to the wrong people.