A Chinese principal friend recently wrote me:
We tried Project Based Learning (PBL) in our school. I know the research says it is a good thing. But when we did it, we failed. Students, teachers, and parents were upset. We went back to the old way.
My view: This is Example #1 billion of 3 key challenges for school leaders.
First, almost every education idea can work or fail; it’s all in the implementation.
Second, implementing new stuff carries large time costs. Corporations installing new software, say, discuss these costs explicitly. Schools, in my experience, are terrible at modeling them. For much of my career I’ve been worse than terrible, underestimating the “true” time cost by a factor of 100.
Third, “education research” is unreliable and rarely replicated. Even randomized controlled trials are often done by advocates, so they put their thumb on the scale.
To this last point, I looked at a recent study of PBL.
I appreciate all the work it takes to create an RCT like this, even one with just 48 teachers. The number of permissions the professors need to secure is daunting! So much paperwork, just to try a small experiment.
Here’s what I think they did.
Group A teachers:
Agreed to teach 80 lessons from a specific PBL curriculum, created (if I understand correctly) by the professors.
66 lessons were actually taught.
There were ~11 visits to each teacher’s classroom by coaches, who provided guidance on how to teach this particular curriculum better.
Group B teachers:
Agreed to teach 80 lessons about the same social studies topics, using traditional pedagogy.
51 lessons were actually taught.
There were 0 coaching visits.
The authors give an 11-question test at the end.
They conclude that PBL is much better.
1. They discuss the 66-versus-51 difference, but argue it’s too small a difference to drive such a large (0.48) effect size.
2. They didn’t seem to acknowledge the coaching dosage difference, though perhaps I misunderstood that. Overlooking it would actually mean missing two different effects: the Hawthorne Effect (just knowing that people are “watching you” changes your behavior), plus whatever value the coaching itself may have brought. This seems like a big potential driver.
3. They also acknowledge that they wrote the test themselves, and that might contribute to the difference. Ya think?
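For readers who don’t work with effect sizes daily: a 0.48 effect size (Cohen’s d) means the treatment group’s average score sits about half a standard deviation above the control group’s. A minimal sketch of the arithmetic, with invented numbers (the study doesn’t publish its raw means, so everything below is purely illustrative):

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between a treatment and control group."""
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical scores on an 11-question test; the means, SDs, and
# group sizes here are made up just to show how d = 0.48 could arise.
d = cohens_d(mean_t=6.96, mean_c=6.00, sd_t=2.0, sd_c=2.0, n_t=24, n_c=24)
print(round(d, 2))  # → 0.48
```

Notice how sensitive the result is to the denominator: on an 11-point test a gap of about one question, paired with a modest spread in scores, is enough to produce a “large” effect size, which is one reason a test written by the study’s own authors matters so much.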
They don’t show their 11-question test, but describe one question as providing a map, then asking kids to explain how they’d get from Point A to Point B.
But it seems (this is not clear) that while the control group teachers merely used maps in their 51 lessons, the PBL kids got a 20-lesson unit built around the concept of maps. The potential dosage difference here seems huge.
I would love to see an attempted replication of the experiment that solves for #2 (take away all the coaching, which is the typical “new curriculum” approach) and #3 (get a neutral person to come up with the tests).
I eat my own cooking, too, in this way: RCTs have found gigantic gains associated with Match tutoring, which I helped to develop, and yet I am cautious in promoting high-dosage tutoring. I realize the examples of Match (Boston, Houston) and Saga (Chicago, Lawrence) involve unusually good leaders, systems, and actual tutors…in a way that is unlikely to replicate if someone else just reads the study and tries it themselves (like the Chinese principal with PBL). The very practice I believe in most strongly, with giant effect sizes validated by RCTs…I will acknowledge is quite hard to replicate.