What Policymakers and Practitioners Get Wrong About Education Research

Frederick M. Hess
Apr 8, 2022 · 3 min read

Last week, drawing on a survey we conducted of the 2022 RHSU Edu-Scholars, I shared some advice they had to offer on how to be an effective researcher. They also had some interesting thoughts for practitioners and policymakers — especially on the question of what they tend to get wrong when applying education research. As someone who has spent many years thinking about the complex relationship between research, practice, and policy, I thought there were several points worth sharing.

A number of respondents noted that policymakers and practitioners frequently overstate the certainty of research findings while giving short shrift to context.

One scholar warned that policymakers and practitioners “fail to appreciate that research is rarely the final word on an issue; findings are situated in place and time”; another, that it’s a mistake to assume that even “causal impact studies tell the correct or complete story”; and a third, that we ought not regard “the latest high-profile study with the most extreme results as the last word.” On a similar note, another scholar wrote, “Using one or a few studies to make policy will almost inevitably go awry. Context matters, and it’s seldom wise to assume a finding will work as projected across all, or even most, contexts.”

A second major theme was that policymakers and practitioners are rarely equipped to understand the research they are given.

One scholar opined, “They understand neither mathematical nor statistical reasoning and as a result are easily duped. Even when exposed to empirical evidence, they are unable to reason.” Another argued, “Factors outside of schools account for most of the problems [policymakers] are trying to remedy. When pressed, I believe most education researchers will admit this, but since factors outside of schools are so hard to impact with policy, we mostly ignore them.”

And a third scholar mused, “It is very difficult for practitioners to access information on what works, under what conditions. Often, they hear more from companies and people trying to sell them something and hear a lot less from researchers. It’s less about them getting something wrong than it is about a crowded marketplace with no one system for vetting different practices and policies.”

A third thread is the reality that school improvement is often incremental and exhausting, but that policymakers and practitioners have incentives to seek dramatic solutions.

One scholar argued that school improvement is harder than it may appear: “What appear like small effects are really quite impressive. Moving the needle in education is HARD.” Another noted that policymakers fall into the danger of “assuming schools are more powerful than they are and ignoring profound consequences of before-school and out-of-school influences. Often, the policymakers and superintendents make absurd promises, producing inevitable disappointment.”

A third scholar lamented that policymakers and practitioners tend to “focus too much on the bottom-line results, and, when things seem to fail to have effects, they focus too little on the reasons why.” In short, as another put it, “Policymakers let their hope for a relatively simple intervention that will overwhelm the complexities of the problem get the better of their common sense.”

Education research is a tool. It’s only as good as the hand that wields it. Lousy research is a poor tool, but even good research can be destructive if used ineptly. It’s important for researchers to appreciate this, and even more vital for educators and policymakers to do so.

Please note that answers were lightly edited for grammar and spelling.

This post originally appeared on Rick Hess Straight Up.



Frederick M. Hess

Direct Ed Policy Studies at AEI. Teach a bit at Rice, UPenn, Harvard. Author of books like Cage-Busting Leadership and Spinning Wheels. Pen Ed Week's RHSU blog.