When you’ve been around the school choice debate as long as I have, you can’t avoid claims that the media plays favorites. Given my own sense that media coverage of schooling tilts to the faddish, progressive, and sensational, I find such concerns easy to credit. But, especially with the attention showered upon all things school choice in the Trump-DeVos era, it seemed a question worth looking into.
So, a few weeks back, two of my terrific research assistants (Brendan Bell and RJ Martin) and I took a look at this in Real Clear Education. We identified five pairs of studies that were largely identical, aside from the results: both studies in each pair measured the same outcomes, used the same basic methodology, and were either conducted by the same researchers or else conducted in the same locale.
Before we get to what we found, here’s a quick sketch of the five pairs of studies:
- In 2009, Stanford University’s CREDO used longitudinal student-level data to analyze charter schools nationwide and found results that leaned negative when it came to reading and math. In 2013, they repeated the study and found much more positive results.
- In 2008, Patrick Wolf and several colleagues evaluated Washington DC’s Opportunity Scholarship Program. At the two-year mark, they found no significant achievement effects for students who were offered vouchers, but their third-year results found voucher recipients making outsized reading gains.
- Northwestern’s David Figlio, in 2014, examined Florida’s private school choice scholarship program and found no significant impact on reading or math. In 2016, he analyzed Ohio’s voucher program, and found a negative impact on reading and math performance for voucher recipients.
- In 2016, Atila Abdulkadiroglu and colleagues studied the Louisiana Scholarship Program one year after implementation, finding that voucher recipients had lower academic achievement. In 2017, Patrick Wolf and Jonathan Mills used the same methodology to re-examine the program at the three-year mark, and found that the earlier losses had essentially vanished.
- Finally, in 2015, Carnegie Mellon’s Dennis Epple and colleagues conducted an international literature review and found that vouchers had no consistent effects on student achievement. In 2016, Danish Shakeel and colleagues analyzed 19 international randomized controlled trials of school vouchers, finding that vouchers tended to raise both reading and math scores.
We tallied the number of major media news stories and editorials in leading outlets that mentioned each of the studies. We examined coverage in the New York Times, Wall Street Journal, USA Today, Washington Post, Los Angeles Times, and The Economist.
The bottom line: News stories in these outlets played no favorites when covering school choice research. Thirteen news stories cited a “positive” study and 15 referenced a “negative” study. While some studies received much more attention than others, coverage was relatively balanced within each pair and across the various outlets.
So, the major media appears to have done an admirably impartial job when reporting on rigorous research in the contested field of school choice.
While the news coverage didn’t play favorites when it came to research, the editorial pages were another story. Editorials and op-eds cited “negative” school choice studies twice as often as they did “positive” studies, with 36 mentions of “negative” studies compared to just 18 of “positive” studies. The 2-to-1 ratio of negative-to-positive was consistent across newspaper editorials and the op-eds they published.
Given this lean, and the interesting fact that these papers were about twice as likely to cite school choice research in their opinion pages as in their news stories (54 mentions to 28), it’s easy to see why school choice advocates might regard the mainstream media as anything but impartial.
Look, editorial sections have every right to take whatever position they will. And news reporters don’t control the editorial voices. That said, when the editorial pages of the nation’s leading newspapers show a collective tilt in how they treat research, and when they talk about that research more than the news pages do, the simple question of whether “media bias” exists gets more complicated than one might expect.
This post originally appeared on Rick Hess Straight Up.