Educational ‘innovation’ is only as good as the execution
A few weeks back, I offered a modest suggestion for board members and civic leaders undergoing the process of selecting a new schools chief: Instead of seeking an “innovator” or a “change agent,” focus on how candidates plan to pursue their big ideas. Well, Chad Vignola, executive director of the Literacy Design Collaborative (LDC), recently reached out with what I thought was an informative take on the relationship between innovation and execution, drawing on his experience in New York City and leading LDC. I found Chad’s take usefully concrete, and thought it worth sharing:
I thought your recent article on the problem with seeking an “innovative” schools chief was good advice for selecting school district leaders. More important, it spotlighted the pernicious problem of districts obsessing over educational ideas with insufficient focus on successful execution.
That’s my experience in a nutshell. I worked closely for three successive capable New York City chancellors (Crew, Levy, and Klein), each with his own uniquely inspiring leadership skills and transformative ideas. None of these chancellors lacked for innovative initiatives, and I worked hard to execute them. But in an educational culture where the results of an initiative often appear far beyond the horizon, too little attention was paid to execution — particularly when bureaucracies could just “wait out” initiatives or happily move on to the next bright and shiny catchphrase.
Successive chancellors sought to implement new teacher-effectiveness mechanisms to address challenges with teacher talent. Such an initiative should have featured regular formative feedback to teachers and day-to-day development of skills, just as with employees in any industry. Instead, the focus fell solely on summative end-of-year evaluation and tenure decisions, because “firing incompetent teachers” — though they are just a tiny fraction of teachers — is politically far sexier than the day-to-day hard work of developing effective teaching skills.
Likewise, the district implemented a new accountability system with summative school grades, but without a sufficient evidentiary basis for many of the metrics. Instead of testing a system that could have used formative early warning data to inspire educators to increase levels of performance, it focused on after-the-fact punitive grades that were too complex to change an educator’s practice.
In contrast, New York City implemented a small schools reform with remarkable success, enlisting all relevant stakeholders (unions, community groups, and parents). The reform led to statistically significant increases in outcomes for tens of thousands of students in hundreds of schools — unlike every other district in the country that implemented small schools. It was daily attention to implementation and testing, regular collection of formative evidence of success, and repeated model iteration — not rigid fidelity — that set New York apart on small schools. No one would argue that teacher feedback or formative assessment regimens lack manifest promise and research validity. Rather, what we had was a failure to implement, and I would argue that education is 90 percent about the implementation, with attention to results, both successes and failures.
In that regard, you identify one powerful mechanism for vastly improving this implementation component: Tony Bryk’s improvement research. Dr. Bryk seeks to help us get better at the doing of education — rather than merely joining the legions of education idea pontificators. This is great conceptual and practical work, but I’d offer one caution from someone in the trenches: Tony’s powerful ideas can suffer from the same challenge you identify in educational initiatives: poor execution.
Just as there are bright and shiny education reform ideas, there can be bright and shiny process ideas. Improvement processes are challenging, detailed work that depends on effective execution, not talismanic invocation of jargon.
For example, LDC was spun off from the Gates Foundation’s primary Common Core strategy to implement college-ready standards across the country. We received a U.S. Department of Education i3 Validation grant due to the rigorous research underpinning our educator learning model. We knew early on that to address quality and scale challenges, we would have to implement a standardized, flipped, hybrid teacher-learning model — a Khan Academy for adults to learn how to implement rigorous standards-driven instruction in every classroom for students of all needs. We thus launched our online learning platform developed through two-week, user-centered design and agile-development cycles that should have made Dr. Bryk proud.
But what we quickly learned in our first year of implementation was that, because we had used “rockstar” teachers to help us design our platform, it worked great for them — but was often too challenging for many other teachers. Worse, most professional development coaches who would use our platform and resources were comfortable with their usual one-off workshop trainings — not a year-long, job-embedded experience, which often revealed that the coaches themselves were not skilled enough to be asked to train our teachers.
In short, merely invoking the idea of “user-centered design” to launch “flipped PD for teachers” was not enough if it failed in execution to meet its goal: building the skill of all teachers to support all students. Dr. Bryk’s ideas will only be transformational if they are realized in the nitty-gritty hard work of implementation, evidence collection, and learning from both failure and success.
You know, what I love about this take is that it makes clear that execution is the key to meaningful implementation. An undue focus on how things will actually work is often met with eye rolls from those who see themselves as visionaries. But what Chad accurately notes is that this focus on the boring stuff is what determines whether “innovative,” “ambitious” ideas deliver. That’s as true in new ventures as it is in bureaucratic districts. And “improvement science” can help with all this, but it’s not a shortcut. As with everything else, what matters is how it’s used. That’s a bit of wisdom we’d all do well to remember.
This post originally appeared on Rick Hess Straight Up.