Do Educational Apps Actually Help Kids Learn?
The last few years have made clear that education is increasingly infused with technology, but also that much ed tech is disappointing and distracting. Heck, I spent a whole chapter in The Great School Rethink on this very point and on what it’ll take to do better. Well, that’s what makes a new meta-analysis by Jimmy Kim and Josh Gilbert so intriguing. Kim, a professor at the Harvard Graduate School of Education, and Gilbert, a Ph.D. student at Harvard, dug deep into the research literature to figure out what we know about when and why apps actually help students learn. I think it’s fascinating stuff and asked if they’d mind sharing some of the takeaways. They were good enough to oblige. Here’s what they had to say.
— Rick
Can educational apps really help children learn? It’s a question that has become especially relevant since the COVID-19 pandemic, when school closures forced millions of students into online learning. With the abundance of educational apps available, parents and teachers are wondering whether these apps are truly effective at improving student learning outcomes such as math or reading test scores.
Researchers often conduct randomized controlled trials (RCTs) to test whether an educational app is effective in promoting student learning. In these studies, students are randomly divided into two groups: one that practices skills with the educational app and another that does not. Individual RCTs provide valuable insights, but they often involve small, homogeneous samples, which makes it challenging to draw broader conclusions about the overall effectiveness of educational apps.
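For readers curious about the mechanics, here is a minimal sketch, in Python with entirely invented numbers, of how a single RCT is typically summarized: post-test scores from the app group and the control group are compared, and the difference is expressed as a standardized effect size (Cohen’s d) so that studies using different tests can be placed on a common scale. This is illustrative code, not anything from the studies themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical post-test scores for 40 students randomly assigned
# to practice with an app (treatment) or continue as usual (control).
treatment = rng.normal(loc=52, scale=10, size=20)
control = rng.normal(loc=48, scale=10, size=20)

# Cohen's d: the difference in group means, scaled by the pooled
# standard deviation, so results from different tests are comparable.
n_t, n_c = len(treatment), len(control)
pooled_sd = np.sqrt(
    ((n_t - 1) * treatment.var(ddof=1) + (n_c - 1) * control.var(ddof=1))
    / (n_t + n_c - 2)
)
d = (treatment.mean() - control.mean()) / pooled_sd
print(f"Effect size (Cohen's d): {d:.2f}")
```

By convention, an effect size around 0.2 is read as small and 0.8 as large, though what counts as meaningful depends heavily on context.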
This is where “meta-analysis” comes in. Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of the overall research landscape. In meta-analysis, the data points are not the test scores of individual students, as in a single study, but rather the results of many studies. This lets us study the consistency of results and when or why they vary.
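Continuing the sketch from above, the simplest version of this pooling weights each study’s effect size by its precision and checks whether the studies disagree more than chance alone would predict. The numbers below are invented for illustration, and actual meta-analyses (including ours) use more elaborate models.

```python
import numpy as np

# Invented effect sizes (Cohen's d) and sampling variances
# from five imaginary app studies.
d = np.array([0.35, -0.05, 0.60, 0.20, 0.45])
v = np.array([0.04, 0.02, 0.09, 0.03, 0.05])

# Inverse-variance weighting: more precise studies (smaller
# variance) count more toward the pooled average.
w = 1 / v
pooled = np.sum(w * d) / np.sum(w)
se = np.sqrt(1 / np.sum(w))

# Cochran's Q: do the studies vary more than chance would predict?
Q = np.sum(w * (d - pooled) ** 2)

print(f"Pooled effect: {pooled:.2f} (SE {se:.2f}), Q = {Q:.1f}")
```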
Our meta-analysis sampled 36 studies focusing on educational apps designed to improve math and reading skills in children aged 3 to 9. The findings bring good news: The overall effect of these apps is positive. But before parents and teachers rejoice and hand over tablets to their kids for the entire day, it’s important to note that the average positive effect hides significant variation in app effectiveness. Our study revealed that the effects of the apps ranged from slightly negative to hugely positive. With such a wide range of outcomes, it’s crucial to explore the characteristics of the studies and apps that may explain these differences.
We identified three key variables that accounted for the variation in app effectiveness. First, the effects were larger in studies that used tests specifically designed to measure the skills targeted by the app, rather than relying on externally developed standardized tests. Second, apps focusing on “constrained” skills, such as letter names or counting, showed larger effects compared with those targeting “unconstrained” skills like vocabulary or arithmetic. Third, the effects were more pronounced in preschool-aged children compared with those in kindergarten to 3rd grade.

Interestingly, our findings also highlight what does not explain the differences in app effectiveness. The “dosage” of the app study — including the number of sessions, time spent per session, and duration of the study — did not predict variations in effectiveness. Additionally, there were no significant differences between apps aimed at improving literacy outcomes versus math outcomes.
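For those who want to see how such moderator analyses work under the hood, they are often run as a meta-regression: effect sizes are regressed on study characteristics, again weighting each study by its precision. The sketch below uses invented data and hypothetical codings for two of the moderators named above (researcher-developed test vs. standardized; preschool vs. K-3); it is not our actual model.

```python
import numpy as np

# Invented effect sizes, variances, and moderator codings for six
# imaginary studies: was the outcome a researcher-developed test?
# Was the sample preschool-aged (vs. K-3)?
d = np.array([0.55, 0.15, 0.70, 0.10, 0.40, 0.05])
v = np.array([0.04, 0.03, 0.06, 0.02, 0.05, 0.03])
researcher_test = np.array([1, 0, 1, 0, 1, 0])
preschool = np.array([1, 0, 1, 0, 0, 0])

# Weighted least squares meta-regression: regress effect sizes on
# study characteristics, weighting each study by its precision.
X = np.column_stack([np.ones_like(d), researcher_test, preschool])
W = np.diag(1.0 / v)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)

for name, b in zip(["intercept", "researcher_test", "preschool"], beta):
    print(f"{name}: {b:+.2f}")
```

A positive coefficient on a moderator indicates that studies with that characteristic tended to report larger effects.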
So, what does this all mean for parents and teachers? Before making any drastic decisions based on the latest study suggesting positive effects of an educational app, it’s worth asking some critical questions about how the study was conducted. These questions will help evaluate the study’s findings and their relevance to individual situations.
First, it’s important to determine whether the study measured success using a standardized test or a researcher-developed test. Standardized tests are widely recognized and provide a clear benchmark for comparison, while researcher-developed tests may be more tailored to the specific skills targeted by the app. In general, studies that use standardized tests are more likely to be generalizable to other contexts than those that use researcher-developed tests.
Second, it’s essential to assess whether the study measured a constrained or unconstrained skill. Constrained skills refer to specific abilities like letter names or counting, which are more easily quantifiable and targeted by educational apps. Unconstrained skills, on the other hand, encompass broader abilities such as vocabulary or arithmetic, which may be more challenging to improve solely through app usage. Knowing which types of skills were evaluated will help determine the potential benefits of the app in question.
Finally, it’s valuable to consider whether the study focused on preschool-aged children or those in kindergarten to 3rd grade. Children at different developmental stages may respond differently to educational apps, and age-specific factors can influence their learning outcomes. Understanding the age group involved in the study will assist in gauging the relevance of the findings to the specific age range of interest.
Parents, teachers, and researchers should remember that the measures used in a study matter significantly. By critically evaluating the methodology and considering these important factors, we can make more informed decisions about the use of educational apps for children. The essential question isn’t simply whether apps work but rather for whom and under what conditions they are effective.
This post originally appeared on Rick Hess Straight Up.