
Educational technology gets a new level of scrutiny

By Faith Connolly | June 9, 2020

In an era when we all have access to copious information to help us make even the most trivial buying decisions (as I write, there are tens of thousands of reviews of clothes hangers on Amazon—I mean, come on, hangers), it’s odd that good data can be hard to find when the stakes are really high. When it comes to buying educational technology for classroom or administrative use, the federal Every Student Succeeds Act (ESSA) is aimed at changing this by insisting that companies provide evidence that their products or interventions are effective. Sharing a few positive reviews on a website isn’t enough.

With taxpayer dollars at stake—not to mention the possibility of setting students back with an ineffective intervention—a little skepticism on the part of principals, superintendents, and other district buyers is justified. That’s why ESSA, signed into law in 2015, established four “tiers of evidence” that schools need to look for before they use federal funding to buy an intervention program or product. Now, if an ed-tech maker wants to claim that their intervention is effective, and if they expect federal dollars to help foot the bill, they have to satisfy one of ESSA’s requirements:

Tier 1, strong evidence: at least one well-designed and well-implemented experimental study, typically a randomized controlled trial.
Tier 2, moderate evidence: at least one well-designed and well-implemented quasi-experimental study.
Tier 3, promising evidence: at least one well-designed and well-implemented correlational study with statistical controls for selection bias.
Tier 4, demonstrates a rationale: a well-specified logic model based on rigorous research, with efforts underway or planned to study the intervention’s effects.

To be able to make these claims, ed-tech companies need to have their products evaluated. So, what’s an evaluation, exactly? There are several kinds, and they can vary greatly in the amount of time and money they require, so an ed-tech maker should choose wisely by discussing options with an expert.

For tier 1 evidence, the “gold standard” of research, which you may be familiar with from pharmaceutical trials, is a randomized controlled trial (RCT). Researchers and evaluators like me love RCTs because they’re so sciency: Participants are randomly assigned to treatment and control groups, the study conditions are tightly controlled and monitored, lots of data is collected, and by the end of the study we can say with a great deal of confidence that the intervention did or didn’t cause the intended outcome. But an RCT can take a long time and cost a lot of money. And sometimes a rigorous efficacy study isn’t the best first step.
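To make that causal logic concrete, here’s a minimal, purely illustrative simulation in Python. Nothing here comes from a real study: the 200-student sample, the score distribution, and the hypothetical 5-point effect are all invented for the sketch. The point it shows is why random assignment lets us read a simple difference in group means as an estimate of the intervention’s effect.

```python
# Illustrative only: simulated data showing the core logic of an RCT.
# The sample size, score distribution, and 5-point "true effect" are invented.
import random
import statistics

random.seed(42)

# Randomly assign 200 simulated students to treatment or control.
students = list(range(200))
random.shuffle(students)
treatment, control = students[:100], students[100:]

# Simulate post-test scores: everyone draws from the same distribution,
# then the hypothetical intervention adds 5 points for treated students.
scores = {s: random.gauss(70, 10) for s in students}
for s in treatment:
    scores[s] += 5

# Because assignment was random, the two groups were alike on average
# before treatment, so the difference in mean scores estimates the effect.
effect = (statistics.mean(scores[s] for s in treatment)
          - statistics.mean(scores[s] for s in control))
print(f"Estimated effect: {effect:.1f} points (true simulated effect: 5)")
```

In a real RCT, of course, evaluators work with actual student outcomes and formal statistical tests rather than simulated scores, but the underlying logic is the same: randomization balances the groups, so the remaining difference can be attributed to the intervention.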

An efficacy study may be premature if the ed-tech company doesn’t have a clear understanding of how clients are actually using their product. Suppose a company has developed an online math program for middle schoolers, and some schools have bought the program for their classrooms. Before jumping right into an efficacy study, the first question to ask is, Are teachers and students using the program as intended? If it’s unclear, then the right place to start is an implementation study. This may save the client money in the long run by steering clear of an efficacy study that shows no results simply because the product wasn’t being used as designed.

An experienced, trustworthy evaluator can offer ed-tech companies a comprehensive plan for their product. This may start with creating a logic model that maps how the product’s features and activities are expected to lead to particular outcomes. Together we can decide what those outcomes are. Then we can work together to determine how to measure short- and mid-term outcomes (or leading indicators) that will give us confidence that changes in the long-term outcome are probable. Are there mid-course corrections that can be made to improve use and implementation of the product . . . which would in turn improve the chances that the long-term outcomes will be achieved?

The aim for an ed-tech developer should not necessarily be to commission a study that meets the highest standard of evidence, but rather the one that meets their needs. The important thing is that, whichever type of claim a developer wants to make, school districts will be looking to ensure it meets ESSA requirements.

District officials have high hopes for their ed-tech purchases. And every ed-tech company I’ve met really does want their products to help students have better educational experiences. Evaluators play a key role in providing sound, research-based information to both parties: the educators and the developers. Ultimately, we all want the same things: better tools for schools that lead to better outcomes for students, which of course are even more important than cool hangers for tidy closets.

Faith Connolly is a research director at McREL International and part of our educational technology evaluation team. Learn more about McREL ed-tech evaluations.


McREL is a non-profit, non-partisan education research and development organization that since 1966 has turned knowledge about what works in education into practical, effective guidance and training for teachers and education leaders across the U.S. and around the world.