The Education Technology Industry Network (ETIN) of SIIA has updated the research guidelines it first published in 2011. The new ESSA legislation includes specific definitions of what counts as evidence of impact in product efficacy research. These new guidelines focus specifically on the K-12 market and will affect any effectiveness research you conduct and present in support of new technology products.
These guidelines define “product” broadly as any kind of software or network-based instructional or infrastructure product or service provided to K-12 students, teachers, schools, or education agencies. In addition to detailing the four levels of acceptable research, the guidelines focus on how education companies can show the impact on desired outcomes. This includes not just the causal connection between the product and outcomes, but also ways in which the research can demonstrate that a product has potential for positive outcomes.
“The target audience for this document are the company decision makers…the managers responsible for development, evaluation, and marketing of these products.”
Traditional research norms, such as randomized trials and publication in peer-reviewed journals, are often impractical in edtech because product lifecycles can be shorter than the time needed to plan, conduct, interpret, and report a research study. In particular, the prevalence of mobile devices in schools and the move to the cloud have enabled frequent software updates, accelerating development cycles.
New ESSA Evidence Standards
ESSA more clearly tied categories of federal funding to evidence. The type of research ESSA focuses on evaluates the impact (the effectiveness or efficacy) of an edtech product on educational outcomes. It is useful to know how much of the market these new research requirements affect: “61% of software industry revenue came from enterprise management and instructional support, such as student information systems, curriculum management systems, professional development programs, assessment applications, data warehousing systems, and information productivity applications.”
While the terms “efficacy” and “effectiveness” are often used interchangeably, there is a distinction. Efficacy studies show how a product can work under ideal conditions. Effectiveness studies test it on a larger scale in regular field conditions.
ESSA now outlines four specific levels of evidence, with entry-level evidence being “4” and the strongest being “1.” Here is a short explanation of each, from strongest to weakest:
Strong Evidence—Level 1 requires at least one experimental study with random assignment of participants to a group receiving the program and a control group.
Moderate Evidence—Level 2 calls for at least one “quasi-experimental study” where one group that has used the product is compared to a group that has not, and where both groups have similar characteristics.
Promising Evidence—Level 3 requires a “correlational study with controls for selection bias.” This level of evidence would previously have been unacceptable under the standards of the What Works Clearinghouse (WWC). “Promising” means that while the evidence of impact may be weak, the study indicates the product is worth exploring further.
Entry Level—Level 4 provides justification for trying or evaluating a product for evidence of impact. A provider with no research evidence can begin here by offering a rationale, based on existing research, showing that strategies similar to those in the program are likely to produce positive, relevant outcomes. There is also a requirement to demonstrate “ongoing efforts” to study the program, which allows schools and districts to participate in an ongoing program of research before any correlational findings are available.
The most common way of sharing research is to publish a report and post it on the provider’s website. Using an independent researcher strengthens the credibility of the research. SIIA recommends offering the report as a free download to ensure it reaches as many readers as possible and to build educators’ trust in the results.
The Guidelines for Reporting Edtech Research is a detailed blueprint for the kind of research ESSA requires and for developing a workable plan to establish efficacy for your K-12 product or service. The guide’s four sections (Getting Started; Designing the Research; Implementing the Design; and Reporting the Results) break down the requirements, explain why they matter, and show how to structure and report the evidence of your findings.