The Learning Engineering Tools Competition is a multi-million-dollar funding opportunity for EdTech innovation that leverages digital technology, big data, and learning science to meet the urgent needs of learners worldwide (Tools Competition, 2024). Since its launch, the competition has awarded over $17.5 million and supported the development, growth, and transformation of more than 130 teams from 44 countries[1]. The winning tools address learning across the lifespan and in classroom, home, and workforce contexts.
We have begun to conduct annual impact studies of this competition that assess how different winning teams define and measure their impact on teachers, learners, and communities. Given that Learning Engineering (LE) is still emerging and defining itself as a field (Goodell & Kolodner, 2022), we are particularly interested in how the competition’s focus on LE shapes the work of diverse teams and in identifying how teams are responding to the opportunities and challenges of LE (Baker, Boser, & Snow, 2022).
In this paper, we present an initial evaluation of 32 diverse teams from the 2023 Tools Competition. Our analysis draws from surveys and interviews conducted with these teams at the start of the year and mid-way through it to examine: 1) teams’ initial perceptions about LE, 2) how opportunities presented by the Tools Competition shaped teams’ understanding of LE, 3) how diverse teams integrated principles of LE into their research and tool development, and 4) how these tools are expected to impact the broader LE community. In doing so, we further work at the intersection of LE, the learning sciences, and education technology to facilitate discussions about theoretical, human-centered, and practical aspects of conducting LE work and to inform the development of future iterations of the Tools Competition.
The data reported in this paper were gathered from the 32 winning teams from the 2023 Learning Engineering Tools Competition. Specifically, we report on data collected through surveys and semi-structured interviews with all teams at the start of the funding year (September) and six months later at the halfway point (March). The data presented here are part of a larger and ongoing effort to support the teams in developing their tools and documenting their impact on learners, the EdTech field, and the LE community.
In the initial surveys and interviews, teams were asked several questions, including ones focused on understanding their familiarity with LE and their plans for engaging in LE work. Mid-year surveys and interviews revisited these questions while asking teams to reflect on how working through the project had helped shape, refine, or change their thinking about LE.
Interviews were recorded and transcribed by the research team. Interview transcripts and survey responses were reviewed by all members of the research team. We qualitatively coded open-ended comments using inductive thematic analysis (e.g., Braun & Clarke, 2006) to develop initial thematic categories related to our research questions and interests. Using constant comparison, we iteratively developed, refined, and tested these initial categories with our larger research team over a period of six months. In this paper, we report a preliminary set of findings from this thematic analysis to support future work.
In our initial surveys and meetings with teams, it was apparent that many of the teams had been unfamiliar with LE prior to applying to the competition. Multiple teams acknowledged that they did not have a working definition of LE at the start of the funding cycle, and fewer than half of the teams had interacted with external researchers with backgrounds in LE, the learning sciences, or other related fields.
When asked, “How is your tool enabling researchers and engaging with the broader learning engineering community?” many teams articulated a linear and unidirectional understanding of LE. That is, they interpreted the goal of their LE efforts as releasing their data to researchers for additional analysis, rather than the iterative or cyclical process typically associated with LE. Few teams mentioned plans for connecting with those researchers either to “close the loop” so that findings could support refinements of their tool or to involve researchers in the larger iterative design process.
In the mid-year surveys and interviews, all but four teams indicated that their involvement in these projects and in the Tools Competition community had evolved their thinking about LE. Teams’ responses illustrated how the Tools Competition’s emphasis on LE had shaped successes and challenges in their research and development processes in several ways, detailed below.
By mid-year, most teams reflected that they were more aware of LE as a field or discipline rather than as a specific task. One team member noted that becoming familiar with LE “...helped give me the vocabulary [to] think about a lot of the things that we're thinking about.” Similarly, a team wrote, “We have learned quite a bit of terminology about different kinds of research and we have begun to prioritize in-house research studies as well as seek out research partners.”
As another team noted:
We've come to appreciate LE as a more holistic discipline that not only involves the design and technological aspects but also deeply integrates pedagogical theories, learner feedback, and iterative improvement based on data-driven insights […] This broader perspective has informed our approach to the tool, emphasizing the importance of stakeholder collaboration, adaptability, and continuous refinement to meet learners' evolving needs.
This reconceptualization demonstrates the team’s developing understanding of the interdisciplinary nature of LE, particularly how aspects of the learning sciences (e.g., pedagogical theory, iterative improvement) can guide research and development. The team member emphasizes both top-down (theory-driven) and bottom-up (data-driven) aspects of development, alongside a recognition that learners’ experiences and needs are core to how the tool should be developed. Other teams highlighted that a focus on LE allowed them to further “empower relevant stakeholders.” Collectively, teams shared their increased efforts to seek out LE collaborators by presenting at conferences and consortia or joining networks to share their work.
Recent work has highlighted that many EdTech tools fail to make an impact because end-user or stakeholder feedback is often sought only in the final steps of development (McCarthy & E. Yan, 2024; L. Yan et al., 2023). That is, tools are built for learners by designers rather than with the learners and practitioners who use them. Involving stakeholders earlier and more deeply increases the likelihood that a tool will be usable, interesting, and relevant to the educational context, which in turn supports more use and more opportunity for impact.
Responses from the teams at the mid-year evaluation suggested that the focus on LE had encouraged them to move away from merely “getting feedback” toward more collaborative, user-centered, or participatory efforts. For example, as one team noted, “We’ve been having more conversations with our educators and creating relationships with them earlier than we would have previously.”
Several teams illustrated what we characterize as expanding conceptions about data, data use, and data privacy as a result of engaging with principles of LE. For example, one team shared, “Prior to this competition we had assumed that due to the confidential nature of our data, sharing was impractical. However the potential of our AI [data] has become increasingly clear.” This team summarized the sentiments of several teams whose interest in developing datasets that preserve data privacy and reflect other LE principles grew significantly as a result of participating in the Tools Competition. Similarly, another team reported, “We have realized that qualitative data (e.g. our data from user interviews) is super valuable and maybe more valuable than our quantitative data because it helps [us] know why and how our tool supports learning rather than just if it does.” Likewise, another team shared, “Initially, [we thought] that LE had to be done in a formal manner […] now I understand that it can be implemented in a more informal manner and it should be central to a companies [sic] development, especially in the early stages.” These statements underscore how participating in the Tools Competition considerably evolved some teams’ understanding of the kinds of data LE entails (e.g., qualitative and quantitative data) and of how such data can inform product development.
While teams were excited to apply principles of LE to development, they also struggled to integrate LE principles and to create and sustain iterative development cycles within their tight timelines, existing roadmaps, and limited budgets. While the competition encouraged teams to onboard external LE researchers to kickstart this type of work, many teams indicated that it was difficult for both the tool team and the external researcher to “make space” for these efforts. Likewise, even among teams that were able to leverage their understanding of LE to begin shifting their approach to research and development, some still instantiated and articulated unidirectional views of LE. For example, several teams continued to view LE as a means to “provide data” rather than as an opportunity for co-design and partnership through data. Similarly, few tools allowed researchers to author their own content or change features, limiting the extent to which the tools could serve as test beds.
However, these limitations appear to stem from short timelines rather than from an unwillingness to engage with LE principles. For example, one team shared that while they had yet to deeply integrate or engage with LE principles, they had developed significant interest in instrumenting their tool in the future in ways that would support greater collaboration and opportunities for research, stating, “part of what we're also kind of keen to do is use these couple of years to internally start to prototype research methodologies or ways in which we can run quick studies internally - which are more robust than just doing a baseline/endline and seeing how the kids are moving […] And another goal was, can some of this kind of A/B testing be built into the tool.”
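To make concrete what building A/B testing directly into a tool might involve, the following is a minimal sketch in Python of one common approach: deterministically assigning each learner to a study condition and logging outcome events for later internal analysis. All names here (assign_condition, log_event, the hint-style study, and the file path) are hypothetical illustrations for discussion, not any team’s actual implementation.

```python
# Minimal sketch of in-tool A/B testing instrumentation (hypothetical names).
# Learners are deterministically assigned to a condition so the assignment is
# stable across sessions, and outcome events are logged for internal analysis.

import hashlib
import json
import time


def assign_condition(study_id: str, learner_id: str,
                     conditions=("control", "variant")) -> str:
    """Deterministically map a learner to one of the study conditions."""
    digest = hashlib.sha256(f"{study_id}:{learner_id}".encode()).hexdigest()
    return conditions[int(digest, 16) % len(conditions)]


def log_event(log_path: str, study_id: str, learner_id: str,
              condition: str, event: str, value=None) -> None:
    """Append a single outcome event as a JSON line for later analysis."""
    record = {
        "timestamp": time.time(),
        "study_id": study_id,
        "learner_id": learner_id,
        "condition": condition,
        "event": event,
        "value": value,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    # Example usage inside a tool's session flow: branch the learner experience
    # on the assigned condition, then record whether the practice item was completed.
    condition = assign_condition("hint-style-study", "learner-123")
    log_event("events.jsonl", "hint-style-study", "learner-123",
              condition, "item_completed", value=True)
```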
In summary, we reported on an initial set of findings from one iteration of the annual Tools Competition. Our findings illustrate teams’ initial perceptions and reflections about LE and how these perceptions and reflections: 1) quickly evolved through participation in the Tools Competition and 2) informed their R&D.
Two key themes emerge from our findings that extend prior work on LE. First, the Tools Competition has provided essential support for addressing critical challenges and leveraging opportunities as defined by Baker, Boser, and Snow (2022). Specifically, it has offered resources to tackle issues such as data infrastructure, evaluation methodology, and networking, enabling teams to collaboratively scale technical solutions and develop more context-sensitive tools and products. At the same time, teams continue to face persistent challenges, such as forming partnerships with schools and teachers, integrating data effectively across schools and industry, and conducting longitudinal studies that draw from multiple data sources. Such barriers should remain central concerns for future LE work focused on developing tools and products that support long-term student achievement.
Second, our findings underscore a notable shift in teams’ perceptions of LE, recognizing its alignment with human-centered principles, as emphasized by Goodell, Kessler, and Schatz (2023), Kessler et al. (2022), and Thai et al. (2022). In particular, teams from diverse backgrounds explored and found ways to integrate LE principles into various human-centered and participatory design processes. Many emphasized how LE frameworks enhanced the human-centered nature of their work, fostering greater collaboration and a stronger focus on end users, particularly teachers and students. We see this as an ongoing opportunity to examine how LE-aligned changes impact R&D practices and, ultimately, how these efforts affect learning.
These findings further underscore several questions for future research for the Tools Competition and LE research. Such questions include: How do teams' perceptions of LE continue to evolve beyond the Tools Competition? What specific factors contribute most to teams adopting LE principles in their work? What kinds of partnerships (e.g., with schools, ed-tech companies, policymakers) are most effective for sustaining LE-driven solutions? How do different competition formats or mentorship structures influence teams' ability to apply LE principles?
As described previously, our work is ongoing and has several limitations that we aim to address through future work and more robust, large-scale analyses. These include broader analyses of initial, mid-year, and end-of-year surveys and interviews with these teams, as well as with teams from subsequent years of the Tools Competition, to develop a deeper understanding of how diverse teams understand and adopt LE principles.
[1] https://tools-competition.org/
This impact evaluation is funded by Tools Competition sponsors. Learn more at https://tools-competition.org/.