EdTech Archives: Proceedings of the Learning Engineering Research Network Convening (LERN 2026)

What Works When for Whom Under What Conditions: Learning Engineering as an Enabler of Component-based Research

Chris Dede

Abstract

Component-based research (CBR) is a methodological strategy centered on studying the features and processes of innovations that contribute to desired outcomes for specific types of learner populations, conditions, and contexts. CBR focuses on precisely defined elements of overall innovations, called components, that can be studied individually or in clusters across multiple implementation sites. Compatible language in describing data, specificity in delineating the components of innovations, and combining data across studies in a principled manner are all attributes of learning engineering (LE) vital for CBR. Measurements and analyses can then determine not only whether the innovation was effective, but also which parts of the innovation are correlated with outcomes and situations. These methodological advances depend on developing data infrastructures, enabled by artificial intelligence and nested LE, that empower component-based design, implementation, and evaluation. This is essential for personalized learning, as well as for achieving scale through principled adaptation to local conditions for success based on the relative contributions of each component.

Introduction: An Aspirational Vision for Educational Research

The ability of educational research to answer the question, “What parts of an educational innovation work for whom and under what conditions?” would be a major advance in achieving educational impact (Means, 2022). Instructional designers could then evolve effective innovations targeted to specific settings and learners. Teachers could personalize interventions to individual learners, and administrators could use evidence to determine which innovations should be implemented in their context. Technical assistance providers could aid decision makers in applying the most recent research insights to their unique setting, and policy makers could develop evidence-based targeted strategies for support, regulation, and continuous improvement. Funders could allocate their resources based on proven strategies for personalization directed towards equity and excellence.

Such a systemic and aligned flow of advances can be achieved via component-based research (CBR). This methodological strategy centers on studying the features and processes of innovations that most contribute to desired outcomes for specific types of learner populations, conditions, and contexts (Century, 2022). In contrast to conventional educational research methods, CBR is based on precisely defined elements of overall innovations, called components, that can be studied individually or in clusters across implementation sites. In its research, CBR requires describing specific attributes of both settings (characteristics, conditions) and innovation participants (sociodemographic factors, self-perceptions, life experiences) that influence the outcomes of the intervention. Another condition for success of CBR is the detailed specification of desired outcomes (Century, Dede, Taylor, Brooks, Han, Scher, & Tutwiler, 2024).

Learning engineering (LE) is essential for achieving these dimensions of successful CBR. Compatible language in describing data, specificity in delineating the components of innovations, and combining data across studies in a principled manner are all attributes of LE vital for CBR. Using the full range of LE principles, attributes, and strategies, educational research can go deeper than determining the average effectiveness of an overall intervention across populations, conditions, and contexts, yielding evidence that supports personalized interventions (Dede, 2019). Measurements and analyses can determine not only whether the innovation was effective, but also which parts of the innovation are correlated with outcomes and situations, as delineated in the related methodology of design-based implementation studies (Sabelli & Dede, 2013). LE provides an infrastructure of strategies, measures, and analytics; technical capabilities for data storage, use, and management; and evolving approaches based on advances in data science and artificial intelligence. All this enables accumulating component-based findings across studies and innovations to develop insights about what works for whom under what conditions.

This paper considers how LE can enable an efficient flow of CBR that scales innovations, from effective use of data in design through implementation studies across different contexts.

Learning Engineering Databases for Component-based Research

Examples from engineering physical artifacts can illustrate the types of component taxonomies and data infrastructures needed for CBR in education. To meet the challenge of building a bridge over a body of water, engineers draw on a well-specified set of components that characterize various types of bridges (e.g., arch, beam, cantilever, suspension). Each type has its strengths and limits depending on the purpose of the bridge (e.g., heavy highway traffic, footbridges), the materials and budget available, and the attributes of the physical setting where the bridge is to be placed (e.g., water shear, substrate under foundations, length, maximum stress, minimizing harmonic vibration). Given the precise specification of these characteristics and components, engineers are able to conduct CBR to determine the conditions under which a suspension bridge will be successful at a particular site and for a designated purpose.

Current educational research struggles to do this type of CBR. Terms like “collaborative learning” are ambiguously defined and often underspecified in scholarly publications about the implementations studied. Further, going from the learning sciences to detailed design protocols is like going from the physics and chemistry involved in a bridge straight to construction-site blueprints; intermediate LE heuristics and strategies (e.g., spaced reinforcement is effective in aiding retention) are essential (Goodell, Kolodner, & Kessler, 2023). Century et al. (2024) provide a detailed delineation of the advances needed to fully apply learning engineering to CBR.

An Example of Applying LE to CBR: The Architecture for AI-Augmented Learning (A4L)

Educational research teams can use LE to design and develop data infrastructures that empower CBR. As an illustration, the National AI Institute for Adult Learning and Online Education (AI-ALOE) is developing a data architecture called A4L (Goel, Thajchayapong, Nandan, Sikka, & Rugaber, 2025). Across implementation sites, this pipeline can collect, combine, and analyze learning data at three levels:

  1. Micro-learning data from individual learning episodes (e.g., clickstream, engagement metrics, cognitive patterns)

  2. Meso-learning data over time (e.g., within-course student performance trends; longitudinal data about collaborative discussions and group work; learning-management-system [LMS] quizzes and feedback)

  3. Macro-learning data about longer-term educational progress (e.g., cross-course performance, multi-source integration across LMS and AI tools)

1EdTech, a standards-setting association, is an important partner in enabling the interoperability of the various systems that collect these types of data. A minimal sketch of how such multi-level records might be represented appears below.
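This sketch is purely illustrative rather than the actual A4L schema: it shows one way the three levels of learning data could be represented as typed records that can be pooled across implementation sites, and all class and field names are hypothetical.

```python
# Hypothetical sketch of multi-level learning records (not the actual A4L schema).
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MicroEvent:
    """A single event within a learning episode, e.g., a clickstream or engagement signal."""
    learner_id: str
    course_id: str
    timestamp: str                                   # ISO 8601 string
    event_type: str                                  # e.g., "click", "pause", "hint_request"
    payload: Dict[str, float] = field(default_factory=dict)


@dataclass
class MesoRecord:
    """Within-course data aggregated over time, e.g., quiz scores and discussion activity."""
    learner_id: str
    course_id: str
    week: int
    quiz_scores: List[float] = field(default_factory=list)
    discussion_posts: int = 0


@dataclass
class MacroRecord:
    """Cross-course progress combined from the LMS and AI tools."""
    learner_id: str
    courses_completed: List[str] = field(default_factory=list)
    cumulative_gpa: float = 0.0
    tool_usage_hours: Dict[str, float] = field(default_factory=dict)
```

In a real pipeline, micro-level events would be aggregated into meso-level records and then into macro-level records, with identifiers standardized (for example, via 1EdTech interoperability specifications) so that data from different LMSs and AI tools can be joined.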

Figure 1.

Architecture for the A4L Data Pipeline

Constructing this data architecture involved a nested LE cycle. At each stage, designers made decisions based on supports similar to a LEED tracker (Totino & Kessler, 2024). As an example, the Visualization Pipeline translates complex data outputs into intuitive, role-specific dashboards (Thajchayapong & Goel, 2025). These dashboards are accessible to instructors, learners, and researchers, and they serve as a key interface for human-AI collaboration in the classroom. Drawing on principles from the learning sciences and cognitive science (Craig et al., 2025), educators use these visualizations to inform instructional design and pedagogical decisions, and learners use customized visualizations to chart their progress and self-regulate their learning processes. The dashboard visualizations also serve as a crucial way of sustaining an ongoing feedback loop between students and teachers, enabling continuous improvement.
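As a further illustration, and not the published Visualization Pipeline API, the sketch below filters records shaped like the hypothetical MesoRecord above into different summaries for an instructor and for a learner; the function and field names are assumptions.

```python
# Hypothetical role-specific views over the MesoRecord sketch above (illustration only).
from statistics import mean
from typing import List


def instructor_view(records: List["MesoRecord"], course_id: str) -> dict:
    """Course-level summary an instructor might see: class-wide quiz results and participation."""
    course = [r for r in records if r.course_id == course_id]
    all_scores = [s for r in course for s in r.quiz_scores]
    return {
        "course_id": course_id,
        "mean_quiz_score": mean(all_scores) if all_scores else None,
        "total_discussion_posts": sum(r.discussion_posts for r in course),
        "learners": len({r.learner_id for r in course}),
    }


def learner_view(records: List["MesoRecord"], learner_id: str) -> dict:
    """Learner-facing summary to support self-regulation: the learner's own trend over weeks."""
    mine = sorted((r for r in records if r.learner_id == learner_id), key=lambda r: r.week)
    return {
        "learner_id": learner_id,
        "weekly_quiz_means": [mean(r.quiz_scores) if r.quiz_scores else None for r in mine],
        "discussion_posts": sum(r.discussion_posts for r in mine),
    }
```

Computing both views from the same underlying records is one way to keep the student-teacher feedback loop consistent: both roles see the same evidence, summarized at different grains.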

The LE strategy for combining various kinds of data collected across time and location can convey insights about various types of components. As an illustration, AI-ALOE uses the Community of Inquiry (COI) framework for attaining motivation and effectiveness in online and blended education (Vaughan, Dell, Cleveland-Innes, & Garrison, 2023). The three components in COI are cognitive presence, teaching presence, and social presence; each is tightly defined and has a standardized assessment. Using A4L, AI-ALOE can do CBR to determine the effect of each presence-type across learning experiences and AI tools. Early-stage research has demonstrated the technical and practical feasibility of this approach (Thajchayapong & Goel, 2025).
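To make the component-level analysis concrete, here is a minimal, hypothetical sketch, not AI-ALOE's actual analysis code, that pools simulated presence scores across sites and regresses a learning outcome on the three COI components so that each component's association with the outcome can be estimated separately.

```python
# Hypothetical component-based analysis: estimate how each COI presence relates to an outcome.
import numpy as np

# Toy pooled data: columns are standardized cognitive, teaching, and social presence scores
# (e.g., from the COI survey); rows are learners; the outcome is a learning measure.
rng = np.random.default_rng(0)
presences = rng.normal(size=(200, 3))                 # [cognitive, teaching, social]
assumed_weights = np.array([0.5, 0.3, 0.1])           # effects assumed only to generate toy data
outcome = presences @ assumed_weights + rng.normal(scale=0.2, size=200)

# Ordinary least squares with an intercept: each coefficient approximates one component's
# association with the outcome, holding the other presences constant.
X = np.column_stack([np.ones(len(outcome)), presences])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
intercept, cognitive_b, teaching_b, social_b = coef
print(f"cognitive={cognitive_b:.2f}, teaching={teaching_b:.2f}, social={social_b:.2f}")
```

In practice, such a model would also include descriptors of learners, conditions, and contexts (and their interactions with the components), so that the estimates speak to "for whom, under what conditions" rather than only to average effects.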

Achieving Personalization at Scale through CBR

The aspirational vision of CBR data pipelines such as A4L is to enable powerful strategies for personalization in learning based on understanding which combinations of components work best for whom, under what conditions. The use of generative AI is enabling new opportunities for achieving this goal. As an illustration, the analytics underlying CBR are enriched by natural language processing, and the emergence of AI-enhanced tools can enable digital communities of practice in which AI assistants can share information and insights about individual learners across disparate learning experiences and extended timeframes.

Strategies for personalization are crucial to achieving scale for any learning innovation. Coburn (2003) defined scaling a successful innovation as encompassing four interrelated dimensions. These are 1) depth of changes to classroom practice and expectations; 2) sustainability of maintaining those changes over time; 3) spread to other practice contexts; and 4) shift in ownership of the reforms to practitioners. Dede (2006) extended Coburn’s framework by adding a fifth dimension: the evolution that develops when users adapt the innovation in ways that influence and reshape the thinking of its designers, leading to a revised core model. By fostering evolution, designers can build practitioner capabilities for effective use, can highlight the role of research as a key component of implementation, and can serve as intermediaries between more general research knowledge and localized practices (Damschroder, Reardon, Widerquist, & Lowery, 2022).

Clarke and Dede (2009) applied this framework for scale to the implementation of a middle school science curriculum across a wide range of sites. The initial stage of the scaling strategy is Depth: Understanding why an innovation works well requires discovering the role of each component in contributing to its effectiveness. Then it is important to establish, through CBR based on LE, the changes in effectiveness that occur if some of these components are weakened or omitted. This frequently happens in adapting for scale, as few contexts will have all the conditions for success that nurtured the initial implementation of the innovation.

Two of the five interrelated dimensions of this scaling framework involve adaptation to personalize the innovation to a local set of conditions. The first is Sustainability: If adopters find that they lack some of the conditions for success in the original program, with insights from CBR they can develop variations of the innovation that better fit their own situation and are sustainable. Like hybrids in agriculture, the adapted innovation may not be as effective as the original version but still can yield valuable outcomes. The second is Spread: It may be necessary, and desirable, to modify a program to reduce the level of resources needed while retaining much of its effectiveness. For example, a highly effective innovation may scale best as a somewhat less powerful but still effective version that requires a more affordable level of professional development. Findings from knowledge diffusion support this piloting strategy, documenting the importance of easily implemented trial versions of an innovation (Dearing et al., 2015).

While this discussion implies a sense of progression, the five dimensions do not articulate a linear movement through phases, since the scaling strategies are interrelated in complex ways. Instead, the framework delineates multiple processes an implementation strategy can utilize in developing sustainable scale (Dymnicki, Trivits, Hoffman, & Osher, 2020). When CBR is achieved through LE, leaders can make educational decisions based on predictions of how well an innovation will function for various types of learners, given the local presence or absence of various conditions for success. Also, CBR studies of a widely scaled innovation can yield insights about the function and impact of particular components, which can inform a broad range of future designs.

Conclusion

The microscope and telescope each enabled collecting and analyzing data that had always been there to develop new findings, such as the role of bacteria in disease and the role of dark matter in the universe. Through AI, LE, and CBR, we now have the equivalent of the microscope and telescope for learning, and exciting new understandings are likely to emerge (Dede & Lidwell, 2023). For example, randomized controlled trials using research methods that measure outcomes as group averages often conclude that an intervention makes no significant difference. With CBR and LE, even if the intervention as a whole has little impact on the group as a whole, researchers can identify components of the intervention that may have a significant effect on particular subgroups. This enables new pathways towards design improvement and personalization.

Acknowledgments

This work was supported by US National Science Foundation grants #2247790 and #2112532 to the National AI Institute for Adult Learning and Online Education (aialoe.org), as well as US National Science Foundation grant #2246543 to the University of Chicago.

References

  1. Century, J. (2022). The case for component-based research. https://datascience.uchicago.edu/wp-content/uploads/2023/03/The-Case-for-Component-based-Research.pdf
  2. Century, J., Dede, C., Taylor, J., Brooks, J., Han, D., Scher, L., & Tutwiler, S. (2024). Component-based research in education: Emerging ideas, possibilities, and next steps. University of Chicago: NSF-funded Working Meeting on Component-based Research. https://osf.io/preprints/edarxiv/vs3qw
  3. Clarke, J., & Dede, C. (2009). Design for scalability: A case study of the River City curriculum. Journal of Science Education and Technology, 18(4), 353-365.
  4. Coburn, C. E. (2003). Rethinking scale: Moving beyond numbers to deep and lasting change. Educational Researcher, 32(6), 3-12. https://doi.org/10.3102/0013189X032006003 
  5. Craig, S. D., Avancha, K., Malhotra, P., C., J., Verma, V., Likamwa, R., Gary, K., Spain, R., & Goldberg, B. (2025). Using a nested learning engineering methodology to develop a team dynamic measurement framework for a virtual training environment. In International Consortium for Innovation and Collaboration in Learning Engineering (ICICLE) 2024 Conference Proceedings: Solving for Complexity at Scale (pp. 115-132). https://doi.org/10.59668/2109.21735
  6. Damschroder, L. J., Reardon, C. M., Widerquist, M. A. O., & Lowery, J. (2022). The updated Consolidated Framework for Implementation Research based on user feedback. Implementation Science, 17(1), 75. https://rdcu.be/dHAEq
  7. Dearing, J.W., Dede, C., Boisvert, D., Carrese, J., Clement, L., Craft, E., Gardner, P., Hyder, J., Johnson, E., McNeel, D., Phiri, J., & Pleil, M. (2015). How educational innovators apply diffusion and scale concepts. In C.-K. Looi & L.-W. Teh (Eds.), Sustaining and Scaling Educational Innovations, pp. 81-104. New York: Springer.
  8. Dede, C. (2006). Scaling up: Evolving innovations beyond ideal settings to challenging contexts of practice. In R.K. Sawyer (Ed.), Cambridge Handbook of the Learning Sciences, pp. 551-566. Cambridge, England: Cambridge University Press.
  9. Dede, C. (2019). Improving efficiency and effectiveness through learning engineering. In C. Dede, J. Richards, & B. Saxberg (Eds.), Learning engineering for online education: Theoretical contexts and design-based examples, pp. 1-14. New York: Routledge.
  10. Dede, C., & Lidwell, W. (2023). Developing a next-generation model for massive digital learning. Education Sciences, 13(8), 845-854. https://doi.org/10.3390/educsci13080845
  11. Dymnicki, A., Trivits, L., Hoffman, C., & Osher, D. (2020). Advancing the use of core components of effective programs: Suggestions for researchers publishing evaluation results. Office of the Assistant Secretary for Planning and Evaluation, US Department of Health and Human Services. https://youth.gov/sites/default/files/ASPE-Brief_Core-Components.pdf
  12. Goel, A., Thajchayapong, P., Nandan, V., Sikka, H., & Rugaber, S. (2025). A4L: An architecture for AI-augmented learning. Computers and Society. https://doi.org/10.48550/arXiv.2505.06314
  13. Goodell J., Kolodner, J., & Kessler, A. (2023). Tools from the learning sciences. In J. Goodell & J. Kolodner (Eds.), Learning Engineering Toolkit, pp. 243-253. New York, NY: Routledge.
  14. Means, B. (2022). Making insights from educational psychology and educational technology research more useful for practice. Educational Psychologist, 57(3), 226–230. https://doi.org/10.1080/00461520.2022.2061974
  15. Sabelli, N., & Dede, C. (2013). Empowering design-based implementation research: The need for infrastructure. In B. J. Fishman, W.R. Penuel, A-R Allen, & B.H. Cheng (Eds.), Design-based implementation research: Theories, methods, and exemplars (National Society for the Study of Education, Volume 112, Issue 2), pp. 464-480. NY, NY: Teachers College, Columbia.
  16. Thajchayapong, P., & Goel, A. K. (2025). Personalized learning through AI-driven data pipeline. Proceedings of the AAAI Symposium Series, 5(1), 111-114. https://doi.org/10.1609/aaaiss.v5i1.35572
  17. Totino, L., & Kessler, A. (2024). “Why did we do that?” A systematic approach to tracking decisions in the design and iteration of learning experiences. The Journal of Applied Instructional Design, 13(2). https://doi.org/10.59668/1269.15630
  18. Vaughan, N.D., Dell, D., Cleveland-Innes, M., & Garrison, R.D. (2023). Principles of blended learning: Shared metacognition and communities of inquiry. Toronto, CA: AU Press.