EdTech Archives: Proceedings of the Learning Engineering Research Network Convening (LERN 2026)

MIRANDA: Real-Time Learning Analytics for Authentic Embedded Assessment

Mark Ollila, Elina Ollila, & Kenneth Jones

Abstract

Assessments have long played a significant role in academic learning. Yet schools continue to rely on standardized formats and decontextualized measures that assess learners outside the learning process (Pellegrino et al., 2001). Such assessments seldom capture complex cognitive processes such as reflection, problem solving, collaborative reasoning, and adaptive decision-making. These capacities are essential for academic growth and development, preparing students for workforce success.



The Solution: Introducing MIRANDA, a Real-Time Learning Architecture

Scholastic assessments have served as the foundation for educators to understand how students develop, refine, and apply new knowledge. Retention and transfer indicators remain critical, but they are strengthened by continuous learning analytics that demonstrate how students reason, reflect, and build disciplinary expertise. Research in game-based learning demonstrates the value of such assessment-informed environments. Karsenti and Parent’s study on teaching history with Assassin’s Creed shows that students achieve deeper historical understanding and report increased engagement when instruction integrates interactive, authentic tasks supported by effective teaching practices (Karsenti & Parent, 2019).

Furthermore, feedback from traditional academic assessment arrives only after the learning task is complete. This approach limits students' opportunities to iterate and grow in their learning (Black & Wiliam, 2009). For many students, particularly those who have historically been underserved, it lowers self-efficacy and diminishes individual agency (Nasir, 2011). When learners cannot gauge the academic progress they are making during an authentic activity, their ability to see themselves as competent learners is undermined.

MIRANDA (Multimodal Intelligent Recognition and Assessment for Next Generation Digital Accreditation) is an AI system designed to deliver real-time learning assessment by recording and analyzing a user's learning process during gameplay and identifying emerging skills and cognitive strategies. MIRANDA provides transparent, actionable insights about growth and competencies, supporting academic mastery and learner identity development.
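To make the notion of a learning trace concrete, the sketch below shows one possible shape for an event record that an embedded-assessment system of this kind might log during gameplay. The field names, action labels, and competency tags are illustrative assumptions for this sketch, not MIRANDA's actual schema.

```python
from dataclasses import dataclass, field
import time

@dataclass
class TraceEvent:
    """One hypothetical learning-trace event captured during gameplay."""
    learner_id: str
    action: str                      # e.g. "revised_plan", "explained_to_peer"
    timestamp: float = field(default_factory=time.time)
    competency_tags: list = field(default_factory=list)  # e.g. ["problem_solving"]

# Illustrative events from a single inquiry task
events = [
    TraceEvent("s01", "attempted_solution", competency_tags=["problem_solving"]),
    TraceEvent("s01", "revised_plan", competency_tags=["metacognition"]),
    TraceEvent("s01", "explained_to_peer", competency_tags=["collaboration"]),
]
print(len(events))  # 3 events logged for learner s01
```

A stream of such timestamped, tagged events is what would make process-level analysis possible, in contrast to a single post-hoc score.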

Inquiry Focus: Investigating Metacognition and Competency Indicators

  1. How can MIRANDA's real-time, transparent embedded assessment help users develop metacognitive awareness and academic self-efficacy during learning activities?

  2. How accurately do MIRANDA's learning analytics indicators reflect students' evidenced competencies in problem-solving, collaboration, and conceptual understanding compared to standardized post-instruction assessments?

Implementation Strategy: Participants, Settings, and the 12-Week Protocol

Research Design: This mixed-methods study examines MIRANDA's impact on learning, metacognitive awareness, and academic competencies. Quantitative learning analytics trace indicators will be analyzed alongside qualitative interviews, classroom observations, and learners' reflections, providing a fuller picture of behavioral changes and emerging academic identities.

Participants and Setting: The study will be conducted across Mesa Public Schools, Chicago Catholic Schools, and ASU Preparatory Academy, involving approximately 8–12 teachers and 200–350 students in Grades 2–8 engaged in domain-based, exploratory learning activities.

Procedure: Teachers will receive professional learning on integrating MIRANDA within inquiry-based units. During a 10–12-week implementation, students will use MIRANDA in digital learning environments during regular instruction. MIRANDA will analyze student action traces to identify demonstrated competencies and produce a comprehensive assessment of the learning analytics indicators. Educators will access dashboards summarizing learner progress.
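As a simplified illustration of the kind of aggregation a teacher-facing dashboard might perform, the sketch below tallies competency tags across trace events into per-learner indicator counts. The event dictionaries and tag names are invented for this example and do not represent MIRANDA's internal data model.

```python
from collections import Counter, defaultdict

def summarize_competencies(events):
    """Aggregate hypothetical trace events into per-learner competency tag counts."""
    summary = defaultdict(Counter)
    for e in events:
        for tag in e["competency_tags"]:
            summary[e["learner_id"]][tag] += 1
    return {learner: dict(counts) for learner, counts in summary.items()}

# Invented events from two learners in one session
events = [
    {"learner_id": "s01", "competency_tags": ["problem_solving"]},
    {"learner_id": "s01", "competency_tags": ["problem_solving", "collaboration"]},
    {"learner_id": "s02", "competency_tags": ["metacognition"]},
]
print(summarize_competencies(events))
# {'s01': {'problem_solving': 2, 'collaboration': 1}, 's02': {'metacognition': 1}}
```

A real system would of course weight, normalize, and contextualize such counts before surfacing them to educators; the point here is only the mapping from raw traces to dashboard-level indicators.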

The Feedback Loop: Expected Gains in Learner Agency and Self-Efficacy

Because the study has not yet been implemented, the results described here represent expected outcomes based on prior design work, MIRANDA's functionality, and established learning-science research. MIRANDA is expected to generate substantial learning traces documenting students’ strategies, decisions, and cognitive processes during inquiry-based digital tasks.

We anticipate strong correspondence between MIRANDA’s automated competency identifications and instructor judgments, indicating the system’s effectiveness as an embedded assessment tool. We project that MIRANDA’s explanatory feedback will foster engagement, reflection, and self-efficacy, with learners adopting the system’s language to describe the strategies they employ and thereby consolidating emerging competencies. These anticipated outcomes align with research on the benefits of real-time feedback for agency development, particularly for historically underserved learners (Black & Wiliam, 2009; Nasir, 2011).
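One standard way to quantify the anticipated correspondence between automated competency identifications and instructor judgments is Cohen's kappa, a chance-corrected agreement statistic. The sketch below computes it for two hypothetical label sequences; the competency labels and values are invented for illustration and are not study data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels (ps = problem solving, col = collaboration, mc = metacognition)
# assigned by the system and an instructor to the same ten work samples
system     = ["ps", "ps", "col", "mc", "ps", "col", "mc", "ps", "col", "mc"]
instructor = ["ps", "ps", "col", "mc", "ps", "col", "ps", "ps", "col", "mc"]
print(round(cohens_kappa(system, instructor), 2))  # 0.85
```

Reporting a chance-corrected statistic rather than raw percent agreement would guard against overstating concordance when some competency labels dominate.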

Teachers are anticipated to report that MIRANDA provides visibility into cognitive processes that are difficult to capture through traditional assessments. These insights may support differentiated instruction by helping educators identify patterns in planning, problem-solving, and collaboration. Classroom observations are also expected to show increased peer interaction, longer sustained focus, and persistence during MIRANDA-supported sessions. Collectively, the anticipated results will offer preliminary evidence that MIRANDA can deliver reliable real-time analytics, strengthen identity-building feedback, and promote productive learning behaviors.

Implications for Practice: Transforming Knowledge-in-Action into Actionable Data

In conclusion, the forthcoming empirical findings are expected to show that MIRANDA can transform how learning is assessed in digital environments. If the concordance between MIRANDA and teacher assessments is demonstrated as expected, the study will establish early support for the credibility of embedded, real-time evaluation. Such evidence advances the learning-sciences perspective that assessments should capture knowledge-in-action rather than rely solely on post-hoc measures (Pellegrino et al., 2001).

For educators, MIRANDA’s dashboards are expected to reveal patterns of reasoning that are often invisible in classroom settings, enabling more targeted instructional responses. MIRANDA aligns with Black and Wiliam’s (2009) claim that formative assessment is most powerful when feedback is immediate and actionable. By reducing the delay between performance and teacher intervention, MIRANDA may improve the quality of classroom discourse around thinking and strategy use.

For students, MIRANDA will offer reflections that highlight strengths, emerging competencies, and next steps. We anticipate this feedback will support intrinsic motivation, academic confidence, and a clearer understanding of their learning capabilities. In addition, learning engineers and policymakers may recognize MIRANDA as a replicable architecture that aligns with the “Portrait of a Graduate” framework.

Finally, although the comprehensive data for this study are forthcoming, MIRANDA’s projected outcomes point to a shift in assessment from a static judgment to a participatory, formative process that establishes all learners as contributors to their own academic growth and development.

References

  1. Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31.
  2. Gee, J. P. (2007). What video games have to teach us about learning and literacy. Palgrave Macmillan.
  3. Karsenti, T., & Parent, S. (2019). Teaching history with the video game Assassin’s Creed: Effective teaching practices and reported learning. Journal of Educational Research and Practice, 9(1), 20–38.
  4. Nasir, N. (2011). Racialized identities: Race and achievement among African American youth. Stanford University Press.
  5. Pellegrino, J., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. National Academies Press.
  6. Steinkuehler, C., & Squire, K. (2014). Videogames and learning. Review of Educational Research, 84(1), 74–103.