The Solution: MIRANDA, a Real-Time Learning Architecture
Scholastic assessments have served as the foundation for educators to understand how students develop, refine, and apply new knowledge. Retention and transfer indicators remain critical, but they are strengthened by continuous learning analytics that demonstrate how students reason, reflect, and build disciplinary expertise. Research in game-based learning demonstrates the value of such assessment-informed environments. Karsenti and Parent's (2019) study on teaching history with Assassin's Creed shows that students achieve deeper historical understanding and report increased engagement when instruction integrates interactive, authentic tasks supported by effective teaching practices.
Traditional methods of academic assessment, however, provide feedback only after the learning task is complete. This approach limits students' opportunities to iterate and grow in their learning (Black & Wiliam, 2009). For many students, particularly those who have historically been underserved, it lowers self-efficacy and diminishes agency (Nasir, 2011). When learners cannot gauge their academic progress during an authentic activity, their ability to see themselves as competent learners is undermined.
MIRANDA (Multimodal Intelligent Recognition and Assessment for Next Generation Digital Accreditation) is an AI system designed to capture real-time learning assessment by recording and analyzing a user's learning process during gameplay, and identifying emerging skills and cognitive strategies. MIRANDA provides transparent, actionable insights about growth and competencies, supporting academic mastery and learner identity development.
Inquiry Focus: Investigating Metacognition and Competency Indicators
How can MIRANDA's real-time, transparent embedded assessment help users develop metacognitive awareness and academic self-efficacy during learning activities?
How accurately do MIRANDA's learning-analytics indicators reflect students' demonstrated competencies in problem-solving, collaboration, and conceptual understanding, compared with standardized post-instruction assessments?
Implementation Strategy: Participants, Settings, and the 12-Week Protocol
Research Design: This mixed-methods study examines MIRANDA's impact on learning, metacognitive awareness, and academic competencies. Quantitative analysis of learning-analytics indicators will be triangulated with qualitative interviews, classroom observations, and learner reflections to document behavioral changes and emerging academic identities.
Participants and Setting: The study will be conducted across Mesa Public Schools, Chicago Catholic Schools, and ASU Preparatory Academy, involving approximately 8–12 teachers and 200–350 students in Grades 2–8 engaged in domain-based, exploratory learning activities.
Procedure: Teachers will receive professional learning on integrating MIRANDA within inquiry-based units. During a 10–12-week implementation, students will use MIRANDA in digital learning environments during regular instruction. MIRANDA will analyze student action traces to identify demonstrated competencies and produce a comprehensive assessment of the learning analytics indicators. Educators will access dashboards summarizing learner progress.
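To make the trace-analysis step concrete, the sketch below shows one way an action-trace record and a simple rule-based competency tagger could be structured. The field names, action labels, and tagging rules are illustrative assumptions for this proposal, not MIRANDA's actual schema or classification pipeline, which may rely on richer multimodal models.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical trace record: fields and action labels are assumptions,
# not MIRANDA's actual data model.
@dataclass
class ActionTrace:
    student_id: str
    timestamp: float   # seconds since session start
    action: str        # e.g. "revise_plan", "test_hypothesis"
    context: str       # task or unit identifier

def tag_competencies(traces: List[ActionTrace]) -> Dict[str, int]:
    """Count illustrative competency indicators in a session's traces."""
    indicators = {"planning": 0, "problem_solving": 0, "collaboration": 0}
    for t in traces:
        if t.action in ("revise_plan", "set_goal"):
            indicators["planning"] += 1
        elif t.action in ("test_hypothesis", "retry_after_error"):
            indicators["problem_solving"] += 1
        elif t.action in ("share_artifact", "peer_feedback"):
            indicators["collaboration"] += 1
    return indicators
```

In a production system these hand-written rules would be replaced by learned models, but the same record-then-aggregate pattern would feed the educator dashboards described above.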
Because the study has not yet been implemented, the results described here represent expected outcomes based on prior design work, MIRANDA's functionality, and established learning-science research. MIRANDA is expected to generate substantial learning traces documenting students’ strategies, decisions, and cognitive processes during inquiry-based digital tasks.
We anticipate observing strong correspondence between MIRANDA's automated competency identifications and instructor judgments, indicating the system's effectiveness as an embedded assessment tool. We project that MIRANDA's explanatory feedback will foster engagement, reflection, and self-efficacy, with learners adopting the system's language to describe the strategies they employ, thereby developing emerging competencies. These outcomes would align with research on the benefits of real-time feedback for agency development, particularly for historically underserved learners (Black & Wiliam, 2009; Nasir, 2011).
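One standard way to quantify the anticipated concordance between MIRANDA's labels and instructor judgments is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The function below is a minimal sketch of that statistic; the example labels are hypothetical, and the study might instead use weighted kappa or a dedicated statistics library.

```python
from collections import Counter
from typing import Sequence

def cohens_kappa(rater_a: Sequence[str], rater_b: Sequence[str]) -> float:
    """Cohen's kappa for two raters' categorical labels on the same items."""
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    # Observed proportion of items where the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return 1.0 if expected == 1.0 else (observed - expected) / (1 - expected)
```

For instance, if MIRANDA and a teacher each label four student artifacts as showing ("yes") or not showing ("no") a competency, and they disagree on one item, kappa comes out at 0.5: agreement well above chance but short of perfect.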
Teachers are anticipated to report that MIRANDA provides visibility into cognitive processes that are difficult to capture through traditional assessments. These insights may support differentiated instruction by helping educators identify patterns in planning, problem-solving, and collaboration. Classroom observations are also expected to show increased peer interaction, longer sustained focus, and persistence during MIRANDA-supported sessions. Collectively, the anticipated results will offer preliminary evidence that MIRANDA can deliver reliable real-time analytics, strengthen identity-building feedback, and promote productive learning behaviors.
In conclusion, the forthcoming empirical findings are expected to show that MIRANDA can transform how learning is assessed in digital environments. If the concordance between MIRANDA and teacher assessments is demonstrated as expected, the study will establish early support for the credibility of embedded, real-time evaluation. Such evidence advances the learning-sciences perspective that assessments should capture knowledge-in-action rather than rely solely on post-hoc measures (Pellegrino et al., 2001).
For educators, MIRANDA’s dashboards are expected to reveal patterns of reasoning that are often invisible in classroom settings, enabling more targeted instructional responses. MIRANDA aligns with Black and Wiliam’s (2009) claim that formative assessment is most powerful when feedback is immediate and actionable. By reducing the delay between performance and teacher intervention, MIRANDA may improve the quality of classroom discourse around thinking and strategy use.
For students, MIRANDA will offer reflections that highlight strengths, emerging competencies, and next steps. We anticipate this feedback will support intrinsic motivation, academic confidence, and a clearer understanding of their learning capabilities. In addition, learning engineers and policymakers may come to view MIRANDA as a replicable architecture that aligns with the “Portrait of a Graduate” framework.
Finally, although the comprehensive data for this study are forthcoming, MIRANDA’s projected outcomes point to a shift in assessment from a static judgement to a participatory, formative process that positions all learners as contributors to their own academic growth and development.