Introduction
Advances in educational technology have reshaped how students in higher education access and interact with learning content. Yet this shift also presents a challenge: as learning environments become increasingly digital and self-directed, ensuring that students remain actively engaged becomes more difficult. Within the Active L@S project, this challenge underscores the need for instructional approaches that promote agency and sustained interaction with course material. Mobile learning is one such approach, offering a scalable way to extend learning beyond traditional classroom boundaries. It enables instruction that is flexible, self-paced, and responsive to diverse learner needs, reflecting a broader shift from traditional, in-person models toward technology-enhanced education. While mobile platforms expand access and convenience, their pedagogical impact depends on how well they support meaningful learning. Active learning, defined by student engagement, reflection, and learner agency, has emerged as a critical benchmark because it fosters deeper cognitive processing and improved learning outcomes (Miyatsu, Nguyen, & McDaniel, 2018; Roscoe & Craig, 2022). It is therefore essential to evaluate whether mobile learning tools strategically support learners rather than merely deliver content.
Despite the growing adoption of mobile learning tools, structured, research-based approaches for evaluating their pedagogical alignment remain limited. Existing frameworks tend to prioritize usability, access, and content delivery while overlooking the deeper cognitive and behavioral processes that support sustained learning. As a result, many mobile learning tools remain supplementary rather than strategically integrated into instruction (Rangel-de Lazaro & Duart, 2023), and common barriers, such as insufficient instructional design and lack of alignment with evidence-based strategies, continue to limit their effectiveness (Sophonhiranrak, 2021). Foundational models, such as Ozdamli and Cavus’s (2011) framework for mobile learning elements, highlight the importance of harmonizing learner needs, content delivery, instructional roles, learning environments, and assessment, yet no widely adopted evaluation tool has operationalized these principles for practical use by educators and designers.
Learning Strategy Analysis (LSA) addresses this gap by providing a structured, research-informed methodology for assessing the pedagogical alignment of digital learning systems with evidence-based instructional strategies (Villa, Craig, Zakhidov, & Zielke, 2021). Situated within the Learning Engineering (LE) process, LSA was employed in this study as a nested analytic cycle (Craig et al., 2025) within the Challenge phase to inform the design of a heuristic checklist applicable to mobile learning platforms. Specifically, LSA guided the identification and operationalization of instructional strategies such as elaboration, self-explanation, multimedia practice, and spaced practice, all of which are known to foster deeper engagement and mastery. Using LSA-informed heuristics, checklists, and rubrics, the resulting framework enables systematic identification of pedagogical strengths and gaps in mobile learning applications, supporting evidence-based decision-making and learner-centered design. By integrating LE and LSA, this approach bridges the gap between technological deployment and pedagogical integrity, providing a practical tool for evaluating and improving the instructional quality of mobile learning environments.
This study was conducted collaboratively by two researchers and employed a two-phase methodology to develop a pedagogically grounded framework for evaluating mobile learning applications. Guided by principles of active learning and evidence-based instructional design, the research aimed to create a practical heuristic tool that educators and designers can use to assess whether mobile platforms meaningfully support learner engagement, cognitive processing, and instructional effectiveness.
Phase 1 involved a targeted literature review on mobile learning and active learning strategies to identify relevant pedagogical constructs (e.g., elaboration, note-taking, and multimedia learning) and establish foundational design principles. Phase 2 synthesized these insights into an eight-category heuristic framework covering Active Learning, Learning with Voices/Audio, Content Access, Engagement and Interactivity, Communication and Collaboration, Digital Divide, Instructor Readiness and Feasibility, and Feedback. Each category was further divided into sub-heuristics that include actionable design recommendations. These heuristics function both as evaluative criteria for identifying areas of improvement and as guidelines for designing pedagogically grounded mobile learning tools. While not all strategies must be present in a given system, the inclusion of any single sub-heuristic, or a meaningful combination of them, can enhance active learning.
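For designers who wish to operationalize the framework in software, the category-to-sub-heuristic structure can be captured in a simple data structure. The sketch below is a minimal illustration in Python; the sub-heuristic labels are paraphrased from the heuristic descriptions that follow rather than drawn from the full rubric.

```python
# Minimal sketch of the eight-category framework as a mapping from
# heuristic category to paraphrased sub-heuristic labels (illustrative
# only; the full rubric defines each sub-heuristic in detail).
HEURISTIC_FRAMEWORK: dict[str, list[str]] = {
    "Active Learning": [
        "structured note-taking", "self-explanation prompts",
        "meaningful summarization", "open-ended questioning",
    ],
    "Learning with Voices/Audio": [
        "high-quality audio", "playback controls", "speech-to-text",
    ],
    "Content Access": [
        "cross-device sync", "timestamp-linked navigation",
        "searchable personal archives",
    ],
    "Engagement and Interactivity": [
        "gamification elements", "simulations and applied tasks",
    ],
    "Communication and Collaboration": [
        "peer interaction", "instructor communication",
        "third-party integrations",
    ],
    "Digital Divide": [
        "cross-platform support", "offline functionality",
        "multilingual support",
    ],
    "Instructor Readiness and Feasibility": [
        "onboarding resources", "flexible content authoring",
        "LMS compatibility",
    ],
    "Feedback": [
        "timely feedback", "constructive explanations",
        "learner-controlled depth and frequency",
    ],
}
```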
As outlined above, we developed eight heuristic categories for evaluating mobile learning systems. The following sections define each heuristic and describe its corresponding sub-heuristics.
Active learning refers to instructional methods that engage learners in the process of knowledge construction through varied activities. The Active Learning heuristic emphasizes sub-heuristics that prompt learners to construct knowledge, including note-taking, self-explanation, summarization, and question-answering (Miyatsu, Nguyen, & McDaniel, 2018; Roscoe & Craig, 2022). Mobile systems should support structured, paraphrased note-taking (Van Wyk & Van Ryneveld, 2018); prompt learners to explain concepts in their own words beyond multiple-choice responses (Bisra et al., 2018); enable meaningful summarization rather than copy-paste behavior; and incorporate reflective, open-ended questions that promote elaborative reasoning. Such questions are more effective in assessing student understanding than factual or binary questions (Hernández-de-Menéndez et al., 2019). Together, these strategies foster deep cognitive processing, sustained engagement, and improved learning outcomes.
This heuristic emphasizes the pedagogical value of audio-based learning in mobile contexts. Effective systems should incorporate sub-heuristics such as delivering high-quality audio, providing playback controls (e.g., pause, rewind, and timestamp tagging) that support review and note-taking without disrupting the listening experience (Schroeder et al., 2020), and integrating speech-to-text features for accessibility needs (Craig & Schroeder, 2019). These elements make audio a powerful tool for mobile learning environments.
This heuristic refers to how easily learners can access both platform-provided content and their own user-generated content (e.g., notes, highlights). Mobile systems should offer sub-heuristics such as seamless synchronization across devices, timestamp-linked navigation between notes and content, and personal archives that are searchable and easy to review (Kiewra et al., 1991). Such features support autonomy, revisitation, and reflection, key components of self-regulated learning.
This heuristic focuses on learner engagement (i.e., motivational and behavioral involvement in learning) and interactivity (i.e., the learner’s ability to manipulate content or receive feedback from the system). Interactive elements, when grounded in pedagogy, can help maintain motivation and foster persistence. Gamification elements such as badges or points can enhance motivation (Sophonhiranrak, 2021), while opportunities for simulations or applied tasks help learners connect knowledge to authentic contexts (Rangel-de Lazaro & Duart, 2023; Ozdamli & Cavus, 2011). Together, these features can sustain attention in mobile environments where distractions are common.
This heuristic assesses the system’s capacity to support interpersonal learning, peer exchange, and instructor-learner interaction. Such interaction reinforces understanding, supports metacognition, and builds learning communities. Effective platforms enable peer interaction through shared notes, discussion spaces, or review tools; support instructor communication through messaging or comments; and integrate with third-party tools such as WhatsApp or YouTube to expand opportunities for interaction, fostering learning communities and deeper understanding (Sophonhiranrak, 2021; Rangel-de Lazaro & Duart, 2023; Ozdamli & Cavus, 2011).
This heuristic examines the platform’s capacity to accommodate diverse learners, particularly those with limited digital resources. Systems should function smoothly across a range of devices and operating systems (Sophonhiranrak, 2021; Ozdamli & Cavus, 2011), offer offline functionality for core features, and provide multilingual support (Sophonhiranrak, 2021; Ozdamli & Cavus, 2011). These elements help ensure that mobile learning reduces rather than reinforces inequities.
This heuristic evaluates how easily instructors can adopt and integrate the platform into their teaching practice. Effective systems provide intuitive onboarding resources (Sophonhiranrak, 2021; Rangel-de Lazaro & Duart, 2023), support flexible content authoring, and ensure compatibility with common learning management systems (Ozdamli & Cavus, 2011).
Feedback refers to the quality and timeliness of information provided to learners about their performance and engagement. Mobile systems should offer appropriately timed feedback, clear and constructive explanations (Ruth et al., 2021; Shute, 2008), and options for learners to choose the depth and frequency of the feedback they receive. When given such a choice, learners are more likely to view feedback and engage more positively with the system (Kuklick, 2025). Support for dialogue-based feedback further enhances engagement and reflection. Together, these features promote self-regulation, deeper learning, and sustained use.
Following the heuristics described above, we created a four-level rubric for each sub-heuristic. The four levels are: No Implementation (0), indicating the feature is entirely absent; Partial Implementation (1), indicating minimal or incomplete presence; Adequate Implementation (2), indicating functional but not fully optimized support; and Full Implementation (3), indicating comprehensive, well-integrated pedagogical alignment. For example, the “prompts for self-explanation” sub-heuristic in the Active Learning heuristic ranges from no prompts for learners to articulate understanding, to occasional generic prompts, to regular but non-adaptive prompts, and finally to fully embedded, context-aware prompts that encourage deep elaboration. For a full list of rubrics, check here.
The rubric offers actionable metrics with clear criteria to assess how well an app supports learner engagement, accessibility, and instructional integration. A total score out of 28 indicates overall alignment, with higher scores reflecting stronger support for effective, equitable, and interactive mobile learning experiences.
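As an illustration of how the rubric could be applied programmatically, the sketch below assumes one rating per sub-heuristic on the four-level scale and a simple sum for the overall alignment score; the sub-heuristic names and ratings shown are hypothetical.

```python
from enum import IntEnum

class Implementation(IntEnum):
    """Four-level rubric applied to each sub-heuristic."""
    NONE = 0      # feature entirely absent
    PARTIAL = 1   # minimal or incomplete presence
    ADEQUATE = 2  # functional but not fully optimized
    FULL = 3      # comprehensive, well-integrated alignment

def total_score(ratings: dict[str, Implementation]) -> int:
    """Sum per-sub-heuristic ratings into an overall alignment score."""
    return sum(int(level) for level in ratings.values())

# Hypothetical ratings for a single app on three sub-heuristics.
ratings = {
    "prompts for self-explanation": Implementation.PARTIAL,
    "playback controls": Implementation.FULL,
    "offline functionality": Implementation.NONE,
}
print(total_score(ratings))  # -> 4 (out of a maximum of 9 for these three)
```

A complete evaluation would rate every sub-heuristic in the framework and sum the results to produce the overall alignment score described above.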
This study presents a structured, pedagogically grounded heuristic framework for evaluating mobile learning applications. By integrating evidence-based instructional principles with a four-level evaluative rubric, ranging from no implementation to full implementation, the framework enables systematic assessment of both the presence and quality of learning-supportive features. The application of this framework can demonstrate its practical value as both a diagnostic tool and a design guide, helping educators and developers identify strengths, address gaps, and enhance the instructional integrity of mobile platforms. Ultimately, this approach supports the creation of mobile learning systems that move beyond content delivery to foster deeper engagement, learner autonomy, and meaningful, high-impact learning.
The research reported here was partially supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305T240035 to Arizona State University. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.