The Journal of Applied Instructional Design, 13(3)

What Does the Process Look Like?: A Conceptual Framework Outlining a Virtual Simulation and Simulator Design Process for Instructional Designers

Natalie Perez

Abstract

Innovations in digital, interactive learning resources are increasingly being produced for online learning or e-learning. One area growing as a result of this increase is computer-based simulation. Interactive computer simulations and simulators for education can offer learners hands-on opportunities comparable to traditional, in-person learning activities. However, less is known about the design process for creating simulations and simulators as an instructional strategy. Using a method theory approach, this study adapted strategies from Tawfik et al.’s (2019) systematic literature review to deliberately select design-based research (DBR) principles and to collect data from a variety of scholarly theories, concepts, and practices for virtual simulation development, with the aim of creating a conceptual framework for instructional designers. The framework blends canonical DBR models and offers a new perspective on the simulation and simulator design and development process for instructional designers. Simulator designers should consider end-user needs and organizational contexts and goals, engage in multiple cycles of review and development, and integrate a range of experts to enhance the design and development process.

Introduction

E-learning within educational and corporate organizations is not a new phenomenon (Chen, 2008), but during 2020, the COVID-19 global pandemic unexpectedly forced a number of corporations’ workforces online (Torrance, Bozarth & Jackson, 2020). This shift online impacted learning and development programs, as traditional learning settings transitioned to asynchronous, synchronous, or hybrid formats (Torrance, Bozarth & Jackson, 2020). Not surprisingly, the pandemic increased interest in and use of e-learning (Gamage, 2020) and, particularly, virtual simulations (Correa, 2020). Curiosity about emerging technology has expanded to a variety of fields (Torrance, Bozarth & Jackson, 2020), with several industries exploring the use of virtual simulations and simulators to benefit employee performance development (Frank et al., 2022). However, with ever-evolving technological advancements in software and hardware, and persisting interest in simulations and simulators as viable e-learning solutions, there is a gap in the literature pertaining to the simulation/simulator design process (Cernusca & Mallik, 2017; Fink et al., 2021).

Simulation research continues to increase, and many studies illustrate the numerous benefits of using simulations for learning, such as supporting psychomotor skill development, developing contextual knowledge through repeated practice, or improving decision-making processes (Plotzky et al., 2021; Meiers & Russell, 2019). With continued use and development of simulations to aid learning outcomes and support hands-on learning, professional development, and other training needs (Alam, 2023), several studies have used design-based research (DBR) strategies to develop simulations (Momand et al., 2022; Ivens & Oberle, 2020; Baloyi et al., 2017; Hossain et al., 2018). These studies have reinforced the value of using DBR to enhance simulations. However, while scholars have argued for the value of using DBR to enhance simulations, there is a lack of applied research detailing the ways designers can strategically use DBR to create simulations for learning (Cernusca & Mallik, 2017; Badiee & Kaufman, 2015; Koivisto et al., 2018). Guidance can aid designers, since virtual simulation design and development is challenging. For instance, the purpose and function of virtual simulations are unique, and designers can benefit from knowing ahead of time when to use, or not use, a virtual simulation (Salimova et al., 2023; Gmeiner, 2023). Fidelity (Chernikova et al., 2020), context (Carruth, 2017), tasks (Grossman et al., 2009), and scenarios (Pappas, 2016) are also important elements of simulations/simulators, and designers can benefit from targeted insights into the design and development process. 

Differences in learning objectives, contexts, and fields can influence differences in simulator design principles or strategies, but overriding design and development similarities indicate the value of a centralized simulation design-based research framework. For instance, in the medical field, patients are often the key focus of simulations (Salimova et al., 2023), whereas in the aviation field, pilot tasks, visibility, and maneuverability are often key simulation features (Gmeiner, 2023); differences in the purpose and context of simulations often influence the degree of fidelity needed, as well as settings, actors, and scenarios. Regardless, there are a number of similarities across simulation design and development (e.g., intervention goals, learner characteristics, design elements, etc.), whether designing for medical, aviation, law enforcement, customer service, or similar contexts. A centralized design-based research approach that can be adapted to a variety of fields can benefit designers seeking to identify important design-based research practices for simulations/simulators. In addition, with developments in technology such as AR, VR, and generative AI, some studies suggest virtual simulations are increasingly being explored or developed (Shorey & Ng, 2021). Consequently, compared with traditional DBR models, a virtual simulation-based DBR model might be more beneficial to designers seeking to design and develop a simulation/simulator, because it surveys simulation elements embedded in a design-based research framework, such as when simulations might be most beneficial as an intervention, key learner characteristics to consider, types of simulations/simulators, and simulation-design checklists.

The purpose of this article is to illustrate a simulation and simulator design process, using a conceptual design framework built from a synthesis of four DBR models, to provide instructional designers with a systematic method for virtual simulation/simulator design and development. Drawing from literature addressing DBR models, simulation principles, and practitioner experiences, this article provides answers to the question: What does the process look like to build a virtual simulation/simulator for learning? In addition to the framework, strategies, prompts, and reflective questions have been integrated in the article to support instructional designers to think through theoretical strategies, design principles, stakeholder collaboration, review cycles, and other elements pertaining to virtual simulation/simulator creation and implementation. The article has been written in a way that instructional designers working in different organizations, whether within the field of education or beyond, can benefit from its contents. 

The first section of this article provides a brief review of literature pertaining to simulation and simulators for learning as well as design-based research. The following Methodology section briefly details the methods used in generating the article. The next Conceptual Framework section highlights the simulation/simulator design process via a conceptual framework developed for this study. The fourth section outlines the systematic stages of the conceptual framework and provides guidance to support designers with their own virtual simulation/simulator progress; it also offers instructional designers, herein referred to as “designers,” guidance on working with stakeholders. The final section of the article features a conclusion that reinforces ways designers can engage with the framework.   

Literature Review

Simulations and Simulators for Learning 

Simulation-based learning is an instructional strategy benefiting from digital advancement (Whitworth et al., 2018). One benefit of simulations is practice opportunities (Diwakar et al., 2015), as they require individuals to engage in real-life problem-solving without expert guidance (Chernikova et al., 2020). Providing individuals ample practice opportunities can reduce task complexity, mitigate task confusion, and serve as a valuable learning or teaching resource (Grossman et al., 2009; Chernikova et al., 2020). Interactive virtual simulations for performance practice can offer learners hands-on opportunities similar to traditional learning activities (Wästberg et al., 2019), and they allow individuals to engage actively in the learning process (Diwakar et al., 2015). Virtual simulations have been used across industries. For example, in the medical education field, educators have leveraged simulators and virtual reality laboratories to mitigate resource constraints (e.g., logistical support, coordinated training times, training site availability). In other industries, simulators have been used to recreate low-frequency events (e.g., mechanical failure), dangerous situations (e.g., law enforcement), or time-specific situations (e.g., weather-related conditions) (Carruth, 2017). Virtual simulations can also be cost-saving by providing a safe and controlled environment for learners to practice applying critical skills (Carruth, 2017). While research suggests that virtual simulators can provide meaningful learning opportunities for practicing skills and applying knowledge to situations (Grossman et al., 2009), the use and function of virtual simulations vary across fields.

This paper defines the term “simulation” in the context of the educational field. That is, a simulation is defined as a tool that replicates the real-world characteristics of an event or situation (Beaubien & Baker, 2004) and that can be manipulated by participants (Jones et al., 2015; Kaufman & Ireland, 2016; Fink et al., 2021). In contrast, the term “simulator” is defined as a specific type of technological tool (i.e., hardware and/or software) that individuals must interact with either physically or virtually. While research suggests a variety of important elements for simulations, including real-world situations (Davidsson & Verhagen, 2017), genuine interactions (Chernikova et al., 2020), real or virtual objects or persons (Chernikova et al., 2020), environmental settings, whether real-world settings with augmented reality (AR) or virtual settings with virtual reality (VR) (Araujo et al., 2014), and critical thinking and problem-solving elements (Chernikova et al., 2020), less has been documented about how to design and develop a virtual simulation with a simulator for learning or development. There is limited research on designing simulations in relation to instructional strategies (Chernikova et al., 2020), and studies call for more research on design criteria when creating simulations for learning (Cernusca & Mallik, 2017; Badiee & Kaufman, 2015; Koivisto et al., 2018).

Design-Based Research

Design-based research (DBR) is a research methodology that seeks to understand the world and to change it through an interventionist approach (Hoadley & Campos, 2022), often using a number of iterative phases to craft and refine an intervention (Mafumiko, 2006). DBR is similar to action research, as it usually involves problem identification, assessment, and analysis within a learning context, and the implementation and evaluation of a change or intervention to determine whether the problem was addressed (Lewis, 2015; Plomp, 2013). From an empirical perspective, a DBR study uses its iterative phases as treatments that are re-worked and ultimately linked to some sort of hypothesized outcome (Hoadley & Campos, 2022). A DBR study might look similar to a laboratory experiment, where researchers document a baseline, collect data during the many iterative phases, and generate new, refined versions of an intervention for a particular context (Hoadley & Campos, 2022). However, unlike positivistic experiments that seek statistical generalization or causal inference as an outcome, DBR researchers aim to achieve anticipated design outcomes and generate theories, which is related to interpretivist traditions (Legg & Hookway, 2020; Hoadley & Campos, 2022).

DBR is an iterative approach that requires many cycles of design and in-situ assessment and evaluation (Ford et al., 2017). Several canonical DBR models have been generated over the years by researchers such as Reeves (2000), McKenney (2001), Wademan (2007), and Mafumiko (2006); the models created by these researchers illustrate design and development processes that mostly follow the same overarching and cyclical phases: 1) Analysis and exploration, 2) Design and construction, and 3) Evaluation and reflection (Sahasrabudhe, Murthy & Iyer, 2012). While these models follow the same broad phases, each emphasizes different parts of the DBR process (Sahasrabudhe, Murthy & Iyer, 2012). For example, Reeves’s (2000) model focuses on the refinement of an intervention during every feedback stage but provides little clarity on the research cycles. Wademan’s (2007) model illustrates stakeholders' engagement at different stages but does not include information about participant review sizes. McKenney’s (2001) model provides a detailed number of participants during review cycles but does not mention the number of stakeholders or their engagement in a study. Mafumiko’s (2006) model illustrates various stakeholders and their engagement but does not illustrate design guidelines for creating a prototype (Sahasrabudhe, Murthy & Iyer, 2012). Each of these canonical DBR models has strengths, and this article adapts segments of each into a new conceptual framework that offers a holistic design approach to aid instructional designers in building virtual simulations/simulators.

Virtual Simulations and DBR 

Several scholars have used DBR principles and practices to design simulations. In some instances, scholars have argued for the use of DBR to aid in the design and development of authentic learning environments, indicating that simulation-based education (SBE) frameworks are too limiting (Momand et al., 2022; Ivens & Oberle, 2020). In other examples, scholars have reported on the benefits of the iterative and cyclical design features of DBR for improving the learning experience and better supporting practical virtual simulation development (Ivens & Oberle, 2020), theoretical development (Baloyi et al., 2017), methodological alignment, or adjustments to dynamic contexts (Hossain et al., 2018). Although studies draw on the benefits of using DBR for simulation design and development, virtual simulation design principles vary across fields and contexts. For instance, virtual simulation design and development in the medical field often focuses on patient experiences and includes recommendations for differing degrees of simulation fidelity (Salimova et al., 2023), whereas virtual aircraft simulations often focus on high-fidelity simulations to support tasks such as maneuvering or strategies for differing visibility conditions (Gmeiner, 2023). Given the range of contexts, tasks, and fields, designers might benefit from a centralized guide to aid in the determination and selection of key design-based research practices for simulations/simulators. 

Methodology

Conceptual Approach

This article uses a “method theory” approach, a conceptual method introduced by Lukka and Vinnari (2014). Method theory integrates various concepts, streams of literature, and theories (Jaakkola, 2020), leading some to confuse the approach with a systematic literature review (Jaakkola, 2020). What sets this approach apart is its classification of theories or concepts into two areas: 1) a framework for the study, and 2) study data. Four canonical DBR theories constitute the study’s framework, and unlike empirical research, data for this study were drawn from numerous theories and concepts through a process involving the “assimilation and combination of evidence” from external literature (Hirschheim, 2008). 

Theoretical Framework

The theoretical framework for this article is grounded in a set of principles outlined in canonical DBR models created by Reeves (2000), McKenney (2001), Wademan (2007), and Mafumiko (2007). Employing a method theory approach, DBR principles were deliberately selected from the aforementioned theorists because this study sought to propose a strategy that utilizes elements from each of the four DBR models to construct a new framework tailored specifically for simulation and simulator design and development. In other words, the theoretical framework for the study comprises four DBR models (Reeves, 2000; McKenney, 2001; Wademan, 2007; Mafumiko, 2007), while a variety of scholarly theories, concepts, and practices are cited in the study and utilized as its data to inform both the creation of a conceptual framework and practices within each phase of the framework. 

Systematic Literature Review

A systematic literature review was conducted to obtain the data for the study. Several theories and concepts across a range of fields (e.g., business, medicine, etc.) were selected as having the potential to provide practical application for the creation of a framework for the instructional design field. To conduct the review, the researcher adapted Tawfik et al.’s (2019) systematic literature review steps. The steps taken include: 

  1. Conduct a preliminary search. The preliminary search was used to validate ideas within Google Scholar to determine what literature was available regarding DBR, simulations, simulators, and instructional design principles. 

  2. Generate inclusion and exclusion criteria. The inclusion criteria included: 1) any study addressing simulation creation in the context of learning, 2) any study addressing simulator creation in the context of learning, 3) articles addressing the four aforementioned design-based research theorists, 4) articles addressing simulation/simulator design or development and instructional design, 5) English-only articles. The exclusion criteria included: 1) Simulation / simulator studies not within the context of learning, 2) abstract-only articles, 3) articles without full text available, 4) case reports, series, or systematic review studies, 5) non-English articles. 

  3. Use a search strategy. The search strategy centered on Google Scholar, which indexed results from PubMed, Springer, and several education journals (e.g., Science & Education, Educational Technology & Society, etc.). To search in Google Scholar, key descriptors were used across several of the concepts. For instance, when searching for studies addressing simulators, key search terms such as “simulator AND learning [2019 or more recent]” and “design OR simulator OR learning” were used. Iterations were made throughout the search process for each key concept. 

  4. Search databases. The researcher searched for literature through Google Scholar, which snowballed to other journals, as discussed earlier. All articles meeting the study criteria were screened for essential information, then downloaded by the researcher.

  5. Screen titles and abstracts. The researcher conducted a brief screen of the titles and abstracts of the articles identified in the initial search process and cross-referenced the search criteria listed in the second step, to decide whether to include or exclude the articles for the study. 

  6. Download full-text and screen. The researcher downloaded the full articles and screened them against the criteria listed in step two to decide whether to include or exclude the information for the study. 

  7. Data extraction and quality assessment. The researcher reviewed each study and examined elements that related either to 1) DBR frameworks from Reeves’s (2000), McKenney’s (2001), Wademan’s (2007), or Mafumiko’s (2007) models, or 2) design principles for simulation, simulator, or instructional design practices. 

In total, 173 articles were identified, and 85 articles were included in this paper. The process deviated from Tawfik et al.’s (2019) 13-step process; in particular, only seven steps were used to collect data for the study. In addition, a qualitative analysis, rather than a statistical one, was used to examine the identified articles. This intentional deviation aimed to assimilate and combine evidence from external literature for data collection purposes (Hirschheim, 2008). For the qualitative, thematic analysis, Braun and Clarke’s (2019) six-step reflexive thematic analysis was used, based on a deductive coding framework generated from DBR principles (Proudfoot, 2023) and applied top-down to the dataset. The researcher used the deductive codes to tag theories, concepts, or practices from the extracted data. These tags were then grouped into categories, and the categories were further organized into themes. 
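To make the tagging and grouping workflow concrete, the short Python sketch below illustrates, with an invented codebook and invented excerpt text rather than the study's actual data or tooling, how deductive codes derived from the DBR phases could be applied to extracted passages and rolled up into phase-level themes.

```python
from collections import defaultdict

# Hypothetical deductive codebook keyed by DBR phase (labels invented for illustration).
CODEBOOK = {
    "Analysis & Exploration": ["front-end analysis", "plausible solution", "literature review"],
    "Design & Construction": ["prototype", "stakeholder review", "scenario", "design principle"],
    "Evaluation & Reflection": ["feedback", "reflection", "iteration"],
}

# Invented excerpts standing in for passages extracted from reviewed articles.
excerpts = [
    "The team ran a front-end analysis before committing to a simulator.",
    "A second prototype was produced after the stakeholder review.",
    "Post-launch feedback informed a further iteration of the scenario.",
]

def tag_excerpt(text):
    """Return (phase, code) pairs for every codebook term found in the excerpt."""
    lowered = text.lower()
    return [(phase, code) for phase, codes in CODEBOOK.items() for code in codes if code in lowered]

# Group tagged codes under their phase to approximate the category/theme roll-up.
themes = defaultdict(set)
for excerpt in excerpts:
    for phase, code in tag_excerpt(excerpt):
        themes[phase].add(code)

for phase, codes in sorted(themes.items()):
    print(f"{phase}: {sorted(codes)}")
```

In the study itself this coding was performed qualitatively by the researcher; the sketch only mirrors the code-to-category-to-theme logic.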

Output of Review: Conceptual Framework Artifact

After conducting the literature review and extracting and analyzing the articles, the researcher synthesized key themes. In total, the conceptual framework is made up of 3 core phases. Serving as a theoretical framework, the phases and themes were abstracted from the codes produced by the DBR principles outlined by Reeves (2000), McKenney (2001), Wademan (2007), and Mafumiko (2007). Overall, the deductive framework resulted in 3 phases with 7 themes and 20 sub-themes, as presented in Table 1. 

Table 1

Thematic Findings 

DBR Principles | Themes | Sub-Themes
Phase 1. Analysis & Exploration | Scoping | Front-End Analysis; Plausible Solution Identification; Literature Review
Phase 2. Design & Construction | Mapping | Design Intervention Map; Review Map with Stakeholders
 | Curating | Outline Design Principles; Curate Real-World Scenarios
 | Developing | Review and Refine Collected Data; Design Prototype; Develop Initial Prototype
 | Iterating | Conduct First Review; Develop Prototype Two; Conduct Second Review; Develop Prototype Three; Conduct Third Review; Develop Prototype Four
 | Implementing | Finalize Prototype; Launch Prototype
Phase 3. Evaluation & Reflection | Analyzing | Reflect on Feedback; Identify Design Principles; Iterate Further

Merging the literature review thematic outputs and practitioner experiences resulted in the creation of a systematic conceptual framework artifact. This framework artifact describes the design process for building a simulation/simulator for designers across organizations. The framework leverages the results of the thematic analysis as its organizing structure. Researcher experiences influenced the development of anticipated outcomes for each sub-theme in the framework, as illustrated in the following section. 

Overview of Conceptual Framework

Synthesizing canonical DBR models, a conceptual framework for virtual simulation and simulator design was created (refer to Figure 1). This model was adapted from four DBR models (Reeves, 2000; McKenney, 2001; Wademan, 2007; Mafumiko, 2007) and integrates practices for instructional designers across industries. The model contains three distinct phases with suggestions and prompts to aid designers in designing and developing simulations/simulators. 

While the conceptual framework for simulation design has a number of similarities to traditional instructional design frameworks, it offers specific design-based suggestions to aid designers in their design and development processes. Targeted recommendations may aid designers in making decisions or trade-offs specific to virtual simulations/simulators as they work within the framework. For example, budgeting is a critical indicator of whether or not a simulation/simulator is a viable option (Chernikova et al., 2020). This paper recommends designers engage in budgeting and resource discussions as one of their first actions; scoping budgets and resources with regard to virtual simulations is critical to deciding whether a virtual simulation or simulator is a feasible learning option. Other examples of recommendations include guidance for when to use a virtual simulation/simulator, focus areas to consider when conducting a literature review for a simulation/simulator, simulation content curation strategies, and a simulation design checklist. Overall, each phase and theme embedded within the framework offers targeted guidance, support, tips, or recommendations aimed at supporting designers with their simulation/simulator design and development. The following details provide a broad review of each phase, followed by an image of the conceptual framework:

Phase 1. Within this phase, designers should assess and scope the intervention need and determine whether a simulation/simulator is a viable design solution. Designer actions in this phase include conducting a front-end analysis of real-life problems, identifying plausible design solutions, and conducting a literature review.  

Phase 2. This phase contains the bulk of simulation/simulator creation. Designer actions within this phase include mapping, curating, developing, iterating, and implementing a simulation/simulator, while working closely with stakeholders and expert reviewers.  

Phase 3. Within this phase, designers should reflect on lessons learned and scope future intervention needs. Designer actions should include reflecting on feedback, generating theories or design principles, and identifying if further iterations are needed.

Figure 1 

Conceptual Framework for a Virtual Simulation/Simulator Design Process

Designers should engage with this framework and expect to constantly move between the phases. For instance, a designer might work on elements of Phase 1, such as identifying plausible solutions and reviewing literature, while also considering elements of Phase 3, such as evaluation techniques. Regardless of a designer’s design progress, this model invites and expects designers to move iteratively across each phase of the framework.  

Conceptual Framework: A Systematic Simulation/Simulator Design Process 

Designers will notice the conceptual framework does not include recommended steps. While the framework’s phases represent loosely linear procedures, design and development processes are not always neatly packaged into steps. Consequently, while designers are encouraged to move through each phase via the listed series of actions, they might expect to move back and forth between and across phases, based on their unique needs. That said, the first set of actions in Phase 1 (i.e., front-end and learner analyses) should be conducted before any other actions. After conducting these actions, designers may feel at liberty to move through the conceptual framework, based on their needs. 

Phase 1. Analysis & Exploration

In Phase 1, designers should engage in analyzing and exploring the learning needs to identify whether an intervention is needed. Steps in this analysis include identifying learner (herein referred to as “end-user”) characteristics, such as prior knowledge, skills, or abilities, and perceived needs (Brown & Green, 2015; Ambrose et al., 2010; McDonald & West, 2021; Mafumiko, 2006). In addition, designers should identify a list of experts to support the intervention design and development process. During this phase, designers should not look for ways they can use a simulation/simulator to address a problem; rather, designers should consider the range of learning interventions available and only select a simulation/simulator if it is deemed the best intervention based on organizational goals, learning objectives, and end-user needs, among other factors.  

Front-End Analysis of Practical Problems

Starting any instructional design project with a front-end analysis allows designers to determine whether a performance gap exists and, if so, how to close it with a results-driven solution (Lee & Owens, 2004; Matei & Matei, 2014). Problem identification is critical to determining whether a learning solution is warranted and what might be influencing or contributing to the gap (Raible, 2020; Kaufman & Guerra-Lopez, 2013). Components of a front-end analysis include performance, cause, and needs analyses, which provide a structure to identify learner characteristics, understand the problem, and uncover root-cause performance gaps (Dick et al., 2005). 

Identifying the right individuals or groups to collect information from is an important first step in preparing to conduct a front-end analysis. However, in some educational or organizational settings, designers might not always have access to their end-user populations. For instance, designers might be required to design materials for employees or students (i.e., end-users) who have yet to start within an organization. While designers might not always have access to their end-users, designers might have access to former students, current employees or practitioners, internal or external stakeholders, or an organization’s vision, mission, goals, or critical organizational issues (Stefaniak, 2018; Van Tiem et al., 2012). If designers are unable to engage with learners prior to the design and development of an intervention, designers should identify former learners, current employees, managers, stakeholders, practitioners, and any other relevant individuals or groups that may be key information holders to involve in the front-end analysis. 

To conduct a front-end analysis, a variety of instruments can be used or created to collect qualitative and/or quantitative data (e.g., surveys, focus groups, semi-structured interviews, etc.). Examples of quantitative data that designers might want to collect and review, if possible or available, are performance ratings, assessments, exam scores, or new hire/tenured employee performance metrics; numerical information can aid designers in identifying historical trends or performance across student or employee groups. Qualitative inquiry is helpful when trying to learn more about performance contexts or work experiences such as the working environment, task or performance expectations, problem or work-related challenges, or performance needs. Designers should generate a question list and meet with different employee groups (e.g., former students, current employees, managers, stakeholders, practitioners, etc.) to uncover evidence related to the problem and perceived cause(s) of the problem as well as performance gaps (Stefaniak et al., 2020; Chyung, 2008; Harless, 1973). An evaluation of any existing content or materials to identify potential gaps should also be conducted (Vanderhoven et al., 2016); this might include current or past curriculum or materials, job-aids, performance evaluations, etc.
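As a purely hypothetical illustration of how such quantitative records might be summarized, the Python sketch below compares average scores for two invented employee groups against an assumed performance target; a real analysis would use the organization's own metrics, groupings, and thresholds.

```python
from statistics import mean

# Invented performance scores by group (e.g., from assessments or performance ratings).
scores = {
    "new_hires": [62, 58, 71, 65],
    "tenured_employees": [84, 79, 88, 90],
}
TARGET = 80  # assumed performance expectation agreed with stakeholders

for group, values in scores.items():
    avg = mean(values)
    gap = TARGET - avg
    status = "below target" if gap > 0 else "meets target"
    print(f"{group}: average {avg:.1f} ({status}, {gap:+.1f} points vs. target)")
```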

After selecting instruments, designers should create items or questions that focus on the learning or workplace conditions that end-users will be expected to operate in and the tasks they will be required to perform; answers to these questions can help designers more accurately diagnose the problem and experiment with initial design solutions. As previously mentioned, key groups (e.g., learners, former students, current employees, managers, or internal and/or external stakeholders) should be involved in the front-end analysis and asked a variety of questions to uncover information about the problem and potential or perceived performance gaps. The overall outcome of the front-end analysis should provide designers with a better understanding of what the problem is, perceived causes of the problem, goals of the intervention, performance needs, stakeholder performance expectations, and available resources. Refer to Table 2 for example questions to aid front-end analysis with end-users or stakeholders. 

Table 2

Front-End Analysis Questions 

Analysis categories from the HPT Model (Own elaboration based on Van Tiem, Moseley, & Dessinger, 2004), with front-end analysis questions (Own elaboration based on Harless, 1973; Dick, Carey & Carey, 2009; Fink, 2003; Herzberg, 1968)
Performance Analysis
  1. Do we have a problem?
  2. Do we have a performance problem?
  3. How will we know when the problem is solved?
  4. What is the performance problem?
  5. How frequently does the task need to be performed?
Cause Analysis
  1. What are the possible causes of the problem?
  2. What evidence bears on each possibility?
  3. What is the probable cause?
Needs Analysis
  1. What is/are the intervention’s instructional goal(s)?
  2. What gap in knowledge or skill does the intervention fulfill?
  3. How does the course integrate with the organization’s or institution’s goals?
  4. What are the skills and knowledge that students need to achieve the course’s goal(s)?
  5. What is the nature of the subject?
  6. Are there currently any interventions related to the topic?
  7. What is missing, if anything, with the current interventions?
Task Analysis
  1. What tasks are performed?
  2. How frequently are they performed?
  3. How important is each task?
  4. What knowledge is needed to perform the task?
  5. How difficult is each task?
  6. What kinds of training are available?
Organizational Analysis
  1. What performance policies and expectations are in place?
  2. What type of supervision and support is offered?
  3. What types of interpersonal relationship-building opportunities are available?
  4. To what extent is job security available?
  5. What motivational factors are offered (i.e., recognition, growth, advancement, etc.)?
Resource Analysis
  1. What resources are available (technology, team, access to experts, etc.)?
  2. Should we allocate resources to solve the problem?  

Designers can use the questions in Table 2 to aid their front-end analysis efforts, though it is important to note that these questions are not exhaustive. Designers should determine whether new questions or changes to the questions in the table are needed, given their own design situation, to best aid them in identifying practical problems. 

Purpose of virtual simulation. Within the context of virtual simulations, designers need to determine whether the goals of an intervention are to produce work-ready or work-safe end-users (Edgar et al., 2022), practice developing professional identities (Edgar et al., 2022), mitigate resource constraints (Carruth, 2017), or practice within a safe environment that recreates low-frequency or dangerous situations, such as an aircraft crash or a high-risk law-enforcement situation (Carruth, 2017). If the results of the front-end analysis align with any of the aforementioned purposes, a virtual simulation might be a valuable learning strategy (Edgar et al., 2022). 

End-User Analysis

While a front-end analysis is critical to better understanding the problem and helping to inform ways to identify plausible solutions, designers should conduct some form of end-user (i.e., learner) analysis, even if it is based on previous trends or aggregated understanding of previous learners. It is important for designers to obtain as much information about their end-users as possible, rather than relying on assumptions (Fulgencio & Asino, 2021). The focus of this type of analysis is to identify potential end-user characteristics, prerequisite knowledge, skills, or abilities, and attitudinal information (Baaki et al., 2017; Dudek & Heiser, 2017).

If designers are able to gain access to end-users, similar qualitative and quantitative data collection strategies and questioning techniques should be used, as in the front-end analysis. The aim of analyzing end-users is to gain details about the learners such as characteristics, prior knowledge, and demographic information. If practitioners, such as trainers or educators, were not involved in the initial front-end analysis, designers should involve these types of individuals in the analysis, unless an organization does not have practitioners or they are unavailable to participate. Importantly, depending on the organization or institution, it may or may not be appropriate to ask for certain demographic information (i.e., ethnicity, age, gender, etc.). Before collecting demographic information, designers must ensure demographic-based questions are approved by human resources or a related governing group. A key question designers should ask when considering whether demographic information is needed is: What demographic information is critical to the intervention design, if any? Designers can refer to Table 3 to review questions for identifying information about end-users and practitioners.

If designers are unable to gain immediate access to their end-users and/or practitioners, designers should review broader employee or student population data, if possible, to help inform potential end-user characteristics (Baaki et al., 2017). For example, after reviewing employee information, a designer might split end-users into two groups, such as “typical user” and “extreme user,” and decide whether to base the split on one characteristic (e.g., technological literacy) or on two or more characteristics (e.g., competence and performance levels). 
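The sketch below illustrates this kind of segmentation with invented end-user records and an assumed technological-literacy cutoff; it is a minimal example of splitting a population into the two groups on a single characteristic, not a prescribed procedure.

```python
# Hypothetical segmentation of end-users into "typical" and "extreme" groups based on
# one characteristic (self-reported technological literacy, scale 1-5). The records
# and the cutoff are invented for illustration.
end_users = [
    {"id": "A01", "tech_literacy": 4},
    {"id": "A02", "tech_literacy": 2},
    {"id": "A03", "tech_literacy": 5},
    {"id": "A04", "tech_literacy": 1},
]
CUTOFF = 3  # assumed threshold agreed with stakeholders

typical = [u["id"] for u in end_users if u["tech_literacy"] >= CUTOFF]
extreme = [u["id"] for u in end_users if u["tech_literacy"] < CUTOFF]

print("Typical users:", typical)
print("Extreme users:", extreme)
```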

During the end-user analysis, designers should aim to identify as many materials or records as possible related to end-user information. This might include programmatic expectations or job descriptions that list expected learner or employee prerequisite competencies, job duties, etc. 

Table 3

End-User Analysis with Questions

Components of End-User Analysis | Questions for End-Users (Learners) | Questions for End-Users (Practitioners)

End-User Characteristics (Adams Becker et al., 2014; Dick et al., 2009; Jonassen et al., 1999; Fink, 2013)
Questions for end-users (learners):
  1. Who are the learners?
  2. What personal characteristics do these learners possess?
  3. What are the dimensions of the learner?
  4. What contributes to the reason for learning about the topic?
  5. What is the reason for enrolling in the intervention/course?
  6. What is it about the topic that motivates the learner?
  7. How comfortable are learners with technology?
Questions for end-users (practitioners):
  1. Who are the practitioners?
  2. What beliefs and values does the practitioner have about teaching and learning?
  3. What strengths in teaching do the practitioners have?
  4. What time commitment can the practitioners give to intervention development?
  5. How familiar with various technologies and delivery modes are the practitioners?
  6. How comfortable are practitioners using technology?

Prior Knowledge (Ambrose et al., 2010; Cordova et al., 2014; Dochy et al., 2002; Umanath & Marsh, 2014)
Questions for end-users (learners):
  1. What do learners already know?
  2. How might this information contribute to the content and order of what is taught?
Questions for end-users (practitioners):
  1. What level of knowledge does the practitioner have with the topic/subject?
  2. To what extent are the practitioners comfortable teaching the content/topic area?
  3. What pre-requisite skills are learners expected to have prior to the intervention?
  4. What pre-requisite knowledge are learners expected to have prior to the intervention?
  5. What pre-requisite abilities are learners expected to have prior to the intervention?

Demographics (Young, 2014)
Questions for end-users (learners and practitioners):
  1. Where are the learners coming from in terms of their education level, motivation to learn, ethnicity, geography, culture, hobbies, and area of study?

Designers should use the questions in Table 3 to aid in collecting more details about end-users and practitioners. If end-users are unavailable but practitioners are available, designers should ask practitioners the end-user (learner) questions to gain a sense of prior learner characteristics. It is important to note that the questions in Table 3 are not exhaustive, and designers should determine whether new questions or changes to the questions in the table are needed, given their own design needs. 

Overall, designers should synthesize the information from the front-end and end-user analyses to determine whether a learning solution is warranted. If a learning solution is warranted, designers should pay special attention to identifying the purpose of the organizational goals, scope of the need, intended end-user needs, challenges end-users might face, and the desired outcome(s) of the intervention (Raible, 2020). 

End-users and virtual simulation. Within the context of simulations, prior knowledge can be a key indicator for simulation education. In particular, end-users who are already familiar with theoretical concepts might benefit more from simulation education compared to learners who have less awareness of theoretical concepts, as a simulation could contribute to cognitive overload during problem solving without knowledge of higher-order constructs (Kirschner et al., 2006). Other scholars argue early simulation learning might benefit end-users when attempting to gain or restructure higher-order concepts (Boshuizen & Schmidt, 2008). Regardless, depending on the theoretical knowledge of end-users, those with less theoretical knowledge prior to simulation engagement might require more instructional guidance compared to advanced learners with theoretical knowledge (Schmidt et al., 2007). In addition, if a designer is considering a simulation technology such as virtual reality (VR), the designer should be aware of potential end-user sensitivities to head-mounted displays (HMD), histories of motion sickness, or reluctance toward computers or newer technologies (Baniasadi et al., 2020). End-user characteristics and needs should be given careful and thoughtful attention when considering simulations and simulators; designers should use gathered data to make decisions that reflect end-user characteristics, knowledge, and, where possible, demographics.     

Identify Plausible Solution(s)

Based on the results of the front-end analysis, designers should identify one or more initial intervention solutions to meet the needs of the organization and end-users. Designers should illustrate how the identified interventions are plausible solutions that address the identified problem(s), organizational goal(s), and how they might aid in facilitating performance change (Van Tiem et al., 2012). Per Wademan’s (2007) model, this process should be iterative and conducted in collaboration with experts and practitioners (Plomp & Nieveen, 2007). When determining plausible solutions, especially virtual ones, a number of factors must be considered by designers, such as technology accessibility (i.e., hardware and/or software), end-user and practitioner access to required technology, other resource availability, etc. 

A virtual simulation and simulator might be appropriate in situations where there is a low frequency of events, when resource constraints need to be mitigated, when use-cases are time-specific (Carruth, 2017), or when there is a need to prepare for on-the-job performance because incorrect performance on the job could lead to serious consequences (Baker & Jenney, 2023; Chernikova et al., 2020). Virtual simulations and simulators are also viable options if learners need to engage in real-world situations with genuine interactions and real or virtual objects or persons (Baker & Jenney, 2023; Chernikova et al., 2020).

At this point, an assumption is made that a designer has identified a virtual simulation as a viable solution to address a performance problem. Possible virtual simulation solutions might differ by scope (i.e., narrow vs. comprehensive simulation), technology (i.e., a completely virtual or a hybrid simulation experience), or another factor altogether. After selecting possible virtual simulation options, designers should generate a list of the pros and cons of each option by engaging in a form of risk assessment to determine which intervention might be best. Questions to consider include: What plausible risks are associated with each virtual simulation intervention? What are the strengths of each intervention? Are there any alternative options that might be a better intervention, given the risks identified?  
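One lightweight way to structure such a pros-and-cons assessment is a weighted comparison of the candidate options, as in the hypothetical Python sketch below; the option names, criteria, weights, and scores are invented and would normally come from the front-end analysis and stakeholder input.

```python
# Hypothetical weighted comparison of two candidate simulation options. Criteria,
# weights, and 1-5 scores are invented; higher scores mean a better or less risky fit.
options = {
    "narrow VR scenario": {"cost_fit": 2, "end_user_fit": 5, "technical_risk": 3},
    "hybrid desktop simulation": {"cost_fit": 4, "end_user_fit": 4, "technical_risk": 4},
}
weights = {"cost_fit": 0.3, "end_user_fit": 0.5, "technical_risk": 0.2}

for name, ratings in options.items():
    total = sum(weights[criterion] * score for criterion, score in ratings.items())
    print(f"{name}: weighted score {total:.2f} out of 5")
```

Any scoring of this kind is only an input to the conversation; the final choice should still be negotiated with stakeholders, as described next.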

 Designers should work closely with stakeholders to collaboratively decide on whether a simulation/simulator should be created in lieu of a different intervention option. Table 4 offers questions to aid designers in thinking through design solutions. 

Table 4

Initial Design Solution Questions

Design Solution Questions (Own elaboration based on Vafa, 2013; Stefaniak, 2020; adapted for a broader organizational setting)
Initial Design Questions
  1. What constraints are present?
  2. What type of intervention would be most valuable, given the end-users’ needs and business/organizational needs?
  3. What modality of intervention options are available? And, what modality of intervention would be most valuable, given the needs of the intervention?
  4. What resources are available to build this intervention?
  5. What is the timeline for this intervention?
  6. What type of interventions can realistically be created, given the timeline?
Identifying Intervention Goals
  1. What long-term knowledge or skill(s) do your end-users need? 
  2. What broad knowledge or skills should your end-users be able to demonstrate or perform? 
  3. What action verb(s) most clearly align with the performance needed by end-users, after the intervention is completed?
  4. Do the performance goals align with the operational or business needs? 

Like the previous tables, the questions listed in Table 4 are not exhaustive. Designers can use the questions in the table, make changes to them, or generate new questions to meet their needs. Regardless of the questions used, the end result of this process should lead designers to select a viable intervention and prepare them to move on to Phase 2. After an agreement is reached on the initial design solution, designers should ensure they engage a project manager to support further alignment, communications, and project management, or engage in these facets themselves (Wiley, 2018).  

Budgeting for technology with virtual simulation. A related part of initial design solution selection is examining resources and potential project constraints. During this process, designers should identify tools, resources, timelines, and the available project budget (Wiley, 2018). Budgeting or tool constraints are often critical indicators of whether a simulation/simulator is a viable option (Chernikova et al., 2020). For instance, the use of virtual reality (VR) might be beneficial in certain settings, but obtaining high-quality hardware (e.g., efficient graphics cards, accurate tracking systems, high-resolution displays, etc.) can lead to high design and implementation costs, which could make the intervention too expensive for some organizations (Baniasadi et al., 2020). Consequently, designers should scope their access to simulator technology and develop a plan for resource needs to ensure all aspects of their design solution are available before a project is launched. If the tools, resources, and other necessary aspects of a project are available, designers should work closely with stakeholders to ensure a common understanding of the plausible solution (i.e., a virtual simulation) and its goals, and to confirm agreement with the initial design solution (Wiley, 2018). If required aspects of a project are not available, designers should negotiate project needs with stakeholders or make adjustments to the initially selected intervention.  
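A simple feasibility check like the hypothetical sketch below can support these budgeting conversations; the budget figure and line items are invented placeholders rather than recommended costs.

```python
# Hypothetical budget feasibility check for a VR-based simulator. Designers would
# substitute quotes gathered while scoping tools and resources.
budget = 25_000  # assumed approved project budget
estimated_costs = {
    "head-mounted displays": 6_000,
    "workstations and graphics hardware": 9_000,
    "simulation software licenses": 5_500,
    "scenario production (scripting, media)": 7_000,
}

total = sum(estimated_costs.values())
print(f"Estimated total: {total:,}")
if total <= budget:
    print("Within budget: proceed to stakeholder alignment on the proposed solution.")
else:
    print(f"Over budget by {total - budget:,}: renegotiate scope or adjust the intervention.")
```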

Review Literature 

Once a plausible intervention is identified (i.e., a virtual simulation), a focused literature review should be conducted to select appropriate learning theories to use within the intervention (Mafumiko, 2007; Sahasrabudhe, Murthy & Iyer, 2012). For example, consider a designer who is interested in productive failure as a learning theory for virtual simulations. This designer should review external literature to better understand the core components of the theory of productive failure, such as the knowledge and exploration phases (Kapur, 2008, 2016), as well as potential limitations of the learning theory (e.g., the degree of end-user comfort with failure or repeated failure) (Juul et al., 2013). In this example, the designer should also examine literature to determine whether previous DBR or other relevant studies have created a simulation using productive failure as a learning theory, to identify lessons learned, strategies, design principles, or other relevant practices. 

In addition to learning theories, principles related to virtual simulations/simulators should be identified. Designers should look for literature addressing simulation/simulator interventions and explore design principles (e.g., type, technology, interaction, duration, etc.) to aid in developing the intervention (Makransky & Petersen, 2021; Chernikova et al., 2020). Designers should examine literature addressing the impacts of learning on cognitive, emotional, and behavioral processes vis-à-vis simulations, where possible (Juul et al., 2013). Designers might also consider the environmental setting of simulations. For example, virtual simulations can be completely immersive in a VR or HMD environment, or they can include a mixture of role-plays while using a simulator to complete a series of actions or tasks (Makransky & Petersen, 2021; Chernikova et al., 2020); there are a number of options for simulation design. Table 5 provides a list of questions to aid designers in looking for specific elements pertaining to learning theories and simulation/simulator design principles when searching external literature. 

Table 5

Questions to Explore During Literature Review

Literature Review Questions
Learning Theories 
(Own elaboration based on Mafumiko, 2007; Sahasrabudhe, Murthy & Iyer, 2012)
  1. How does learning occur best in virtual simulation/simulator environments?
  2. What factors influence learning in virtual simulation/simulator environments? 
  3. What learning theories have designers used when developing virtual simulations/simulators?
  4. What learning theories are most applicable for virtual simulation/simulators?
  5. What underlying theories have been used for virtual simulation/simulators to aid in organizational training? 
Design Principles for Simulation/Simulator (Own elaboration based on Chernikova et al., 2020)
  1. What design principles have been tested/recommended to create virtual simulations/simulators?
  2. What types of learning modalities are recommended for virtual simulation/simulators?
  3. What are the suggested minimum and maximum durations for virtual simulation/simulator learning? 
  4. What settings/environments might a virtual simulation/simulator be most beneficial for learners?
  5. What types of technology have designers used to produce virtual simulations/simulators? 
  6. What are different ways learners can interact with virtual simulations/simulators? What factors might influence different interaction design choices?
  7. When is guidance or instructional direction useful for learning in virtual simulation/simulators?  

Designers can use these questions to guide their research process, as needed. Overall, a literature review is key to aiding designers in considering design, learning, and simulation principles. Designers should make every effort to ensure the learning theories and simulation/simulator design principles align with the practical problems identified during the front-end and end-user analyses. The literature review process should help designers select at least one learning theory to use as a framework for designing and developing a virtual simulation/simulator, as well as several virtual simulation/simulator design principles. 

Phase 2. Design & Construction

The information in Phase 2 is written based on the assumption that a designer has selected a virtual simulation (i.e., scenario) and simulator (i.e., the technological tool used for interaction in the simulation) as a design solution. There are several important aspects related to the design and construction of a simulation/simulator that designers should contemplate in this phase, including initial design elements, prototype development, expert-review cycles, and the number of iterations required before implementation. This phase is the largest and most labor-intensive phase of the framework. It has multiple outputs, including an outline of the intervention, expert reviews used to develop the simulation, curated use-cases, and a simulation/simulator artifact.  

Design an Intervention Map 

Intervention or curriculum mapping is a strategy used to design and link outcomes with relevant learning materials (O’Rourke et al., 2019; Ambrose et al., 2010; Harden, 2001). Creating a high-level intervention outline that maps the overall intervention goals and specific learning outcomes (LOs), as well as intervention materials, can make proposed design solutions transparent for stakeholders, management, and experts. A well-crafted intervention map should integrate learner needs and connect the LOs, assessments, activities, and instructional materials back to the organizational goals (Ambrose et al., 2010; Harden, 2001). The learning objectives should directly address the performance gaps identified in the prior analyses (O’Rourke et al., 2019). The focus of the map should include details on the virtual simulation and the relevant instructional materials, activities, and assessments related to the simulation/simulator. Refer to Table 6 for more details on designing intervention or curriculum maps.

Table 6

Intervention Map Template 

Intervention Map with Alignment (Perez, 2020; O’Rourke et al., 2019; Ambrose et al., 2010)

Organization Goals | Learning Objectives (LOs) | Assessment | Activities | Instructional Materials
Goal 1 | Objective 1 | [Describe assessment and include relevant resources and technology needed] | [Describe learning activities and include relevant resources and technology needed] | [Describe instructional materials and include relevant resources and technology needed]
Goal 2 | Objective 2 | [Describe assessment and include relevant resources and technology needed] | [Describe learning activities and include relevant resources and technology needed] | [Describe instructional materials and include relevant resources and technology needed]
 | Objective 3 | [Describe assessment and include relevant resources and technology needed] | [Describe learning activities and include relevant resources and technology needed] | [Describe instructional materials and include relevant resources and technology needed]
Goal 3 | Objective 4 | [Describe assessment and include relevant resources and technology needed] | [Describe learning activities and include relevant resources and technology needed] | [Describe instructional materials and include relevant resources and technology needed]

Designers can use this map or create their own version. Regardless of the format, it is essential that the organizational goal(s) and learning objectives are present in any map. Designers should reference the results of the front-end analysis to identify organizational goals. Each relevant column should be filled in with enough detail to make the map easy to understand and follow, especially for laypersons, who might be stakeholders, managers, staff, etc. Stakeholders and other map reviewers will benefit from clear and concise details, with terms that are spelled out or easy to understand. The result of this activity should be a visual outline of the virtual simulation/simulator intervention with horizontal alignment across each row of the designer's map. 
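For designers who keep their intervention map in a digital form, the hypothetical sketch below shows one way to represent map rows and flag rows that lack the details needed for horizontal alignment; the goal, objective, and material descriptions are invented examples, not prescribed content.

```python
# Minimal, hypothetical representation of intervention map rows (as in Table 6), with a
# check that every row carries the details needed for horizontal alignment.
REQUIRED_FIELDS = ("goal", "objective", "assessment", "activities", "materials")

intervention_map = [
    {
        "goal": "Reduce equipment-handling errors",
        "objective": "Complete the shutdown sequence in the simulator without prompts",
        "assessment": "Scenario-based checklist scored within the simulator",
        "activities": "Guided practice runs with branching scenarios",
        "materials": "Simulator scenario pack and quick-reference job aid",
    },
    {
        "goal": "Reduce equipment-handling errors",
        "objective": "Identify early warning indicators during a simulated fault",
        "assessment": "",  # left blank to show how a missing alignment detail is flagged
        "activities": "Fault-diagnosis scenario with instructor debrief",
        "materials": "Annotated fault log examples",
    },
]

for i, row in enumerate(intervention_map, start=1):
    missing = [field for field in REQUIRED_FIELDS if not row.get(field)]
    if missing:
        print(f"Row {i}: missing {', '.join(missing)}")
    else:
        print(f"Row {i}: objective '{row['objective']}' aligns to goal '{row['goal']}'")
```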

Complete a Review Cycle with Stakeholders 

Once the map has been filled in and completed, designers should conduct a stakeholder review (i.e., with management, experts, and other stakeholders). Stakeholder review sessions are an important element of any type of intervention. The point of the review may vary, depending on the need or function (Mafumiko, 2006). For instance, the first review might be used to gain approval to move forward with the project, or it might be used to gain feedback before another review session. Irrespective of purpose, providing intervention transparency early in the design process can help stakeholders gain a clearer picture of the initial design solution (Harden, 2001) and ensure stakeholders are aligned with the solution (Tran et al., 2021). Gaining alignment earlier in the process can save designers time in the later stages of Phase 2.

Whether it is the first or the last review, relevant feedback should be integrated by designers, or at least considered, before moving forward with prototype creation (Tran et al., 2021). If stakeholders do not approve the intervention map, designers must work closely with them to determine which aspects of the intervention raise concerns. Designers must negotiate with stakeholders to reach a solution; this could mean making minor edits (e.g., changing an assessment) or major revisions (e.g., re-starting the front-end analysis) to the proposed intervention.

Outline Design Principles and Functions 

To ensure the identified design principles and theories are integrated into the virtual simulation, an outline of strategies for their use and function should be created prior to building the intervention. Once designers have received approval to move forward with a simulation/simulator, they should revisit the external literature reviewed in Phase 1. In this activity, designers should select, outline, and explain the theoretical and/or learning principles they will use within the simulation/simulator. Designers can use Table 7 to map and align their final decisions about learning theories, principles, and relevant materials for the virtual simulation/simulator.

Table 7

Mapping Design Principles 

Theory (Vanderhoven et al., 2016) | Principles | Application to Materials
Theoretical or learning concept 1 | Attributed design principle 1 | [Explanation of strategy or use of principle in the intervention]
Theoretical or learning concept 1 | Attributed design principle 2 | [Explanation of strategy or use of principle in the intervention]
Theoretical or learning concept 1 | Attributed design principle 3 | [Explanation of strategy or use of principle in the intervention]
Theoretical or learning concept 2 | Attributed design principle 1 | [Explanation of strategy or use of principle in the intervention]
Theoretical or learning concept 2 | Attributed design principle 2 | [Explanation of strategy or use of principle in the intervention]
Simulation/simulator design concept 1 | Attributed design principle 1 | [Explanation of strategy or use of principle in the intervention]
Simulation/simulator design concept 1 | Attributed design principle 2 | [Explanation of strategy or use of principle in the intervention]
Simulation/simulator design concept 2 | Attributed design principle 1 | [Explanation of strategy or use of principle in the intervention]
Simulation/simulator design concept 2 | Attributed design principle 2 | [Explanation of strategy or use of principle in the intervention]

This map can serve as a reference tool that designers return to as they build their virtual simulation. Designers might reduce or add to the map based on their own design needs. Stakeholders may be interested in an outline or map such as the one illustrated in Table 7; given stakeholder characteristics (e.g., time and interest), designers should determine whether to include the map in a stakeholder review session.
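As an illustration only, the short sketch below shows one way the Table 7 mapping might be captured as structured data so it can be reused during development and, if desired, shared during a review; the example theory, principles, and applications are placeholders drawn loosely from concepts discussed in this article (e.g., productive failure, fidelity), and all names are hypothetical.

# A hypothetical, minimal representation of the Table 7 mapping: each theory or
# simulation design concept lists its attributed principles and how each
# principle will be applied in the intervention materials.
design_principle_map = {
    "Productive failure (learning theory)": {
        "Delay direct instruction": "Learners attempt the scenario before guidance appears.",
        "Compare solution attempts": "A debrief screen contrasts learner actions with expert actions.",
    },
    "Fidelity (simulation design concept)": {
        "Match fidelity to objectives": "Equipment visuals are high fidelity; background detail stays low fidelity.",
    },
}

# Flatten the map into review-ready rows (Theory | Principle | Application).
for theory, principles in design_principle_map.items():
    for principle, application in principles.items():
        print(f"{theory} | {principle} | {application}")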

Curate Real-World Scenarios 

Real-world scenarios, also known as use-cases, are an important component of simulation design (Chernikova et al., 2020). Former end-users or experts (e.g., former students, current employees, practitioners, or managers) can be invaluable resources and should be leveraged to curate real-world examples and simulation scenarios. Curating real-world scenarios is a key building block of simulation development. A well-designed scenario might include a script, a simulation technical document, and equipment needs, with support from individuals such as directors or writers, production staff, simulation technicians, or visual designers (e.g., visual effects specialists) (Harrington & Simon, 2022). Table 8 offers strategies to help designers curate real-world scenarios.

Table 8

Curation of Real-World Scenarios

Real-World Scenario Curation (Own elaboration based on Pandey, 2019; Pappas, 2016)

Stage | Details
Align | Designers should start the real-world content curation process by referencing the results of the front-end analysis, end-user analysis, and intervention map. Focusing on the goals and objectives of the intervention is critical to aligned content curation.
Identify | Designers can curate stories, scenarios, or use-case content from multiple sources, but reliable sources (e.g., experts, former students, tenured employees, or managers) should serve as the primary curation point where possible. Designers should focus on gathering information about an actual situation as well as the actions end-users should take to complete, improve, or change that situation. Qualitative approaches and questioning techniques are ideal for content curation. Designers should aim to collect information from reliable sources that addresses the goals and needs of the simulation (e.g., legal requirements, life-threatening situations, critical procedures) as well as contextual stories or experiences that help shape a real-life scenario for learners (e.g., experiences, tips, best practices, stories about the most challenging and important situations). Designers might collect not only text-based information but also images, videos, or other relevant multimodal information from reliable sources to get a sense of visual, experiential, and tactile details. Designers should continue collecting information until they perceive saturation, that is, until little new or unique information is emerging.
Distill | After collecting the data, designers should work closely with experts to distill the data down to the most relevant and valuable content for end-users. This process might be time-consuming, but it is important to ensure end-users are provided with essential details. During this stage, designers should work with experts to identify not only macro-level aspects of a scenario but also micro-level aspects, such as the specific skills and tasks embedded in the scenario.
Contextualize | Designers should also work with experts to ensure the necessary context is embedded into the content. This might require some content to be transformed so the right information is provided to end-users. Contextualization is critical because it helps ensure the content is rooted in real-world scenarios.
Integrate | Once the content is identified, designers should generate a plan to integrate the information into the simulation. Integration can take multiple forms (e.g., a simulation journey, a problem prompt).

The process of writing a simulation scenario should be systematic and aligned to the intended simulation objectives (Harrington & Simon, 2022). Designers might work towards scenario integration by writing the story or scenario on paper, through digital storyboarding, or by working with a director or writer to build out the scenario. Stakeholders, or the individuals who provided the scenario, might be asked to review the scenario or story and check it for accuracy and relevance. Designers should be responsible for assessing the scenario in relation to the learning objectives. Once the scenario is completed, designers might work with production staff, simulation technicians, or visual designers, if available, to build the scenario into the simulator. In addition to developing the actual scenario, designers might also consider creating a brief introduction that orients end-users to the simulation environment, including the expectations for performance, environment details, equipment, support, and whether the simulation is for educational or assessment purposes (Harrington & Simon, 2022).

If a simulation is being built for globally-based end-users, designers should identify and work with either practitioners or experts within each region or country to aid in sourcing scenario content to align scenarios with end-users’ culture, language, policies, and/or practices. For example, a designer is building a driving simulation/simulator for a large construction vehicle organization. Within the simulation, the on-screen equipment is the same across four countries, but the drivers are required to practice unique traffic laws within each country. If the focus of the simulation is to practice driving, the simulation might need to be designed differently for drivers in one country compared to the next, based on traffic law differences. Designers should engage with practitioners or experts within each country to obtain unique details pertaining to the country’s traffic laws. While traffic laws may be unique, designers might consider selecting and integrating globally relevant real-world scenarios into a simulation, where possible. 

Curating real-world scenarios is important to a well-built simulation. Designers should pay special attention to collecting the “right” scenarios. Once macro-level aspects (e.g., the story, experience, or use-case) are finalized by experts, designers should turn to micro-level aspects, such as the specific tasks or actions that learners should carry out or consider while participating in the simulation. Designers should take time to document the range of tasks and actions identified from the curated content; they should also involve experts to ensure the identified tasks and actions are correct and rooted in practical, real situations that learners might face outside the learning environment.
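To make the documentation of macro- and micro-level aspects concrete, the following minimal sketch shows one hypothetical way a curated scenario, its expected learner actions, and its expert verification status could be recorded; the field names and the construction-vehicle example are illustrative only.

from dataclasses import dataclass, field

@dataclass
class CuratedScenario:
    """Hypothetical record of one curated real-world scenario (see Table 8)."""
    source: str                       # reliable source (e.g., expert, tenured employee)
    situation: str                    # macro-level story or use-case
    expected_actions: list = field(default_factory=list)  # micro-level tasks or actions
    context_notes: str = ""           # contextual details added with experts
    region: str = "global"            # region or country, for localized variants
    expert_verified: bool = False     # set to True once an expert confirms accuracy

scenario = CuratedScenario(
    source="Site supervisor interview",
    situation="Operator repositions a loaded vehicle near an active work zone.",
    expected_actions=["complete pre-operation checks",
                      "apply the local traffic rules",
                      "communicate intent to the ground crew"],
    region="Country A",
)

# Scenarios should not move into the prototype until an expert has verified them.
ready = [s for s in [scenario] if s.expert_verified]
print(f"{len(ready)} scenario(s) ready for integration")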

Develop Initial Prototype 

After generating a curriculum map, corroborating design principles, and curating use-cases, designers should begin designing and developing a simulation/simulator prototype. Several important elements of prototype creation include the following:

Review and reference data for design. Based on the data collected during the first and second phases, designers should reference their intervention and design principle maps, ensuring the goals of the intervention, the objectives of the simulation, the desired learning theories and design principles, the real-world scenario content, and the required tasks or actions are integrated into the simulation. Designers should also reference the desired simulation principles (e.g., duration, authenticity, tools) that were previously selected. All decisions made when building the simulation should be informed by (1) front-end analysis data, (2) defined problems, and (3) curated scenarios, where applicable.

Design/develop initial simulation prototype. There are a number of ways designers can go about designing and developing a simulation. This article does not present an exhaustive list of ways to build a prototype, but it offers guidance on the core elements of the design and development process. In terms of design, there are several key parts, including the scenario(s), constraints, and learner skills or actions.

When designing a simulation, designers should have access to at least one curated real-world scenario. Not all simulations need to be high-fidelity (i.e., closely mirroring real life), but simulations are ideal when they showcase real-life situations (Chernikova et al., 2020), whether by using a story to frame a problem or by presenting a scenario in which learners need to identify a problem, assess a situation, interact with actors, react appropriately to a challenging environment, provide care, complete a required task, or the like (Baker & Jenney, 2023). Well-designed scenarios might be based on auditory, text-based, or visual (e.g., moulage) elements, or a mixture of multimodal elements. Designers should ensure virtual simulation scenarios guide or direct learners to complete the desired objectives (Harrington & Simon, 2022). To build the real-world scenario, designers might need to create scripts, create digital actors or obtain real actors, or generate some type of visual experience. Importantly, the simulation scenario should systematically align to the business goals and learning objectives. Designers should work backwards from the organizational goals and learning objectives and ensure the simulation scenario and the desired learner actions or tasks align to the goals and objectives of the overall intervention.

After selecting a scenario and deciding what the scene or situation will be, designers should identify intended constraints, such as which aspects of the real-world scenario should be included and what information, details, or actions should be left out. In particular, designers should decide which elements of the work or learning environment are most pertinent to the simulation, given the intervention goals and objectives (Hill, 2023). Designers should also decide whether the simulation should include breaks or run as one continuous event.

Once the constraints of the simulation have been defined, the designer should create the specific learning tasks or activities that end-users are required to perform in order to complete the simulation; the tasks or activities are the core events that make up the simulated environment, and it is critical that the simulation present an appropriate degree of difficulty so learners can learn and grow through the experience (Hill, 2023).

Once the scenario(s), constraints, and skills have been defined, a designer should construct the simulation in detail. For a prototype, it is recommended that the initial learning experience be storyboarded and outlined in a document, preferred tool, or web-based software that can showcase a prototype without requiring development of the complete simulation (Harrington & Simon, 2022; Plomp & Nieveen, 2007). Basic prototypes are ideal because designers can share their vision with stakeholders and/or experts and receive quick feedback to aid the development process. In contrast, complete simulations often contain a large number of elements, materials, learner interactions, and actions or tasks. Leveraging experts to help define boundaries, test scenarios, and aid with design aspects through initial storyboarding can help designers confirm they are on the right track and make quick changes before investing too much time and energy in development work that may need to be reworked at a later stage.

Design/develop initial simulator prototype. Building with a simulator is not always necessary during the initial storyboarding work, but the specific type of technological tool (i.e., hardware and/or software) that end-users are expected to interact with, either physically or virtually (Baker & Jenney, 2023; Chernikova et al., 2020), should be referenced in the storyboard or early prototype. Details regarding the technological tool’s use should also be referenced, such as whether the virtual simulation will require a screen, a physical actor or mannequin, or some other form of technology (Baker & Jenney, 2023; Chernikova et al., 2020). Virtual simulators can be low or high fidelity, depending on the degree of authenticity required (Chernikova et al., 2020), and early simulator development should test the degree of fidelity required to identify whether aspects of realism within the simulator are distracting (Hill, 2023).

Options for integrating learning theories or design principles should also be explored with the virtual simulator. For example, suppose a designer decides to use productive failure as a learning theory within a simulation. When integrating this theory into a simulator, the designer might consider whether the available technology can use a virtual agent, rather than a facilitator, to present guidance or feedback to end-users (Chernikova et al., 2020). To support designers with initial prototype development, Table 9 details important elements to consider in the design and development of a simulation and simulator intervention.

Table 9

Design Process for Simulation and Simulator Creation

Simulation Design (Own elaboration based on Tamim et al., 2011; Chernikova et al., 2020)

Component | Content | Degree of Fidelity (Low or High)
Learning Outcomes | [Insert LO(s)]
Required End-User Tasks or Actions | [Define complex skills or actions related to learning outcomes (e.g., communication skills, troubleshooting)]
Intervention | [Insert intervention (e.g., simulation)]
Type of Simulation | [Define context and interaction (e.g., real or virtual object and real or virtual person)]
Type of Simulated Environment | [Briefly detail the nature of the simulated situation and environment involved and describe the extent to which it represents actual practice]
Creation Techniques | [Insert learning theories; design principles]
Instructional Strategy | [Insert strategy (e.g., role play, virtual reality)]
Use-Case/Real-World Examples/Scenarios | [Insert curated use-case from expert(s)]
Duration of Simulation | [Insert duration and consider alignment with use-case timeframe (e.g., 1 hour, 1 day, etc.)]
Role of Facilitator(s) | [Define role of facilitator in simulation]
Role of End-User | [Define role of end-user in simulation]

Simulator Design (Own elaboration based on Tamim et al., 2011; Chernikova et al., 2020)

Component | Content | Degree of Fidelity (Low or High)
Function of Simulator | [Define type of interaction needed to complete simulation; physical or virtual interaction with people or objects]
Required End-User Tasks or Actions | [Define complex skills or actions related to learning outcomes that will be practiced with the simulator (e.g., communication skills, troubleshooting)]
Technology | [Insert technology for simulator creation (e.g., LMS, software, hardware)]
Function of Guidance | [Describe whether guidance will be integrated into the simulator and define the type of guidance (e.g., practitioner or virtual agent)]
Timing of Simulator | [Determine when the simulator should be utilized within the simulation (e.g., beginning, end, throughout)]
Duration of Simulator | [Insert duration of simulator vis-à-vis simulation; also consider alignment with use-case timeframe (e.g., 1 hour, 1 day, etc.)]

Quality criteria. Quality criteria are important in the prototype design process. There are three criteria for prototype design: validity, practicality, and effectiveness (Nieveen, 1999; Mafumiko, 2006). Validity covers three areas: content validity (i.e., the prototype is based on established knowledge), construct validity (i.e., the components of the prototype link together), and task validity (i.e., the prototype contains real-world practice) (Mafumiko, 2006). Practicality is defined as the degree of usability of the prototype, while effectiveness is defined as the degree of alignment with the intended learning outcomes (Mafumiko, 2006). Arguably, all three criteria should be used to support the development process. Designers and stakeholders can use Table 10 to review the prototype outlined in Table 9 and determine simulation/simulator quality against these criteria.

Table 10

Quality Criteria

Validity (Own elaboration based on Mafumiko, 2006)
Topic | Definition | Met or Not Met
Content Validity | Prototype is based on previously established knowledge.
Construct Validity | The components within the prototype link together.
Task Validity | Prototype contains real-world practice opportunities.

Practicality (Own elaboration based on Mafumiko, 2006)
Topic | Definition | Met or Not Met
Degree of Usability | The extent to which the prototype is usable.

Effectiveness (Own elaboration based on Mafumiko, 2006)
Topic | Definition | Met or Not Met
Degree of Alignment | The degree of alignment between the intended learning outcomes and actualized outcomes of the learning experience.
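For designers who prefer a checklist they can update across review cycles, the brief sketch below records whether each Table 10 criterion has been met and summarizes the result for stakeholders; the criterion labels follow Mafumiko (2006) as listed above, and the remaining names and statuses are illustrative assumptions.

# Hypothetical checklist mirroring Table 10 (Mafumiko, 2006):
# True = met, False = not met, None = not yet reviewed.
quality_checklist = {
    "Content validity": True,     # prototype is based on established knowledge
    "Construct validity": True,   # components within the prototype link together
    "Task validity": None,        # real-world practice opportunities not yet reviewed
    "Practicality": False,        # usability issues reported by reviewers
    "Effectiveness": None,        # alignment with intended outcomes not yet reviewed
}

def summarize(checklist):
    """Group criteria by review status for a stakeholder-facing summary."""
    met = [name for name, status in checklist.items() if status is True]
    not_met = [name for name, status in checklist.items() if status is False]
    pending = [name for name, status in checklist.items() if status is None]
    return met, not_met, pending

met, not_met, pending = summarize(quality_checklist)
print("Met:", met)
print("Not met:", not_met)
print("Pending review:", pending)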

Develop Simulation/Simulator

Once initial prototype reviews are conducted with experts and/or stakeholders, designers should integrate feedback and pivot from storyboarding or a basic prototype to creating the simulation using the simulator. At this point, the simulation will likely increase in complexity, and additional elements such as learning assessments, on-the-job applications, or other related activities might need to be developed and included within the simulation/simulator or outside the simulation experience, depending on designer decisions, stakeholder requests, or expert recommendations. Once the first version of the simulation is completed via the simulator, designers should engage in several rounds of review and revision.

Conduct Iterative Review Cycles with Experts and Small Group of End-Users 

Reviews and feedback sessions are among the most important aspects of prototype development. Designers should schedule review sessions with experts, stakeholders, and end-users (Tessmer, 2013) well in advance. Designers might benefit from creating work-back plans, in which development work is planned backwards from a review due date. There are several different review strategies, and it is recommended that designers integrate multiple groups into the review and feedback process to aid prototype iteration and development.

Iterative cycles and feedback. Consistent with DBR research, the designed prototype should go through a series of cyclical iterations with different user groups (Reeves, 2000; Wademan, 2007; Mafumiko, 2006). Following Mafumiko’s (2006) model, several iterative reviews should take place with experts and other relevant groups when designing a new prototype. Given the fluid nature of DBR, review cycles are expected to have a degree of flexibility and evolution (Kennedy-Clark, 2013). Documenting and integrating feedback is critical in the review cycles and prototype iterations, and the number of prototypes may differ depending on project duration and timelines (Tessmer, 2013). Regardless, detailing the changes made in each phase is critical, as designers should be able to showcase the results of iterative evolution to stakeholders after the completion of all review cycles (Vanderhoven et al., 2016). Once reviews are completed, designers should finalize the prototype for larger-scale implementation. Table 11 depicts a process for highlighting the evolution of prototypes based on expert/end-user reviews.

Table 11

Evaluation and Prototype Iteration Visual Representation

Review Cycle (Own elaboration based on Vanderhoven et al., 2016; Mafumiko, 2006)

Prototype 1 (Review):
  • Number of Experts:
  • Areas of Expertise:
  • Number of End-Users (if any):
  • Simulation: [Provide a high-level overview of elements included in the first simulation]
  • Simulator: [Provide a high-level overview of elements included in the first simulator]

Prototype 2 (Revision 1):
  • Number of Experts:
  • Areas of Expertise:
  • Number of End-Users (if any):
  • Simulation: [Provide a high-level overview of elements and list of revisions made from the first simulation]
  • Simulator: [Provide a high-level overview of elements and list of revisions made from the first simulator]

Prototype 3 (Revision 2):
  • Number of Experts:
  • Areas of Expertise:
  • Number of End-Users (if any):
  • Simulation: [Provide a high-level overview of elements and list of revisions made from the second simulation]
  • Simulator: [Provide a high-level overview of elements and list of revisions made from the second simulator]

Prototype 4 (Revision 3):
  • Number of Experts:
  • Areas of Expertise:
  • Number of End-Users (if any):
  • Simulation: [Provide a high-level overview of elements and list of revisions made from the third simulation]
  • Simulator: [Provide a high-level overview of elements and list of revisions made from the third simulator]

Prototype 5 (Revision 4): Release to Learners
  • Simulation: [Provide a high-level overview of the end-product and list of revisions made from the fourth simulation]
  • Simulator: [Provide a high-level overview of the end-product and list of revisions made from the fourth simulator]

Designers can use Table 11 as a review cycle guide to aid them in scheduling and tracking simulation/simulator reviews and feedback. It is important to note that the Prototype 1 review should not be confused with the initial storyboard prototype, which was conducted at an earlier stage of the design and development process. 
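A lightweight log can also help designers document the evolution that Table 11 asks them to report. The sketch below is one hypothetical way to record each cycle’s experts, end-users, and revisions; none of the field names or example values are prescribed by the framework.

from dataclasses import dataclass, field

@dataclass
class ReviewCycle:
    """Hypothetical log entry for one prototype review cycle (see Table 11)."""
    prototype: int                 # 1 = first reviewed prototype, and so on
    num_experts: int
    areas_of_expertise: list = field(default_factory=list)
    num_end_users: int = 0
    revisions_made: list = field(default_factory=list)  # changes since the prior prototype

cycles = [
    ReviewCycle(1, num_experts=3, areas_of_expertise=["content", "UX"]),
    ReviewCycle(2, num_experts=2, areas_of_expertise=["task/process"], num_end_users=5,
                revisions_made=["shortened scenario introduction", "added guidance prompts"]),
]

# A simple change history designers could share with stakeholders after all cycles.
for cycle in cycles:
    print(f"Prototype {cycle.prototype}: {cycle.num_experts} expert(s), "
          f"{cycle.num_end_users} end-user(s); revisions: {cycle.revisions_made or 'initial version'}")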

Review Cycle Considerations

While there is no right or wrong number of reviews, designers should engage in at least two prototype reviews before implementing the final intervention. When integrating review cycles into the instructional design process, designers should determine the type and number of expert reviews needed and the ways data will be collected from expert review sessions. The following sections provide more insight into the review process and its elements.

Review type and amount. The number and type of experts might vary. One DBR study suggested including a variety of different kinds of users/experts (Sahasrabudhe, Murthy & Iyer, 2012). One strategy for determining the type of experts is to consider the prototype evaluation criteria: content validity, construct validity, task validity, degree of practicality, and degree of effectiveness (Dick et al., 2015; Calhoun et al., 2021). Identifying individuals for both the virtual simulation (scenario) and the simulator (technological tool) based on content or topic expertise (Dick et al., 2015); process, action, or task expertise; usability (UX) expertise (Krug, 2014); practical, real-world situation expertise; media or technological (hardware/software) expertise; and instructional design (ID)/learning experience (LX) design expertise might be beneficial in the review cycles. Designers should also consider whether internal or external stakeholders should be incorporated into the review.

Three experts, as recommended by Mafumiko (2006), may or may not cover the range of knowledge and skills needed to provide feedback on the aforementioned items. Documenting the type of expertise involved in each review and the number of experts is valuable for increasing transparency and better identifying the function and feedback of experts (Sahasrabudhe, Murthy & Iyer, 2012). Similarly, conducting reviews with end-users is a popular practice in DBR studies, and the number of end-users within each or specific review cycles may depend on the project scope and access to participants (Calhoun et al., 2021; McKenney, 2001; Mafumiko, 2006; Plomp & Nieveen, 2007). Designers should consider inviting a small group of end-user participants to try out portions of or the entire simulation (Tessmer, 2013; Plomp & Nieveen, 2007). If end-users are not available, designers might leverage current employees, previous learners, or managers; the number of participants in the review cycles should be documented (Sahasrabudhe, Murthy & Iyer, 2012) and shared with stakeholders.

Review delivery strategies. Designers should consider how review or evaluation cycles can increase the quality of a prototype (Mafumiko, 2006; Sahasrabudhe, Murthy & Iyer, 2012). These cycles might be conducted asynchronously (e.g., surveys, questionnaires, diaries) or synchronously (e.g., live reviews, interviews, observations), and individually or in groups, depending on the nature and scope of the simulation and simulator, the availability of experts and end-users, and stakeholder preference (Plomp & Nieveen, 2007). Designers should invite experts purposefully and determine the number of expert participants needed in relation to the number of desired review cycles (Tessmer, 2013; Plomp & Nieveen, 2007).

Implementation

Naturally, after each review cycle, designers should iterate and make changes to the prototype where appropriate and relevant. After the desired number of review sessions and iterations is completed, designers should prepare the prototype for launch (Calhoun, 2021; Tessmer, 2013). This might include working with stakeholders, training teams, managers, and others to ensure the prototype is ready to be implemented within a workforce. Designers should also consider ways to obtain data to best evaluate the prototype, whether through surveys, focus groups, or the like. Evaluation documents and materials should be finalized before the implementation of the simulation/simulator, so data can be captured as soon as possible.

Phase 3. Evaluation & Reflection

The final phase of the conceptual framework, Phase 3, focuses on the review of and reflection on learner data, the selection of key design principles from the design and development process, and the decision of whether to further iterate the simulation and/or simulator.

Analyze and Reflect on Feedback from Users 

Reflection after intervention implementation invites designers to engage in critical research and data analysis (Reeves, 2000) by selecting an evaluation model (Calhoun, 2021). Designers should take advantage of post-implementation data collection to explore how the intervention performed with a larger group of participants. Quantitative and/or qualitative data (e.g., formative/summative assessments, final exams, satisfaction surveys, pre/post-tests) might be collected and analyzed several weeks or months after the end-user experience, depending on the designer’s selected evaluation methodology. There are a number of employee performance and learning evaluation models used across organizations and industries; the Kirkpatrick evaluation model is, arguably, one of the more popular models within organizations (Peck, 2019; Calhoun, 2021). Designers should select an evaluation model that allows them to measure or assess the organizational goals and the simulation learning objectives. After gathering, analyzing, and reflecting on the feedback and lessons learned, designers should create a report that includes key evaluation metrics or details selected by the designer to help stakeholders identify the value of the intervention. In some instances, organizations require data to justify an intervention’s return on investment (Leone, 2020).
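Where pre/post-tests are part of the chosen evaluation model, even a simple gain calculation can feed the stakeholder report described above. The sketch below is illustrative only; the scores, the target threshold, and the function name are placeholders a designer would replace with values tied to the organizational goals and learning objectives.

# Hypothetical pre/post-test scores (percent correct) for a small learner group.
pre_scores = [55, 60, 48, 70, 62]
post_scores = [78, 74, 66, 85, 80]

def average_gain(pre, post):
    """Mean score gain across paired pre/post results."""
    gains = [after - before for before, after in zip(pre, post)]
    return sum(gains) / len(gains)

gain = average_gain(pre_scores, post_scores)
print(f"Average gain: {gain:.1f} percentage points")

# A designer-chosen threshold (placeholder) for deciding whether the result is
# strong enough to report positively against the learning objectives.
TARGET_GAIN = 15
print("Target met" if gain >= TARGET_GAIN else "Consider further iteration")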

Identify Design Principles 

Making a theoretical contribution is a fundamental part of several DBR models. Examples of theoretical contributions include the creation of design guidelines (Reeves, 2000), DBR models (Wademan, 2007), design principles (McKenney, 2001), or a final project with empirical data (Mafumiko, 2006). If designers choose not to share a theoretical contribution externally, they are encouraged to share their findings internally. One way to do so is to create a report that illustrates one or more theoretical contributions and share it with relevant teams or individuals within the organization. Sharing findings, lessons learned, or strategies for the design and development of a simulation/simulator can benefit individuals internally and externally.

Iterate Further

Based on the feedback received, further simulation/simulator iterations may or may not be necessary. Designers should consider the type of feedback received (i.e., whether extensive, moderate, or small revisions were suggested) and cross-reference project resources and timelines, as well as the severity and urgency of feedback integration. Other iterations at this stage could also relate to language translation, globalization, or prototype expansion. Designers should weigh available resources and the severity of the feedback to determine whether additional simulation/simulator iterations are required immediately or can be planned for the future.

Conclusion

Simulation-based learning can be a valuable tool for learners (Whitworth et al., 2018), and given the lack of literature illustrating design and development processes, this article presents a framework that designers can engage with to aid the design and development of a simulation/simulator. While the design and development process is inevitably messy and not always linear, several strategies for using the framework deserve emphasis. First, designers are strongly encouraged to conduct a front-end analysis and end-user analysis as the first step in using the framework. Second, after completing the initial analyses, the elements within each phase of the framework need not be followed in a strictly linear fashion. For example, while engaging in a literature review might benefit designers within their initial solutioning work, as outlined in Phase 1, designers might find value in engaging in a literature review throughout each phase of the framework, rather than exclusively during Phase 1. Third, conducting reviews is critical to each phase of the framework. Although the framework calls for numerous review cycles, engaging in multiple reviews within each phase is crucial for shaping the final simulation/simulator intervention. Simulation/simulator reviews will likely enhance an intervention, and designers should plan review sessions ahead of time to ensure reviewers are not caught off guard by participation requests; planning reviews ahead of time and gaining review buy-in can help designers move through the review process with agility and ease. Overall, designing a simulation/simulator is not an easy task, but systematically approaching the design and development process can help designers break down a seemingly complex and time-consuming endeavor into manageable and meaningful segments.

References

  1. Alam, A. (2023). Leveraging the power of ‘modeling and computer simulation’ for education: an exploration of its potential for improved learning outcomes and enhanced student engagement. In 2023 International Conference on Device Intelligence, Computing and Communication Technologies, (DICCT) (pp. 445-450). IEEE. https://doi.org/10.1109/DICCT56244.2023.10110159  
  2. Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: John Wiley & Sons. https://firstliteracy.org/wp-content/uploads/2015/07/How-Learning-Works.pdf
  3. Araujo, S., P Delaney, C., E Seid, V., R Imperiale, A., B Bertoncini, A., C Nahas, S., & Cecconello, I. (2014). Short-duration virtual reality simulation training positively impacts performance during laparoscopic colectomy in animal model: results of a single-blinded randomized trial: VR warm-up for laparoscopic colectomy. Surgical endoscopy, 28, 2547-2554. https://doi.org/10.1007/s00464-014-3500-3
  4. Badiee, F., & Kaufman, D. (2015). Design evaluation of a simulation for teacher education. Sage Open, 5(2), https://doi.org/10.1177/2158244015592454
  5. Baaki, J., Maddrell, J., & Stauffer, E. (2017). Designing authentic and engaging personas for open education resources designers. International Journal of Designs for Learning, 8(2). https://doi.org/10.14434/ijdl.v8i2.22427
  6. Baker, E., & Jenney, A. (2023). Virtual simulations to train social workers for competency-based learning: A scoping review. Journal of Social Work Education, 59(1), 8-31. https://doi.org/10.1080/10437797.2022.2039819
  7. Baloyi, L. L., Ojo, S. O., & Van Wyk, E. A. (2017). Design and Development of an Interactive Multimedia Simulation for Augmenting the Teaching and Learning of Programming Concepts. International Association for Development of the Information Society. https://api.semanticscholar.org/CorpusID:59479328
  8. Baniasadi, T., Ayyoubzadeh, S. M., & Mohammadzadeh, N. (2020). Challenges and practical considerations in applying virtual reality in medical education and treatment. Oman Medical Journal, 35(3), e125. https://doi.org/10.5001%2Fomj.2020.43
  9. Beaubien, J. M., & Baker, D. P. (2004). The use of simulation for training teamwork skills in health care: how low can you go? BMJ Quality & Safety, 13(suppl 1), i51-i56. https://doi.org/10.1136%2Fqshc.2004.009845
  10. Becker, A, Caswell, S.T., Jensen, M., Ulrich, G., and Wray, E. (2014). Online course design guide. Cambridge, Massachusetts: Massachusetts Institute of Technology. https://static1.squarespace.com/static/502c5d7e24aca01df4766eb3/t/55d4a3e8e4b06c47272a0c3b/1439998952570/Online-Course-Design-Guide.pdf
  11. Boshuizen H. P. A., Schmidt H. (2008). The development of clinical reasoning expertise. In Higgs J., Jones M. A., Loftus S., Christensen N. (Eds.), Clinical reasoning in the health professions (3rd ed., pp. 113–122). Butterworth Heinemann. https://i.clinref.com/data/uploads/books/Clinical-reasoning-in-the-health-professions.pdf
  12. Braun, V. & Clarke, V. (2019). Reflecting on reflexive thematic analysis, Qualitative Research in Sport, Exercise and Health, 11:4, 589-597, https://doi.org/10.1080/2159676X.2019.1628806
  13. Brown, A. H., & Green, T. D. (2015). The essentials of instructional design: Connecting fundamental principles with process and practice. Routledge. https://doi.org/10.4324/9781315757438
  14. Calhoun, C., Sahay, S., & Wilson, M. (2021). Instructional Design Evaluation. In J. K. McDonald & R. E. West (Eds.), Design for Learning: Principles, Processes, and Praxis. EdTech Books. https://doi.org/10.59668/id 
  15. Canhoto, A. I., & Murphy, J. (2016). Learning from simulation design to develop better. experiential learning initiatives: An integrative approach. Journal of Marketing Education, 38(2), 98-106. https://doi.org/10.1177/0273475316643746
  16. Carmichael, P. (2022). How eLearning Can Help Cultivate a Culture of Learning at Your Organization. E-Learning Industry. https://elearningindustry.com/how-elearning-can-help-cultivate-a-culture-of-learning-at-your-organization
  17. Carruth, D. W. (2017). Virtual reality for education and workforce training. 15th International Conference on Emerging eLearning Technologies and Applications (ICETA), pp. 1-6. https://doi.org/10.1109/ICETA.2017.8102472
  18. Cespedes, F. V., Aas, T., Hunt, A., Newton-Hill, H. (2022). Using Simulations to Upskill Employees. Harvard Business Review. https://hbr.org/2022/11/using-simulations-to-upskill-employees
  19. Cernusca, D., & Mallik, S. (2018). Making failure productive in an active learning context: Improved student performance in a pharmaceutics chemistry course. (M. Simonson, C. Schlosser, Eds.). Quarterly Review of Distance Education, 19(2), 37-53. https://www.infoagepub.com/products/Quarterly-Review-of-Distance-Education-19-2
  20. Chen, Edward T. (2008). Successful E-Learning in Corporations, Communications of the IIMA: 8(2), https://doi.org/10.58729/1941-6687.1080
  21. Chen, Z., & Klahr, D. (2008). Remote transfer of scientific-reasoning and problem-solving strategies in children. Advances in child development and behavior, 36, 419-470. https://doi.org/10.1016/S0065-2407(08)00010-4
  22. Correa, D. (2020). Virtual training and simulation market: Actually, a good investment option in current scenario. AP News. https://apnews.com/press-release/wired-release/52491646292e0d895e3c2cc18b4050c8
  23. Cordova, J. R., Sinatra, G. M., Jones, S. H., Taasoobshirazi, G., & Lombardi, D. (2014). Confidence in prior knowledge, self-efficacy, interest and prior knowledge: Influences on conceptual change. Contemporary Educational Psychology, 39(2), 164–174. https://doi.org/10.1016/j.cedpsych.2014.03.006
  24. Caniglia, J. (2019). Active learning - Simulations as a teaching strategy. Kent State University Center for Teaching and Learning. https://www.kent.edu/ctl/simulation-teaching-strategy
  25. Chernikova, O., Heitzmann, N., Stadler, M., Holzberger, D., Seidel, T., & Fischer, F. (2020). Simulation-based learning in higher education: a meta-analysis. Review of Educational Research, 90(4), 499-541. https://doi.org/10.3102/0034654320933544
  26. Chyung, S. Y. (2008). Foundations of instructional and performance technology. Human Resource Development. Amherst, MA: HRD Press. https://scholarworks.boisestate.edu/fac_books/83
  27. Davidsson, P., & Verhagen, H. (2017). Types of simulation. In Simulating Social Complexity (pp. 23-37). Springer, Cham. https://doi.org/10.1007/978-3-319-66948-9_3
  28. Dick, W., Carey, L., & Carey, J.O. (2009). The systematic design of instruction (7th ed). Columbus, Ohio. Pearson. 
  29. Dick, W., Carey, L., & Carey, J. O. (2005). The systematic design of instruction. Pearson/Allyn and Bacon.
  30. Dieker, L. A., Rodriguez, J. A., Lignugaris/Kraft, B., Hynes, M. C., & Hughes, C. E. (2014). The potential of simulated environments in teacher education: Current and future possibilities. Teacher Education and Special Education, 37(1), 21-33. https://doi.org/10.1177/0888406413512683
  31. Diwakar, S., Radhamani, R., Sasidharakurup, H., Kumar, D., Nizar, N., Achuthan, K., & Nair, B. (2015, September). Assessing students and teachers experience on simulation and remote. Biotechnology virtual labs: A case study with a light microscopy experiment. In Second International Conference on E-Learning, E-Education, and Online Training (pp. 44-51). Springer. http://dx.doi.org/10.1007/978-3-319-28883-3_6
  32. Dudek, J., & Heiser, R. (2017). Elements, principles, and critical inquiry for identity-centered design of online environments. Journal of Distance Education (Online), 32(2), 1–18. www.ijede.ca/index.php/jde/article/view/1037
  33. Edgar, A. K., Macfarlane, S., Kiddell, E. J., Armitage, J. A., & Wood-Bradley, R. J. (2022). The perceived value and impact of virtual simulation-based education on students’ learning: a mixed methods study. BMC Medical Education, 22(1), 823. https://doi.org/10.1186/s12909-022-03912-8
  34. Fink, L. D. (2013). Creating significant learning experiences: An integrated approach to designing college courses. John Wiley & Sons.
  35. Fulgencio, J., & Asino, T. I. (2021). Conducting a Learner Analysis. Design for Learning (J. McDonald, R. West, Eds.). https://doi.org/10.59668/id.
  36. Ford, C., McNally, D., & Ford, K. (2017). Using Design-Based Research in Higher Education Innovation. Online Learning, 21(3), 50-67. https://doi.org/10.24059/olj.v21i3.1232 
  37. Gamage, K. A., Wijesuriya, D. I., Ekanayake, S. Y., Rennie, A. E., Lambert, C. G., & Gunawardhana, N. (2020). Online delivery of teaching and laboratory practices: Continuity of university programmes during COVID-19 pandemic. Education Sciences, 10(10), 291. https://doi.org/10.3390/educsci10100291
  38. Gmeiner, F., Yang, H., Yao, L., Holstein, K., & Martelaro, N. (2023, April). Exploring challenges and opportunities to support designers in learning to co-create with AI-based manufacturing design tools. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-20). https://doi.org/10.1145/3544548.3580999
  39. Grossman, P., Compton, C., Igra, D., Ronfeldt, M., Shahan, E., Williamson, P. (2009). Teaching practice: A cross-professional perspective. Teachers College Record, 111(9), 2055–2100. https://doi.org/10.1177/016146810911100905
  40. Harden, R. M. (2001). AMEE Guide No. 21: Curriculum mapping: a tool for transparent and authentic teaching and learning. Medical teacher, 23(2), 123-137. https://doi.org/10.1080/01421590120036547
  41. Harless, J. (1973). An analysis of front-end analysis. Improving Human Performance: A Research Quarterly, 4, 229-244. https://doi.org/10.1002/pfi.4160260204 
  42. Harrington, D. W., & Simon, L. V. (2019). Designing a simulation scenario. Retrieved from https://europepmc.org/article/NBK/nbk547670
  43. Herzberg, F. (1968). One more time: how do you motivate employees? Harvard Business Review, 46(1), 53-63. https://hbr.org/2003/01/one-more-time-how-do-you-motivate-employees
  44. Semler, S. & Hill, C. (2023). Defining effective simulations. LearningSim. https://learningsim.com/designing-effective-simulations/
  45. Hirschheim, R. (2008). Some guidelines for the critical reviewing of conceptual papers. Journal of the Association for Information Systems, 9(8), 432–441. https://aisel.aisnet.org/jais/vol9/iss8/21/
  46. Hoadley, C. & Campos, F. C. (2022). Design-based research: What it is and why it matters to studying online learning, Educational Psychologist, 57:3, 207-220, https://doi.org/10.1080/00461520.2022.2079128
  47. Hossain, Z., Bumbacher, E., Brauneis, A., Diaz, M., Saltarelli, A., Blikstein, P., & Riedel-Kruse, I. H. (2018). Design guidelines and empirical case study for scaling authentic inquiry-based science learning via open online courses and interactive biology cloud labs. International Journal of Artificial Intelligence in Education, 28, 478-507. https://doi.org/10.1007/s40593-017-0150-3 
  48. Issenberg, B.S., Mcgaghie, W. C., Petrusa, E. R., Lee Gordon, D., & Scalese, R. J. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Medical teacher, 27(1), 10-28. https://doi.org/10.1080/01421590500046924
  49. Ivens, S., & Oberle, M. (2020). Does scientific evaluation matter? Improving digital simulation games by design-based research. Social Sciences, 9(9), 155. https://doi.org/10.3390/socsci9090155
  50. Jaakkola, E. (2020). Designing conceptual articles: four approaches. AMS Rev 10, 18–26. https://doi.org/10.1007/s13162-020-00161-0
  51. Jonassen, D.H., Tessmer, M., & Hannum, W.H. (1998). Task analysis methods for instructional design. New York: Routledge. https://doi.org/10.4324/9781410602657
  52. Juul, J., Tran, C., Blair, L., & Dockterman, D. (2013, June). Designing for Productive Failure. In Games, Learning & Society. doi: 10.1184/R1/6686804.v1
  53. Kapur, M. (2016). Examining Productive Failure, Productive Success, Unproductive Failure, and Unproductive Success in Learning, Educational Psychologist, 51:2, 289-299, https://doi.org/10.1080/00461520.2016.1155457
  54. Kapur, M. (2008). Productive failure. Cognition and instruction, 26(3), 379-424. https://doi.org/10.1080/07370000802212669
  55. Kapur, M. (2010). Productive failure in mathematical problem solving. Instructional science, 38(6), 523-550. https://doi.org/10.1007/s11251-009-9093-x
  56. Kapur, M., & Lee, J. (2009). Designing for productive failure in mathematical problem solving. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 31, No. 31). https://doi.org/10.1080/10508406.2011.591717 
  57. Kaufman, R. & Guerra-Lopez, I. (2013). Needs assessment for organizational success. Alexandria, VA: ASTD Press.
  58. Kerrigan, J., Weber, K., & Chinn, C. (2021). Effective collaboration in the productive failure process. The Journal of Mathematical Behavior, 63, 100892. https://doi.org/10.1016/j.jmathb.2021.100892
  59. Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work. Educational Psychologist, 41(2), 75–86. https://doi.org/10.1207/s15326985ep4102_1
  60. Knowles, M. (1984). Andragogy in Action. San Francisco: Jossey-Bass.
  61. Koedinger, K. R., & Aleven, V. (2007). Exploring the assistance dilemma in experiments with cognitive tutors. Educational Psychology Review, 19(3), 239-264. https://doi.org/10.1007/s10648-007-9049-0
  62. Koivisto, J. M., Niemi, H., Multisilta, J., & Eriksson, E. (2017). Nursing students’ experiential learning processes using an online 3D simulation game. Education and Information Technologies, 22(1), 383-398. https://doi.org/10.1007/s10639-015-9453-x
  63. Koivisto, J. M., Haavisto, E., Niemi, H., Haho, P., Nylund, S., & Multisilta, J. (2018). Design principles for simulation games for learning clinical reasoning: A design-based research approach. Nurse Education Today, 60, 114-120. https://doi.org/10.1016/j.nedt.2017.10.002
  64. Krug, S. (2014). Don't Make Me Think, Revisited: A Common Sense Approach to Web Usability, (Vol. 3). Pearson Education.  
  65. Lee, W.W., Owens, D.L. (2004), Multimedia-Based Instructional Design, Second Edition. Pfeiffer.   
  66. Lewis, C. (2015). What is improvement science? Do we need it in education? Educational Researcher, 44(1), 54–61. Retrieved from https://doi.org/10.3102/0013189X1557038
  67. Lateef, F. (2010). Simulation-based learning: Just like the real thing. Journal of Emergencies, Trauma and Shock, 3(4), 348. doi: 10.4103/0974-2700.70743
  68. Lee, J., Kim, H., Kim, K. H., Jung, D., Jowsey, T., & Webster, C. S. (2020). Effective virtual patient simulators for medical communication training: a systematic review. Medical education, 54(9), 786-795. https://doi.org/10.1111/medu.14152
  69. Leone, P. (2020). Driving performance: A model to measure and report training ROI. Training Industries. https://trainingindustry.com/articles/measurement-and-analytics/driving-performance-a-model-to-measure-and-report-training-roi/
  70. Lukka, K., & Vinnari, E. (2014). Domain theory and method theory in management accounting research. Accounting, Auditing & Accountability Journal, 27(8), 1308–1338. https://doi.org/10.1108/AAAJ-03-2013-1265
  71. Mafumiko, F. S. M. (2006). Micro-scale experimentation as a catalyst for improving the chemistry curriculum in Tanzania. University of Twente, Enschede (Thesis). Retrieved from https://ris.utwente.nl/ws/files/6119658/thesis_Mafumiko.pdf
  72. Makransky, G., & Petersen, G. B. (2021). The cognitive affective model of immersive learning (CAMIL): A theoretical research-based model of learning in immersive virtual reality. Educational Psychology Review, 33(3), 937-958. https://doi.org/10.1007/s10648-020-09586-2
  73. Matei, A., & Matei, L. (2014). Instructional design for administrative sciences. A case study for civil servants training. Procedia-Social and Behavioral Sciences, 116, 1930-1933. https://doi.org/10.1016/j.sbspro.2014.01.497
  74. Maran, N. J., & Glavin, R. J. (2003). Low‐to high‐fidelity simulation–a continuum of medical education?. Medical education, 37, 22-28. https://doi.org/10.1046/j.1365-2923.37.s1.9.x
  75. Meiers, J., & Russell, M. J. (2019). An unfolding case study: Supporting contextual psychomotor skill development in novice nursing students. International journal of nursing education scholarship, 16(1), 20180013. https://doi.org/10.1515/ijnes-2018-0013
  76. McDonald, J. K. & West, R. E. (2021). Design for Learning: Principles, Processes, and Praxis (1st ed.). EdTech Books. https://doi.org/10.59668/id
  77. McKinney, W. J. (1997). The educational use of computer based science simulations: some lessons from the philosophy of science. Science & Education, 6, 591-603. https://doi.org/10.1023/A:1008694507127
  78. Momand, B., Hamidi, M., Sacuevo, O., & Dubrowski, A. (2022). The application of a design-based research framework for simulation-based education. Cureus, 14(11). https://doi.org/10.7759/cureus.31804
  79. Nieveen, N. (1999). Prototyping to reach product quality. In Design approaches and tools in education and training (pp. 125-135). Springer, Dordrecht.
  80. O'Rourke, J. A., Relf, B., Crawford, N., & Sharp, S. (2019). Are we all on course? A curriculum mapping comparison of three Australian university open-access enabling programs. Australian Journal of Adult Learning, 59(1), 7-26. https://ro.ecu.edu.au/ecuworkspost2013/6631/
  81. Palominos, E., Levett-Jones, T., Power, T., Alcorn, N., & Martinez-Maldonado, R. (2021). Measuring the impact of productive failure on nursing students' learning in healthcare simulation: A quasi-experimental study. Nurse Education Today, 101, 104871. https://doi.org/10.1016/j.nedt.2021.104871
  82. Pappas, C. (2016). Seven Tips to Curate Amazing eLearning Content. eLearning Industry. https://elearningindustry.com/7-tips-curate-amazing-elearning-content
  83. Pandey, A. (2019). Ten Content Curation Strategies for Corporate Training. eLearning Industry. https://elearningindustry.com/content-curation-strategies-corporate-training
  84. Peck, D. (2019). The Kirkpatrick model of training evaluation (with examples). Devinpeck.com. https://www.devlinpeck.com/content/kirkpatrick-model-evaluation
  85. Perez, N. (2020). The Beauty of Boxes, Lines, and Rows: A Simplified Mapping Plan to Design and Construct with Alignment. Presented at the Instructional Summit Design: Closing the Gap Conference. 
  86. Plomp, T. (2013). Educational design research: An introduction. In T. Plomp & N. Nieveen (Eds.), Educational design research - Part A: An introduction (pp. 10-51). Enschede, Netherlands: Netherlands Institute for Curriculum Development. https://www.slo.nl/international/@4315/educational-design/
  87. Plomp, T., & Nieveen, N. (2013). An introduction to educational design research. In T. Plomp & N. Nieveen (Eds.), Educational design research - Part B: An introduction. Enschede, Netherlands: Netherlands Institute for Curriculum Development. https://www.slo.nl/international/@14467/educational-design-research-part/
  88. Plotzky, C., Lindwedel, U., Sorber, M., Loessl, B., König, P., Kunze, C., ... & Meng, M. (2021). Virtual reality simulations in nurse education: a systematic mapping review. Nurse education today, 101, 104868. https://doi.org/10.1016/j.nedt.2021.104868
  89. Proudfoot, K. (2023). Inductive/deductive hybrid thematic analysis in mixed methods research. Journal of Mixed Methods Research, 17(3), 308-326. https://doi.org/10.1177/15586898221126816
  90. Raible, J. (2020). Supplemental: Instructional Design in the Consulting World. Introduction to Instructional Design. https://pressbooks.pub/itec51602/
  91. Reeves, T.C. (2000) Enhancing the worth of instructional technology research through 'design experiments' and other development research strategies. Symposium on International perspectives on instructional technology research for the 21st century (session 41.29: New Orleans, LA, USA). https://www.researchgate.net/publication/228467769_Enhancing_the_worth_of_instructional_technology_research_through_'Design_Experiments'_and_other_development_research_strategies_online
  92. Real, F.J., DeBlasio D., Beck A.F., et al. (2017). A virtual reality curriculum for pediatric residents decreases rates of influenza vaccine refusal. Acad Pediatr:17(4):431-435. https://doi.org/10.1016/j.acap.2017.01.010
  93. Sahasrabudhe, S. S., Murthy, S., & Iyer, S. (2012). Design based research to create instructional design templates for learning objects. Indian Inst. Technol. Bombay, Mumbai, India, Tech. Rep. DBR4LOS-NFE2013.
  94. Salimova, N. D., Salaeva, M. S., Mirakhmedova Sh, T., & Boltaboev, H. K. (2023). Simulation training in medicine. Journal of Modern Educational Achievements, 3(3), 138-142. https://scopusacademia.org/index.php/jmea/article/view/92
  95. Sanchez, L. M., & Cooknell, L. E. (2017). The Power of 3: Using adult learning principles to facilitate patient education. Nursing2020, 47(2), 17-19. https://doi.org/10.1097/01.nurse.0000511819.18774.85
  96. Sarikoc, G., Ozcan, C. T., & Elcin, M. (2017). The impact of using standardized patients in psychiatric cases on the levels of motivation and perceived learning of the nursing students. Nurse education today, 51, 15-22. https://doi.org/10.1016/j.nedt.2017.01.001
  97. Schwartz, D. L., & Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and instruction, 22(2), 129-184. https://doi.org/10.1207/s1532690xci2202_1
  98. Schmidt H. G., Loyens S. M. M., van Gog T., Paas F. (2007). Problem-based learning is compatible with human cognitive architecture: Commentary on Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 91 97. https://doi.org/10.1080/00461520701263350
  99. Shorey, S., & Ng, E. D. (2021). The use of virtual reality simulation among nursing students and registered nurses: A systematic review. Nurse education today, 98, 104662. https://doi.org/10.1016/j.nedt.2020.104662
  100. Sonderegger-Wakolbinger, L. M., & Stummer, C. (2015). An agent-based simulation of customer multi-channel choice behavior. Central European Journal of Operations Research, 23(2), 459-477. https://doi.org/10.1007/s10100-015-0388-5
  101. Song, Y., & Kapur, M. (2017). How to flip the classroom-" productive failure or traditional flipped classroom" pedagogical design?. Educational Technology & Society, 20(1), 292-305. https://doi.org/10.3929/ethz-b-000128354
  102. Song, Y. (2018). Improving primary students’ collaborative problem solving competency in project-based science learning with productive failure instructional design in a seamless learning environment. Educational Technology Research and Development, 66(4), 979-1008. https://doi.org/10.1007/s11423-018-9600-3
  103. Stefaniak, J. E. (2021). Conducting Needs Assessments to Inform Instructional Design Practices and Decisions. In S. Conklin, B. Oyarzun, R. M. Reese, & J. E. Stefaniak (Eds.), A Practitioner's Guide to Instructional Design in Higher Education. EdTech Books. https://doi.org/10.59668/164.4543
  104. Stefaniak, J. E. (2018). Performance Technology. In R. E. West (Ed.), Foundations of Learning and Instructional Design Technology. EdTech Books. https://doi.org/10.59668/3
  105. Stefaniak, J. E., Reese, R. M., & McDonald, J. K. (2020). Design Considerations for Bridging the Gap Between Instructional Design Pedagogy and Practice. The Journal of Applied Instructional Design, 9(3). https://doi.org/10.51869/93jsrmrjkmd. 
  106. Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 81(1), 4-28. https://doi.org/10.3102/0034654310393361
  107. Tawfik, G.M., Dila, K.A.S., Mohamed, M.Y.F. et al. (2019). A step by step guide for conducting a systematic review and meta-analysis with simulation data. Trop Med Health 47, 46. https://doi.org/10.1186/s41182-019-0165-6
  108. Tessmer, M. (2013). Planning and conducting formative evaluations. Routledge.
  109. Torrance, M., Bozarth, J., & Jackson, J. (2020). Covid-19 and L&D: Present and future. The Learning Guild Research. Learning Solutions Magazine. https://www.learningguild.com/insights/252/covid-19-and-ld-present-and-future/
  110. Tran, L., Sindt, K., Rico, R., & Kohntopp, B. (2021). Working With Stakeholders and Clients. In J. K. McDonald & R. E. West (Eds.), Design for Learning: Principles, Processes, and Praxis. EdTech Books. https://doi.org/10.59668/id
  111. Umanath, S., & Marsh, E. J. (2014). Understanding how prior knowledge influences memory in older adults. Perspectives on Psychological Science, 9(4), 408. https://doi.org/10.1177/1745691614535933
  112. Vafa, S., & Chico, D. E. (2013). A needs assessment for mobile technology use in medical education. International Journal of Medical Education, 4, 230-235. https://doi.org/10.5116/ijme.5259.4a88
  113. Van Tiem, D. M., Moseley, J. L., & Dessinger, J. C. (2004). Fundamentals of performance technology (2nd ed.). Washington, DC: The International Society for Performance Improvement. https://doi.org/10.1002/pfi.4140400313 
  114. Vanderhoven, E., Schellens, T., Vanderlinde, R., & Valcke, M. (2016). Developing educational materials about risks on social network sites: a design-based research approach. Educational technology research and development, 64(3), 459-480. https://www.learntechlib.org/p/193557/ 
  115. Wain, A. (2017). Learning through reflection. British Journal of Midwifery, 25(10), 662-666. https://doi.org/10.12968/bjom.2017.25.10.662
  116. Wang, R., Liu, J., & Yu, Q. (2020). The design and development of virtual simulation experiment for online learning. In 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence (pp. 1-4). https://doi.org/10.1145/3446132.3446164
  117. Wästberg, B.S.; Eriksson, T.; Karlsson, G.; Sunnerstam, M.; Axelsson, M.; Billger, M. Design considerations for virtual laboratories: A comparative study of two virtual laboratories for learning about gas solubility and colour appearance. Educ. Inf. Technol. 2019, 24, 2059–2080. https://doi.org/10.1007/s10639-018-09857-0
  118. Whitworth, K.; Leupen, S.; Rakes, C.; Bustos, M. Interactive computer simulations as pedagogical tools in biology labs. CBE—Life Sci. Educ. 2018, 17, ar46. [CrossRef] [PubMed]. https://doi.org/10.1187/cbe.17-09-0208
  119. Wiley, D. (2018). Project Management for Instructional Designers (1st ed.). EdTech Books. https://doi.org/10.59668/pm4id 
  120. Wilson, L., Chse, C. A., & Wittmann-Price, R. A. (Eds.). (2018). Review manual for the certified healthcare simulation educator exam. Springer Publishing Company.
  121. Young, P. A. (2014). The presence of culture in learning. In Handbook of research on educational communications and technology (pp. 349–361). New York, NY: Springer. https://doi.org/10.1007/978-1-4614-3185-5_28