E-learning within educational and corporate organizations is not a new phenomenon (Chen, 2008), but during 2020, the COVID-19 global pandemic unexpectedly forced many corporations’ workforces online (Torrance, Bozarth & Jackson, 2020). This shift impacted learning and development programs, as traditional learning settings transitioned to asynchronous, synchronous, or hybrid formats (Torrance, Bozarth & Jackson, 2020). Not surprisingly, the pandemic increased interest in and use of e-learning (Gamage, 2020) and, particularly, virtual simulations (Correa, 2020). Curiosity about emerging technology has expanded across a variety of fields (Torrance, Bozarth & Jackson, 2020), with several industries exploring the use of virtual simulations and simulators to support employee performance development (Frank et al., 2022). However, with ever-evolving advancements in software and hardware, and persistent interest in simulations and simulators as viable e-learning solutions, there is a gap in the literature pertaining to the simulation/simulator design process (Cernusca & Mallik, 2017; Fink et al., 2021).
Simulation research continues to grow, and many studies illustrate the benefits of using simulations for learning, such as supporting psychomotor skill development, building contextual knowledge through repeated practice, and improving decision-making processes (Plotzky et al., 2021; Meiers & Russell, 2019). With the continued use and development of simulations to aid learning outcomes and support hands-on learning, professional development, and other training needs (Alam, 2023), several studies have used design-based research (DBR) strategies to develop simulations (Momand et al., 2022; Ivens & Oberle, 2020; Baloyi et al., 2017; Hossain et al., 2018). These studies have reinforced the value of using DBR to enhance simulations. However, while scholars have argued for this value, there is a lack of applied research detailing how designers can strategically use DBR to create simulations for learning (Cernusca & Mallik, 2017; Badiee & Kaufman, 2015; Koivisto et al., 2018). Such guidance can aid designers, because virtual simulation design and development is challenging. For instance, the purpose and function of virtual simulations is unique, and designers can benefit from foreknowledge of when to use, or not to use, a virtual simulation (Salimova et al., 2023; Gmeiner, 2023). Fidelity (Chernikova et al., 2020), context (Carruth, 2017), tasks (Grossman et al., 2009), and scenarios (Pappas, 2016) are also important elements of simulations/simulators, and designers can benefit from targeted insights into the design and development process.
Differences in learning objectives, contexts, and fields can lead to differences in simulator design principles or strategies, but overarching design and development similarities indicate the value of a centralized simulation design-based research framework. For instance, in the medical field, patients are often the key focus of simulations (Salimova et al., 2023), whereas in the aviation field, pilot tasks, visibility, and maneuverability are often key simulation features (Gmeiner, 2023); differences in the purpose and context of simulations often influence the degree of fidelity needed, as well as settings, actors, and scenarios. Nevertheless, there are many similarities across simulation design and development (e.g., intervention goals, learner characteristics, design elements), whether designing for medicine, aviation, law enforcement, customer service, or the like. A centralized design-based research approach that can be adapted to a variety of fields can benefit designers seeking to identify important design-based research practices for simulations/simulators. In addition, with developments in technology such as AR, VR, and generative AI, some studies suggest virtual simulations are increasingly being explored or developed (Shorey & Ng, 2021). Consequently, compared to traditional DBR models, a virtual simulation-based DBR model may be more useful to designers, because it surveys simulation-specific elements within a design-based research framework, such as when simulations are most beneficial to use as an intervention, key learner characteristics to consider, types of simulations/simulators, and simulation-design checklists.
The purpose of this article is to illustrate a simulation and simulator design process, using a conceptual design framework built from a synthesis of four DBR models, to provide instructional designers with a systematic method for virtual simulation/simulator design and development. Drawing from literature addressing DBR models, simulation principles, and practitioner experiences, this article answers the question: What does the process look like to build a virtual simulation/simulator for learning? In addition to the framework, strategies, prompts, and reflective questions are integrated throughout the article to support instructional designers in thinking through theoretical strategies, design principles, stakeholder collaboration, review cycles, and other elements pertaining to virtual simulation/simulator creation and implementation. The article is written so that instructional designers working in different organizations, whether within the field of education or beyond, can benefit from its contents.
The first section of this article provides a brief review of literature pertaining to simulation and simulators for learning as well as design-based research. The following Methodology section briefly details the methods used in generating the article. The next Conceptual Framework section highlights the simulation/simulator design process via a conceptual framework developed for this study. The fourth section outlines the systematic stages of the conceptual framework and provides guidance to support designers with their own virtual simulation/simulator progress; it also offers instructional designers, herein referred to as “designers,” guidance on working with stakeholders. The final section of the article features a conclusion that reinforces ways designers can engage with the framework.
Simulation-based learning is an instructional strategy benefiting from digital advancement (Whitworth et al., 2018). One benefit of simulations is practice opportunities (Diwakar et al., 2015), as they require individuals to engage in real-life problem-solving without expert guidance (Chernikova et al., 2020). Providing individuals with ample practice opportunities can reduce task complexity, mitigate task confusion, and serve as a valuable learning or teaching resource (Grossman et al., 2009; Chernikova et al., 2020). Interactive virtual simulations for performance practice can offer learners opportunities similar to traditional hands-on learning activities (Wästberg et al., 2019) and allow individuals to engage actively in the learning process (Diwakar et al., 2015). Virtual simulations have been used across industries. For example, in medical education, educators have leveraged simulators and virtual reality laboratories to mitigate resource demands (e.g., logistical support, coordinated training times, training site availability). In other industries, simulators have been used to recreate low-frequency events (e.g., mechanical failure), dangerous situations (e.g., law enforcement), or time-specific situations (e.g., weather-related conditions) (Carruth, 2017). Virtual simulations can be cost-saving by providing a safe and controlled environment for learners to practice applying critical skills (Carruth, 2017). While research suggests that virtual simulators can provide meaningful learning opportunities for practicing skills and applying knowledge to situations (Grossman et al., 2009), the use and function of virtual simulations vary across fields.
This paper defines the term "simulation" in the context of the educational field. That is, a simulation is a tool that replicates the real-world characteristics of an event or situation (Beaubien & Baker, 2004) and that can be manipulated by participants (Jones et al., 2015; Kaufman & Ireland, 2016; Fink et al., 2021). The term "simulator," by comparison, refers to a specific type of technological tool (i.e., hardware and/or software) with which individuals must interact either physically or virtually. While research identifies a variety of important elements for simulations, including real-world situations (Davidsson & Verhagen, 2017), genuine interactions (Chernikova et al., 2020), real or virtual objects or persons (Chernikova et al., 2020), environmental settings, whether real-world settings with augmented reality (AR) or virtual settings with virtual reality (VR) (Araujo et al., 2014), and critical thinking and problem-solving elements (Chernikova et al., 2020), less has been documented about how to design and develop a virtual simulation with a simulator for learning or development. There is limited research on designing simulations in relation to instructional strategies (Chernikova et al., 2020), and studies call for more research on design criteria when creating simulations for learning (Cernusca & Mallik, 2017; Badiee & Kaufman, 2015; Koivisto et al., 2018).
Design-based research (DBR) is a research methodology that seeks both to understand the world and to change it through an interventionist approach (Hoadley & Campos, 2022), often using a number of iterative phases to craft and refine an intervention (Mafumiko, 2006). DBR is similar to action research, as it usually involves problem identification, assessment, and analysis within a learning context, followed by the implementation and evaluation of a change or intervention to determine whether the problem was addressed (Lewis, 2015; Plomp, 2013). From an empirical perspective, a DBR study uses its iterative phases as treatments that are re-worked and ultimately linked to some hypothesized outcome (Hoadley & Campos, 2022). A DBR study might resemble a laboratory experiment, in which researchers document a baseline, collect data during the many iterative phases, and generate new, refined versions of an intervention for a particular context (Hoadley & Campos, 2022). However, unlike positivistic experiments that seek statistical generalization or causal inference as an outcome, DBR researchers aim to achieve anticipated design outcomes and generate theories, aims which are related to interpretivist traditions (Legg & Hookway, 2020; Hoadley & Campos, 2022).
DBR is an iterative approach that requires many cycles of design and in-situ assessment and evaluation (Ford et al., 2017). Several canonical DBR models have been generated over the years by researchers such as Reeves (2000), McKenney (2001), Wademan (2007), and Mafumiko (2006); the models created by these researchers illustrate design and development processes that mostly follow the same overarching and cyclical phases: 1) analysis and exploration, 2) design and construction, and 3) evaluation and reflection (Sahasrabudhe, Murthy & Iyer, 2012). While these models follow the same broad phases, each emphasizes different parts of the DBR process (Sahasrabudhe, Murthy & Iyer, 2012). For example, Reeves's (2000) model focuses on the refinement of an intervention during every feedback stage but provides little clarity on the research cycles. Wademan's (2007) model illustrates stakeholders' engagement at different stages but does not include information about participant review sizes. McKenney's (2001) model specifies the number of participants during review cycles but does not mention the number of stakeholders or their engagement in a study. Mafumiko's (2006) model illustrates various stakeholders and their engagement but does not illustrate design guidelines for creating a prototype (Sahasrabudhe, Murthy & Iyer, 2012). Each of these canonical DBR models has strengths, and this article adapts segments of each into a new conceptual framework that offers a holistic design approach to aid instructional designers in building virtual simulations/simulators.
Several scholars have used DBR principles and practices to design simulations. In some instances, scholars have argued for the use of DBR to aid in the design and development of authentic learning environments, indicating that simulation-based education (SBE) frameworks are too limiting (Momand et al., 2022; Ivens & Oberle, 2020). In other examples, scholars have reported on the benefits of the iterative and cyclical design features of DBR for improving the learning experience and better supporting practical virtual simulation development (Ivens & Oberle, 2020), theoretical development (Baloyi et al., 2017), methodological alignment, or adjustments to dynamic contexts (Hossain et al., 2018). Although studies draw on the benefits of using DBR for simulation design and development, virtual simulation design principles vary across fields and contexts. For instance, virtual simulation design and development in the medical field often focuses on patient experiences and includes recommendations for differing degrees of simulation fidelity (Salimova et al., 2023), whereas virtual aircraft simulations often rely on high-fidelity simulations to support tasks such as maneuvering or coping with differing visibility (Gmeiner, 2023). Given the range of contexts, tasks, and fields, designers might benefit from a centralized guide to aid in the determination and selection of key design-based research practices for simulations/simulators.
This article uses a "method theory" approach, a conceptual method introduced by Lukka and Vinnari (2014). Method theory integrates various concepts, streams of literature, and theories (Jaakkola, 2020), leading some to confuse the approach with a systematic literature review (Jaakkola, 2020). What sets this approach apart is its classification of theories or concepts into two areas: 1) a framework for the study, and 2) study data. Four canonical DBR models constitute this study's framework, and, unlike in empirical research, data for this study were drawn from numerous theories and concepts through a process involving the "assimilation and combination of evidence" from external literature (Hirschheim, 2008).
The theoretical framework for this article is grounded in a set of principles outlined in canonical DBR models created by Reeves (2000), McKenney (2001), Wademan (2007), and Mafumiko (2007). Employing a method theory approach, DBR principles were deliberately selected from these theorists, as this study sought to propose a strategy that utilizes elements from each of the four DBR models to construct a new framework tailored specifically for simulation and simulator design and development. In other words, the theoretical framework for the study comprises four DBR models (Reeves, 2000; McKenney, 2001; Wademan, 2007; Mafumiko, 2007), while a variety of scholarly theories, concepts, and practices are cited in the study and utilized as its data to inform both the creation of a conceptual framework and the practices within each phase of the framework.
A systematic literature review was conducted to obtain the data for the study. Several theories and concepts across a range of fields (e.g., business, medicine, etc.) were selected as having the potential to provide practical application for the creation of a framework for the instructional design field. To conduct the review, the researcher adapted Tawfik et al.’s (2019) systematic literature review steps. The steps taken include:
Conduct a preliminary search. The preliminary search was used to validate ideas within Google Scholar to determine what literature was available regarding DBR, simulations, simulators, and instructional design principles.
Generate inclusion and exclusion criteria. The inclusion criteria included: 1) any study addressing simulation creation in the context of learning, 2) any study addressing simulator creation in the context of learning, 3) articles addressing the four aforementioned design-based research theorists, 4) articles addressing simulation/simulator design or development and instructional design, 5) English-only articles. The exclusion criteria included: 1) Simulation / simulator studies not within the context of learning, 2) abstract-only articles, 3) articles without full text available, 4) case reports, series, or systematic review studies, 5) non-English articles.
Use a search strategy. The search strategy centered on Google Scholar, which indexes content from PubMed, Springer, and several education journals (e.g., Science & Education, Educational Technology & Society). To search in Google Scholar, key descriptors were used for each of the core concepts. For instance, when searching for studies addressing simulators, key search terms such as “simulator AND learning [2019 or more recent]” and “design OR simulator OR learning” were used. Iterations were made throughout the search process for each key concept.
Search databases. The researcher searched for literature through Google Scholar, which, through snowballing, led to other journals, as discussed earlier. All articles meeting the study criteria were screened for essential information and then downloaded by the researcher.
Screen titles and abstracts. The researcher conducted a brief screen of the titles and abstracts of the articles identified in the initial search process and cross-referenced the search criteria listed in the second step, to decide whether to include or exclude the articles for the study.
Download full-text and screen. The researcher downloaded the full articles and screened them against the criteria listed in step two to decide whether to include or exclude the information for the study.
Data extraction and quality assessment. The researcher reviewed each study and extracted elements that related either to 1) the DBR frameworks of Reeves (2000), McKenney (2001), Wademan (2007), and Mafumiko (2007), or 2) design principles for simulation, simulator, or instructional design practices.
In total, 173 articles were identified, and 85 articles were included in this paper. The process deviated from Tawfik et al.'s (2019) 13-step process; in particular, only seven steps were used to collect data for the study. In addition, a qualitative analysis, rather than a statistical one, was used to examine the identified articles. This intentional deviation aimed to assimilate and combine evidence from external literature for data collection purposes (Hirschheim, 2008). To perform the qualitative, thematic analysis, Braun and Clarke's (2019) six-step reflexive thematic analysis was used, with a deductive coding framework derived from DBR principles (Proudfoot, 2023) and applied top-down to the dataset. The researcher used the deductive codes to tag theories, concepts, or practices from the extracted data. These tags were then grouped into categories, and the categories were further organized into themes.
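For readers who want a concrete picture of this deductive tagging and roll-up step, the short sketch below shows one possible way to organize it; the codebook entries and excerpts are hypothetical placeholders, not the actual codes or data from this study.

```python
from collections import defaultdict

# Minimal sketch of deductive coding: tag extracted excerpts with DBR-derived codes,
# then group codes into categories and categories into themes.
# Excerpts, codes, and groupings below are hypothetical placeholders.

codebook = {  # deductive code -> (category, theme)
    "front_end_analysis": ("Scoping", "Phase 1. Analysis & Exploration"),
    "prototype_review": ("Iterating", "Phase 2. Design & Construction"),
    "reflect_on_feedback": ("Analyzing", "Phase 3. Evaluation & Reflection"),
}

tagged_excerpts = [
    ("Needs assessment preceded the simulation build.", "front_end_analysis"),
    ("Experts reviewed the second prototype.", "prototype_review"),
    ("Designers logged lessons learned post-launch.", "reflect_on_feedback"),
]

themes = defaultdict(lambda: defaultdict(list))
for excerpt, code in tagged_excerpts:
    category, theme = codebook[code]
    themes[theme][category].append(excerpt)

for theme, categories in themes.items():
    print(theme)
    for category, excerpts in categories.items():
        print(f"  {category}: {len(excerpts)} excerpt(s)")
```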
After conducting the literature review and extracting and analyzing the articles, the researcher synthesized key themes. In total, the conceptual framework is made up of three core phases. Serving as a theoretical framework, the phases and themes were abstracted from the codes produced by the DBR principles outlined by Reeves (2000), McKenney (2001), Wademan (2007), and Mafumiko (2007). Overall, the deductive framework resulted in three phases with seven themes and 20 sub-themes, as presented in Table 1.
Table 1
Thematic Findings
DBR Principles | Themes | Sub-Themes
Phase 1. Analysis & Exploration | Scoping | Front-End Analysis
| | Plausible Solution Identification
| | Literature Review
Phase 2. Design & Construction | Mapping | Design Intervention Map
| | Review Map with Stakeholder
| Curating | Outline Design Principles
| | Curate Real-World Scenarios
| Developing | Review and Refine Collected Data
| | Design Prototype
| | Develop Initial Prototype
| Iterating | Conduct First Review
| | Develop Prototype Two
| | Conduct Second Review
| | Develop Prototype Three
| | Conduct Third Review
| | Develop Prototype Four
| Implementing | Finalize Prototype
| | Launch Prototype
Phase 3. Evaluation & Reflection | Analyzing | Reflect on Feedback
| | Identify Design Principles
| | Iterate Further
Merging the thematic outputs of the literature review with practitioner experiences resulted in a systematic conceptual framework artifact. This artifact describes the design process for building a simulation/simulator for designers across organizations. The framework leverages the results of the thematic analysis as its organizing structure, and researcher experiences informed the anticipated outcomes for each sub-theme in the framework, as illustrated in the following section.
Synthesizing canonical DBR models, a conceptual framework for virtual simulation and simulator design was created (refer to Figure 1). The model was adapted from four DBR models (Reeves, 2000; McKenney, 2001; Wademan, 2007; Mafumiko, 2007) and integrates practices for instructional designers across industries. The model contains three distinct phases with suggestions and prompts to aid designers in designing and developing simulations/simulators.
While the conceptual framework for simulation design has a number of similarities to traditional instructional design frameworks, it offers specific design-based suggestions to aid designers in their design and development processes. Targeted recommendations may help designers make decisions or trade-offs specific to virtual simulations/simulators as they work within the framework. For example, budgeting is a critical indicator of whether a simulation/simulator is a viable option (Chernikova et al., 2020), so this paper recommends designers engage in budgeting and resource discussions as one of their first actions; scoping the budget and resources for a virtual simulation is critical to deciding whether a virtual simulation or simulator is a feasible learning option. Other recommendations include guidance on when to use a virtual simulation/simulator, focus areas to consider when conducting a literature review for a simulation/simulator, simulation content curation strategies, and a simulation design checklist. Overall, each phase and theme embedded within the framework offers targeted guidance, support, tips, or recommendations aimed at supporting designers with their simulation/simulator design and development. The following details provide a broad review of each phase, followed by an image of the conceptual framework:
Phase 1. Within this phase, designers should assess and scope the intervention need and determine whether a simulation/simulator is a viable design solution. Designer actions in this phase include conducting a front-end analysis of real-life problems, identifying plausible design solutions, and conducting a literature review.
Phase 2. This phase contains the bulk of simulation/simulator creation. Designer actions within this phase include mapping, curating, developing, iterating, and implementing a simulation/simulator, while working closely with stakeholders and expert reviewers.
Phase 3. Within this phase, designers should reflect on lessons learned and scope future intervention needs. Designer actions should include reflecting on feedback, generating theories or design principles, and identifying if further iterations are needed.
Figure 1
Conceptual Framework for a Virtual Simulation/Simulator Design Process
Designers should engage with this framework and expect to constantly move between the phases. For instance, a designer might work on elements of Phase 1, such as identifying plausible solutions and reviewing literature, while also considering elements of Phase 3, such as evaluation techniques. Regardless of a designer’s design progress, this model invites and expects designers to move iteratively across each phase of the framework.
Designers will notice the conceptual framework does not include recommended steps. While the framework’s phases represent loosely linear procedures, design and development processes are not always neatly packaged into steps. Consequently, while designers are encouraged to move through each phase via the listed series of actions, they might expect to move back and forth between and across phases, based on their unique needs. That said, the first set of actions in Phase 1 (i.e., front-end and learner analyses) should be conducted before any other actions. After conducting these actions, designers may feel at liberty to move through the conceptual framework, based on their needs.
In Phase 1, designers should analyze and explore the learning needs to identify whether an intervention is needed. Steps in this analysis include identifying learner (herein referred to as "end-user") characteristics such as prior knowledge, skills, or abilities, and perceived needs (Brown & Green, 2015; Ambrose et al., 2010; McDonald & West, 2021; Mafumiko, 2006). In addition, designers should identify a list of experts to support the intervention design and development process. During this phase, designers should not look for ways to use a simulation/simulator to address a problem; rather, they should consider the range of learning interventions available and select a simulation/simulator only if it is deemed the best intervention based on organizational goals, learning objectives, and end-user needs, among other factors.
Starting any instructional design project with a front-end analysis allows designers to determine whether a performance gap exists and, if so, how to close it with a results-driven solution (Lee & Owens, 2004; Matei & Matei, 2014). Problem identification is critical to determining whether a learning solution is warranted and what might be influencing or contributing to the gap (Raible, 2020; Kaufman & Guerra-Lopez, 2013). A front-end analysis includes performance, cause, and needs analyses, among others, which provide a structure to identify learner characteristics, understand the problem, and uncover root-cause performance gaps (Dick et al., 2005).
Identifying the right individuals or groups to collect information from is an important first step in preparing to conduct a front-end analysis. However, in some educational or organizational settings, designers might not always have access to their end-user populations. For instance, designers might be required to design materials for employees or students (i.e., end-users) who have yet to start within an organization. While designers might not always have access to their end-users, designers might have access to former students, current employees or practitioners, internal or external stakeholders, or an organization’s vision, mission, goals, or critical organizational issues (Stefaniak, 2018; Van Tiem et al., 2012). If designers are unable to engage with learners prior to the design and development of an intervention, designers should identify former learners, current employees, managers, stakeholders, practitioners, and any other relevant individuals or groups that may be key information holders to involve in the front-end analysis.
To conduct a front-end analysis, a variety of instruments can be used or created to collect qualitative and/or quantitative data (e.g., surveys, focus groups, semi-structured interviews). Examples of quantitative data that designers might collect and review, if available, include performance ratings, assessments, exam scores, or new-hire/tenured-employee performance metrics; numerical information can help designers identify historical trends or performance differences across student or employee groups. Qualitative inquiry is helpful when trying to learn more about performance contexts or work experiences, such as the working environment, task or performance expectations, problem- or work-related challenges, or performance needs. Designers should generate a question list and meet with different groups (e.g., former students, current employees, managers, stakeholders, practitioners) to uncover evidence related to the problem, its perceived cause(s), and performance gaps (Stefaniak et al., 2020; Chyung, 2008; Harless, 1973). An evaluation of any existing content or materials, such as current or past curricula, job aids, or performance evaluations, should also be conducted to identify potential gaps (Vanderhoven et al., 2016).
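Where such quantitative records are available, a rough sketch like the one below can help surface candidate performance gaps across groups before the qualitative work begins; the group names, scores, and expected standard are assumptions for illustration only, not data from this study.

```python
from statistics import mean

# Minimal sketch: compare average performance scores across groups against an
# expected standard to flag candidate performance gaps. All names and numbers
# below are hypothetical placeholders.

scores = {
    "new_hires": [62, 70, 58, 66],
    "tenured_employees": [88, 91, 84, 90],
}
expected_standard = 80  # assumed target performance level

for group, values in scores.items():
    avg = mean(values)
    gap = expected_standard - avg
    flag = "possible gap" if gap > 0 else "meets standard"
    print(f"{group}: mean={avg:.1f}, gap={gap:+.1f} -> {flag}")
```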
After selecting instruments, items or questions should be created that focus on the learning or workplace conditions in which end-users will be expected to operate and the tasks they will be required to perform; answering these questions can help designers more accurately diagnose the problem and experiment with initial design solutions. As previously mentioned, key groups (e.g., learners, former students, current employees, managers, or internal and/or external stakeholders) should be involved in the front-end analysis and asked a variety of questions to uncover information about the problem and potential or perceived performance gaps. The outcome of the front-end analysis should provide designers with a better understanding of the problem, its perceived causes, the goals of the intervention, performance needs, stakeholder performance expectations, and available resources. Refer to Table 2 for example questions to aid the front-end analysis with end-users or stakeholders.
Table 2
Front-End Analysis Questions
The HPT Model (Own elaboration based on Van Tiem, Moseley, & Dessinger, 2004) | Front-End Analysis Questions (Own elaboration based on Harless, 1973; Dick, Carey & Carey, 2009; Fink, 2003; Herzberg, 1968)
Performance Analysis |
Cause Analysis |
Needs Analysis |
Task Analysis |
Organizational Analysis |
Resource Analysis |
Designers can use the questions in Table 2 to aid their front-end analysis efforts, although it is important to note that these front-end analysis questions are not exhaustive. Designers should determine whether new questions, or changes to the questions in the table, are needed, given their own design situation, to best aid them in identifying practical problems.
Purpose of virtual simulation. Within the context of virtual simulations, designers should determine whether the goals of an intervention are to produce work-ready or work-safe end-users (Edgar et al., 2022), practice developing professional identities (Edgar et al., 2022), mitigate resource demands (Carruth, 2017), or provide practice within a safe environment that recreates low-frequency or dangerous situations, such as an aircraft crash or a high-risk law-enforcement situation (Carruth, 2017). If the results of the front-end analysis align with any of these instances, a virtual simulation might be a valuable learning strategy (Edgar et al., 2022).
While a front-end analysis is critical to better understanding the problem and helping to inform ways to identify plausible solutions, designers should conduct some form of end-user (i.e., learner) analysis, even if it is based on previous trends or aggregated understanding of previous learners. It is important for designers to obtain as much information about their end-users as possible, rather than relying on assumptions (Fulgencio & Asino, 2021). The focus of this type of analysis is to identify potential end-user characteristics, prerequisite knowledge, skills, or abilities, and attitudinal information (Baaki et al., 2017; Dudek & Heiser, 2017).
If designers are able to gain access to end-users, similar qualitative and quantitative data collection strategies and questioning techniques should be used, as in the front-end analysis. The aim of analyzing end-users is to gain details about the learners such as characteristics, prior knowledge, and demographic information. If practitioners, such as trainers or educators, were not involved in the initial front-end analysis, designers should involve these types of individuals in the analysis, unless an organization does not have practitioners or they are unavailable to participate. Importantly, depending on the organization or institution, it may or may not be appropriate to ask for certain demographic information (i.e., ethnicity, age, gender, etc.). Before collecting demographic information, designers must ensure demographic-based questions are approved by human resources or a related governing group. A key question designers should ask when considering whether demographic information is needed is: What demographic information is critical to the intervention design, if any? Designers can refer to Table 3 to review questions for identifying information about end-users and practitioners.
If designers are unable to gain immediate access to their end-users and/or practitioners, they should review broader employee or student population data, if possible, to help inform potential end-user characteristics (Baaki et al., 2017). For example, after reviewing employee information, a designer might decide to split end-users into two groups, such as "typical user" and "extreme user"; the designer can decide whether to split the two user groups based on one characteristic (e.g., technological literacy) or two or more characteristics (e.g., competence and performance levels).
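One way to picture this split is as a simple rule over one or more characteristics, as in the sketch below; the user records, attribute names, and cutoffs are illustrative assumptions rather than recommended values.

```python
# Minimal sketch: split end-users into "typical" and "extreme" groups based on
# one characteristic (technological literacy). Records and cutoffs are hypothetical.

end_users = [
    {"id": "u1", "tech_literacy": 4, "performance": 72},
    {"id": "u2", "tech_literacy": 1, "performance": 55},
    {"id": "u3", "tech_literacy": 5, "performance": 90},
]

def segment(user: dict, low: int = 2, high: int = 4) -> str:
    """Label a user 'extreme' if their literacy rating falls outside the typical band."""
    return "typical" if low <= user["tech_literacy"] <= high else "extreme"

groups = {"typical": [], "extreme": []}
for user in end_users:
    groups[segment(user)].append(user["id"])

print(groups)
```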
During the end-user analysis, designers should aim to identify as many materials or records as possible related to end-user information. This might include programmatic expectations or job descriptions that list expected learner or employee prerequisite competencies, job duties, etc.
Table 3
End-User Analysis with Questions
Components of End-User Analysis | Questions for End-Users (Learners) | Questions for End-Users (Practitioners)
End-User Characteristics (Adams Becker et al., 2014; Dick et al., 2009; Jonassen et al., 1999; Fink, 2013) | |
Prior Knowledge (Ambrose et al., 2010; Cordova et al., 2014; Dochy et al., 2002; Umanath & Marsh, 2014) | |
Demographics (Young, 2014) | |
Designers should use the questions in Table 3 to aid in collecting more details about end-users and practitioners. If end-users are unavailable, but practitioners are available, designers should ask practitioners questions in the end-user (learner) column to gain a sense of prior learner characteristics. It is important to note that the questions in Table 3 are not exhaustive, and designers should determine whether new questions or changes to the questions in the table need to be made, given their own design needs.
Overall, designers should synthesize the information from the front-end and end-user analyses to determine whether a learning solution is warranted. If it is, designers should pay special attention to identifying the organizational goals, the scope of the need, intended end-user needs, challenges end-users might face, and the desired outcome(s) of the intervention (Raible, 2020).
End-users and virtual simulation. Within the context of simulations, prior knowledge can be a key indicator of readiness for simulation education. In particular, end-users who are already familiar with theoretical concepts might benefit more from simulation education than learners with less awareness of theoretical concepts, as a simulation could induce cognitive overload when problem solving without knowledge of higher order constructs (Kirschner et al., 2006). Other scholars argue early simulation learning might benefit end-users when attempting to gain or restructure higher order concepts (Boshuizen & Schmidt, 2008). Regardless, end-users with less theoretical knowledge prior to simulation engagement might require more instructional guidance than advanced learners with theoretical knowledge (Schmidt et al., 2007). In addition, if a designer is considering a simulation technology such as virtual reality (VR), the designer should be aware of potential end-user sensitivities to head-mounted displays (HMDs), histories of motion sickness, or reluctance toward computers or newer technologies (Baniasadi et al., 2020). End-user characteristics and needs should be given careful and thoughtful attention when considering simulations and simulators; designers should use the gathered data to make decisions that reflect end-user characteristics, knowledge, and, where possible, demographics.
Based on the results of the front-end analysis, designers should identify one or more initial intervention solutions to meet the needs of the organization and end-users. Designers should illustrate how the identified interventions plausibly address the identified problem(s) and organizational goal(s) and how they might aid in facilitating performance change (Van Tiem et al., 2012). Per Wademan's (2007) model, this process should be iterative and conducted in collaboration with experts and practitioners (Plomp & Nieveen, 2007). When determining plausible solutions, especially virtual ones, designers must consider a number of factors, such as technology accessibility (i.e., hardware and/or software), end-user and practitioner access to required technology, and other resource availability.
A virtual simulation and simulator might be appropriate in situations where events occur at a low frequency, where resources need to be mitigated, where use-cases are time-specific (Carruth, 2017), or where there is a need to prepare for on-the-job performance because incorrect performance on the job could lead to serious consequences (Baker & Jenney, 2023; Chernikova et al., 2020). Virtual simulations and simulators are also viable options if learners need to engage in real-world situations with genuine interactions and real or virtual objects or persons (Baker & Jenney, 2023; Chernikova et al., 2020).
At this point, an assumption is made that a designer has identified a virtual simulation as a viable solution to address a performance problem. Possible virtual simulation solutions might be differentiated by scope (i.e., narrow vs. comprehensive simulation), technology (i.e., completely virtual or hybrid simulation experience), or another factor altogether. After selecting possible virtual simulation options, designers should generate a list of the pros and cons of each intervention by engaging in a type of risk assessment to determine which intervention might be best. Useful questions include: What plausible risks are associated with each virtual simulation intervention? What are the strengths of each intervention? Are there any alternative options that might be a better intervention, given the risks identified?
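One way to structure this pros-and-cons comparison is a simple weighted scoring matrix, sketched below; the candidate options, criteria, weights, and scores are hypothetical and would be replaced by whatever surfaces in a designer's own risk assessment.

```python
# Minimal sketch: weighted scoring of candidate interventions against assessment criteria.
# Options, criteria, weights, and scores are hypothetical placeholders.

criteria_weights = {"addresses_gap": 0.4, "cost_feasibility": 0.3, "implementation_risk": 0.3}

options = {
    "narrow VR simulation": {"addresses_gap": 4, "cost_feasibility": 2, "implementation_risk": 3},
    "hybrid role-play + simulator": {"addresses_gap": 4, "cost_feasibility": 4, "implementation_risk": 4},
    "traditional e-learning module": {"addresses_gap": 2, "cost_feasibility": 5, "implementation_risk": 5},
}

def weighted_score(scores: dict) -> float:
    """Sum each criterion score multiplied by its weight."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

ranked = sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):.2f}")
```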
Designers should work closely with stakeholders to collaboratively decide on whether a simulation/simulator should be created in lieu of a different intervention option. Table 4 offers questions to aid designers in thinking through design solutions.
Table 4
Initial Design Solution Questions
Design Solution Questions (Own elaboration based on Vafa, 2013; Stefaniak, 2020; adapted for a broader organizational setting) |
Initial Design Questions |
Identifying Intervention Goals |
Like the previous tables, the questions listed in Table 4 are not exhaustive. Designers can use the questions in the table, change them, or generate new questions to meet their needs. Regardless of the questions used, the end result of this process should lead designers to select a viable intervention and prepare them to move on to Phase 2. After an agreement is reached on the initial design solution, designers should engage a project manager to support further alignment, communications, and project management, or engage in these facets themselves (Wiley, 2018).
Budgeting for technology with virtual simulation. A related part of initial design solution selection is examining resources and potential project constraints. During this process, designers should identify tools, resources, timelines, and the available project budget (Wiley, 2018). Budget or tool constraints are often critical indicators of whether a simulation/simulator is a viable option (Chernikova et al., 2020). For instance, the use of virtual reality (VR) might be beneficial in certain settings, but obtaining high-quality hardware (e.g., efficient graphics cards, accurate tracking systems, high-resolution displays) can result in high design and implementation costs, which could make the intervention too expensive for some organizations (Baniasadi et al., 2020). Consequently, designers should scope their access to simulator technology and develop a plan for resource needs to ensure all aspects of their design solution are available before a project is launched. If the tools, resources, and other necessary aspects for a project are available, designers should work closely with stakeholders to ensure a common understanding of the plausible solution (i.e., virtual simulation) and its goals, and to confirm agreement with the initial design solution (Wiley, 2018). If required aspects of a project are not available, designers should negotiate project needs with stakeholders or adjust the initially selected intervention.
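A rough feasibility check along these lines can be sketched as below, comparing estimated hardware, software, and development costs against the available budget; every line item and figure here is an assumption for illustration, not a cost recommendation.

```python
# Minimal sketch: compare estimated VR simulation costs against an available budget.
# All line items and figures are hypothetical placeholders.

available_budget = 50_000

estimated_costs = {
    "headsets_and_tracking": 12_000,
    "workstation_gpus": 8_000,
    "simulation_software_licenses": 10_000,
    "scenario_development": 15_000,
    "maintenance_and_support": 6_000,
}

total = sum(estimated_costs.values())
shortfall = total - available_budget

print(f"Estimated total: {total}")
if shortfall > 0:
    print(f"Over budget by {shortfall}: negotiate scope or consider an alternative intervention.")
else:
    print(f"Within budget with {-shortfall} remaining.")
```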
Once a plausible intervention is identified (i.e., a virtual simulation), a focused literature review should be conducted to select appropriate learning theories to use within the intervention (Mafumiko, 2007; Sahasrabudhe, Murthy & Iyer, 2012). For example, consider a designer who is interested in productive failure as a learning theory for virtual simulations. This designer should review external literature to better understand the core components of the theory, such as its knowledge and exploration phases (Kapur, 2008; 2016), as well as potential limitations of the learning theory (e.g., the degree of end-user comfort with failure or repeated failure) (Juul et al., 2013). In this example, the designer should also examine literature to determine whether previous DBR or other relevant studies have created a simulation using productive failure, in order to identify lessons learned, strategies, design principles, or other relevant practices.
In addition to learning theories, principles related to virtual simulations/simulators should be identified. Designers should look for literature addressing simulation/simulator interventions and explore design principles (e.g., type, technology, interaction, duration) to aid in developing the intervention (Makransky & Petersen, 2021; Chernikova et al., 2020). Designers should examine literature addressing the impacts of simulations on cognitive, emotional, and behavioral learning processes, where possible (Juul et al., 2013). Designers might also consider the environmental setting of simulations. For example, virtual simulations can be completely immersive in a VR or HMD environment, or include a mixture of role-plays while using a simulator to complete a series of actions or tasks (Makransky & Petersen, 2021; Chernikova et al., 2020); there are many options for simulation design. Table 5 provides a list of questions to help designers look for specific elements pertaining to learning theories and simulation/simulator design principles when searching external literature.
Table 5
Questions to Explore During Literature Review
Literature Review Questions |
Learning Theories (Own elaboration based on Mafumiko, 2007; Sahasrabudhe, Murthy & Iyer, 2012) |
Design Principles for Simulation/Simulator (Own elaboration based on Chernikova et al., 2020) |
Designers can use these questions to guide their research process, as needed. Overall, a literature review is key to helping designers consider design, learning, and simulation principles. Designers should make every effort to ensure the learning theories and simulation/simulator design principles align with the practical problems identified during the front-end and end-user analyses. The result of the literature review process should help designers select at least one learning theory to use as a framework for designing and developing a virtual simulation/simulator, as well as several virtual simulation/simulator design principles.
The information in Phase 2 is written on the assumption that a designer has selected a virtual simulation (i.e., scenario) and simulator (i.e., the technological tool used for interaction in the simulation) as a design solution. There are several important aspects related to the design and construction of a simulation/simulator that designers should contemplate in this phase, including initial design elements, prototype development, expert review cycles, and the number of iterations required before implementation. This is the largest and most labor-intensive phase in the framework. It has multiple outputs, including an outline of the intervention, expert reviews used to develop the simulation, curated use-cases, and a simulation/simulator artifact.
Intervention or curriculum mapping is a strategy used to design and link outcomes with relevant learning materials (O’Rourke et al., 2019; Ambrose et al., 2010; Harden, 2001). Creating a high-level intervention outline that maps the overall intervention goals, specific learning objectives (LOs), and intervention materials can make proposed design solutions transparent for stakeholders, management, and experts. A well-crafted intervention map should integrate learner needs and connect the LOs, assessments, activities, and instructional materials back to the organizational goals (Ambrose et al., 2010; Harden, 2001). The learning objectives should directly address the performance gaps identified in the prior analyses (O’Rourke et al., 2019). The map should include details on the virtual simulation and the relevant instructional materials, activities, and assessments related to the simulation/simulator. Refer to Table 6 for more details on designing intervention or curriculum maps.
Table 6
Intervention Map Template
Intervention Map with Alignment (Perez, 2020; O’Rourke et al., 2019; Ambrose et al., 2010)
Organization Goals | Learning Objectives (LOs) | Assessment | Activities | Instructional Materials
Goal 1 | Objective 1 | [Describe assessment and include relevant resources and technology needed] | [Describe learning activities and include relevant resources and technology needed] | [Describe instructional materials and include relevant resources and technology needed]
Goal 2 | Objective 2 | [Describe assessment and include relevant resources and technology needed] | [Describe learning activities and include relevant resources and technology needed] | [Describe instructional materials and include relevant resources and technology needed]
| Objective 3 | [Describe assessment and include relevant resources and technology needed] | [Describe learning activities and include relevant resources and technology needed] | [Describe instructional materials and include relevant resources and technology needed]
Goal 3 | Objective 4 | [Describe assessment and include relevant resources and technology needed] | [Describe learning activities and include relevant resources and technology needed] | [Describe instructional materials and include relevant resources and technology needed]
Designers can use this map or create their own version. Regardless of the format, it is essential that the organizational goal(s) and learning objectives are present in any map. Designers should reference the results of the front-end analysis to identify organizational goals. Each relevant column should be filled with enough detail to make the map easy to understand and follow, especially for laypersons, who might be stakeholders, managers, or staff. Stakeholders and other map reviewers will benefit from clear and concise details, with terms that are spelled out or easy to understand. The result of this activity should be a visual outline of the virtual simulation/simulator intervention with horizontal alignment across each row of a designer's map.
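For designers who prefer to keep the map in a structured, checkable form, a minimal sketch is shown below; the goal and objective labels are placeholders, and the only rule it enforces is the horizontal alignment described above (every objective row carries a goal, assessment, activity, and materials entry).

```python
from dataclasses import dataclass

# Minimal sketch: an intervention map row as a structured record, with a simple
# check that every row is horizontally aligned (goal -> objective -> assessment ->
# activities -> materials). Labels below are hypothetical placeholders.

@dataclass
class MapRow:
    organizational_goal: str
    learning_objective: str
    assessment: str
    activities: str
    instructional_materials: str

    def is_aligned(self) -> bool:
        """A row is aligned when no cell has been left empty."""
        return all(vars(self).values())

intervention_map = [
    MapRow("Goal 1", "Objective 1", "Simulator task checklist", "Guided scenario run", "Scenario script, simulator guide"),
    MapRow("Goal 2", "Objective 2", "Scenario debrief rubric", "Independent scenario run", ""),  # missing materials
]

for i, row in enumerate(intervention_map, start=1):
    status = "aligned" if row.is_aligned() else "incomplete"
    print(f"Row {i} ({row.learning_objective}): {status}")
```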
Once the map has been completed, designers should conduct a stakeholder review (i.e., with management, experts, and other stakeholders). Stakeholder review sessions are an important element of any type of intervention. The point of the review may vary, depending on the need or function (Mafumiko, 2006). For instance, the first review might be used to gain approval to move forward with the project, or it might be used to gather feedback before another review session. Irrespective of purpose, providing intervention transparency early in the design process can help stakeholders gain a clearer picture of the initial design solution (Harden, 2001) and ensure stakeholders are aligned with the solution (Tran et al., 2021). Gaining alignment earlier in the process can save designers time in the later stages of Phase 2.
Whether it is the first or last review, relevant feedback should be integrated by designers or at least considered before moving forward with prototype creation (Tran et al., 2021). If stakeholders do not approve the intervention map, designers must work closely with stakeholders to determine what aspects of the intervention stakeholders have concerns about. Designers must negotiate with stakeholders to reach a solution; this could mean making minor edits (i.e., changing an assessment) or major revisions (i.e., re-starting the front-end analysis) to the proposed intervention.
To ensure the identified design principles and theories are integrated into the virtual simulation, an outline of strategies for their use and function should be created prior to building the intervention. Once designers have received approval to move forward with a simulation/simulator, they should revisit the external literature gathered in Phase 1. In this activity, designers should select, outline, and explain which theoretical and/or learning principles they will use within the simulation/simulator. Designers can use Table 7 to aid in mapping and aligning their final decisions on learning theories, principles, and relevant materials for the virtual simulation/simulator.
Table 7
Mapping Design Principles
Theory (Vanderhoven et al., 2016) | Principles | Application to Materials
Theoretical or learning concept 1 | Attributed design principle 1 | [Explanation of strategy or use of principle in the intervention]
| Attributed design principle 2 | [Explanation of strategy or use of principle in the intervention]
| Attributed design principle 3 | [Explanation of strategy or use of principle in the intervention]
Theoretical or learning concept 2 | Attributed design principle 1 | [Explanation of strategy or use of principle in the intervention]
| Attributed design principle 2 | [Explanation of strategy or use of principle in the intervention]
Simulation/simulator design concept 1 | Attributed design principle 1 | [Explanation of strategy or use of principle in the intervention]
| Attributed design principle 2 | [Explanation of strategy or use of principle in the intervention]
Simulation/simulator design concept 2 | Attributed design principle 1 | [Explanation of strategy or use of principle in the intervention]
| Attributed design principle 2 | [Explanation of strategy or use of principle in the intervention]
This map can serve as a reference tool that designers return to as they build their own virtual simulation. Designers might reduce or add to the map, based on their own design needs. Stakeholders may be interested in an outline or map such as the one illustrated in Table 7; given stakeholder characteristics (e.g., time, interest), designers should determine whether to include the map within a stakeholder review session.
Real-world scenarios, also known as use-cases, are an important component of simulation design (Chernikova et al., 2020). Former end-users or experts (e.g., former students, current employees, practitioners, managers) can be invaluable resources for curating real-world examples and should be leveraged to obtain simulation scenarios. Curating real-world scenarios is a key building block of simulation development. A well-designed scenario might include a script, a simulation technical document, and equipment needs, with support from individuals such as directors or writers, production staff, simulation technicians, or visual designers (e.g., visual effects artists) (Harrington & Simon, 2022). Table 8 offers strategies to help designers curate real-world scenarios.
Table 8
Curation of Real-World Scenarios
Real-World Scenario Curation (Own elaboration based on Pandey, 2019; Pappas, 2016) |
Stages | Details |
Align | Designers should start the real-world content curation process by referencing the results of the front-end analysis, end-user analysis, and intervention map. Focusing on the goals and objectives of the intervention are critical to aligned content curation. |
Identify | Designers can curate stories, scenarios, or use-case content from multiple sources, but it is recommended that reliable sources (i.e., experts, former students, tenured employees, or managers) be used as the primary curation point, where possible. Designers should focus on gathering information about an actual situation as well as the actions that end-users should take to complete, improve, or change that situation. Qualitative approaches and questioning techniques are ideal for content curation. Designers should aim to collect information from reliable sources that addresses the goals and needs of the simulation (i.e., legal requirements, life-threatening situations, critical procedures, etc.) as well as contextual stories or experiences to help shape a real-life scenario for learners (i.e., experiences, tips, best practices, stories about the most challenging and important situations, etc.). Designers might collect not only text-based information but also images, videos, or other relevant multimodal information from reliable sources to help them get a sense of visual, experiential, and tactile information. Designers should collect as much information as possible, until they perceive that enough unique or repeated information has been collected. |
Distill | After collecting the data, designers should work closely with experts to distill the data down to the most relevant and valuable content for end-users. This process might be time-consuming, but it is important to ensure end-users are provided with essential details. During this stage, designers should work with experts to not only identify macro-level aspects of a scenario but micro-level aspects of a scenario, such as specific skills and tasks embedded in the scenario. |
Contextualize | Designers should also work with experts to ensure the necessary context is embedded into the content. This might require some content to be transformed to ensure the right information is provided to end-users. Contextualization is critical, as it can aid in making sure the content is rooted in real-world scenarios. |
Integrate | Once the content is identified, designers should generate a plan to integrate the information into the simulation. Integration can take on multiple forms (i.e., simulation journey, problem prompt, etc.). |
The process of writing a simulation scenario should be systematic and align with the intended simulation objectives (Harrington & Simon, 2022). Designers might work toward scenario integration by writing the story or scenario on paper or via digital storyboarding, or by working with a director or writer to build out the scenario. Stakeholders, or the individual responsible for providing the scenario, might be asked to review the scenario or story and check it for accuracy and relevancy. Designers should be responsible for assessing the scenario in relation to the learning objectives. Once the scenario is completed, designers might work with production staff, simulation technicians, or visual designers, if available, to build the scenario into the simulator. In addition to developing the actual scenario, designers might also consider creating a brief introduction to the simulation to orient end-users to the simulation environment, including the expectations for performance, environment details, equipment, support, and whether the simulation is for educational or assessment purposes (Harrington & Simon, 2022).
If a simulation is being built for globally-based end-users, designers should identify and work with practitioners or experts within each region or country to aid in sourcing scenario content and to align scenarios with end-users’ culture, language, policies, and/or practices. For example, suppose a designer is building a driving simulation/simulator for a large construction vehicle organization. Within the simulation, the on-screen equipment is the same across four countries, but drivers are required to follow unique traffic laws within each country. If the focus of the simulation is to practice driving, the simulation might need to be designed differently for drivers in one country compared to the next, based on traffic law differences. Designers should engage with practitioners or experts within each country to obtain unique details pertaining to that country’s traffic laws. While traffic laws may be unique, designers might consider selecting and integrating globally relevant real-world scenarios into a simulation, where possible.
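To make the globalization example above concrete, the following minimal Python sketch shows one way shared scenario content might be merged with country-specific rules; the country codes, rule fields, and function name are illustrative assumptions only.

```python
# Hypothetical per-country scenario variants for the driving simulation example.
base_scenario = {"equipment": "articulated dump truck", "task": "haul route navigation"}

country_rules = {
    "US": {"drive_side": "right", "speed_unit": "mph"},
    "UK": {"drive_side": "left", "speed_unit": "mph"},
    "DE": {"drive_side": "right", "speed_unit": "km/h"},
    "JP": {"drive_side": "left", "speed_unit": "km/h"},
}

def localized_scenario(country: str) -> dict:
    """Merge globally shared scenario content with country-specific traffic rules."""
    return {**base_scenario, **country_rules[country]}
```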
Curating real-world scenarios is important to a well-built simulation. Designers should pay special attention to collecting the “right” scenarios. Once macro-level aspects (e.g., story, experience, or use-case) are finalized by experts, designers should turn to micro-level aspects, such as the specific tasks or actions learners should engage with or be required to consider when participating in the simulation. Designers should take time to document the range of tasks and actions identified from the curated content; they should also involve experts to ensure identified tasks and actions are correct and rooted in practical, real situations that learners might face outside the learning environment.
After generating a curriculum map, corroborating design principles, and curating use-cases, designers should begin to design and develop a simulation/simulator prototype. Several important elements for prototype creation include the following:
Review and reference data for design. Based on the data collected during the first and second phases, designers should reference their intervention and design principle maps, ensuring the goals of the intervention, the objectives of the simulation, desired learning theories and design principles, real-world scenario content, and required tasks or actions are integrated into the simulation. Designers should also reference the desired simulation principles (i.e., duration, authenticity, tools, etc.) that were previously selected. All decisions when building the simulation should be informed by (1) front-end analysis data, (2) defined problems, and (3) curated scenarios, where applicable.
Design/develop initial simulation prototype. There are a number of different ways designers can go about designing and developing a simulation. This article does not present an exhaustive list of ways to build a prototype, but it offers guidance to designers on core elements of the design and development process. In terms of design, there are several key parts including the scenario(s), constraints, and learner skills or actions.
When designing a simulation, designers should have access to at least one curated real-world scenario. Not all simulations need to be high-fidelity (i.e., closely mirroring real life), but simulations are ideal when they showcase real-life situations (Chernikova et al., 2020), whether by using a story to frame a problem or by presenting a scenario in which learners need to identify a problem, assess a situation, interact with actors, react appropriately to a challenging environment, provide care, complete a required task, or the like (Baker & Jenney, 2023). Well-designed scenarios might be based on auditory, text-based, visual (i.e., moulage), or a mixture of multimodal elements. Designers should ensure virtual simulation scenarios guide or direct learners to complete the desired objectives (Harrington & Simon, 2022). To build the real-world scenario, designers might need to create scripts, create digital actors or obtain real actors, or generate some type of visual experience. Importantly, the simulation scenario should systematically align with the business goals and learning objectives. Designers should work backwards from the organizational goals and learning objectives, ensuring the simulation scenario and desired learner actions or tasks align with the goals and objectives of the overall intervention.
After selecting a scenario and deciding on what the scene or situation will be, designers should identify intended constraints, such as which aspects of the real-world scenario should be included and which information, details, or actions should be left out. In particular, designers should decide what elements of the work or learning environment are most pertinent for the simulation, given the intervention goals and objectives (Hill, 2023). Designers should also decide whether scheduled breaks or a continuous run would be most appropriate for the simulation.
Once the constraints of the simulation have been defined, the designer should create the specific learning tasks or activities that end-users are required to perform in order to complete the simulation; these tasks or activities are the core events that make up the simulated environment, and it is critical for designers to build a simulation with an appropriate degree of difficulty, so learners can learn and grow through the experience (Hill, 2023).
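As a simple illustration of how scenario, constraint, and task decisions might be documented before development begins, the sketch below offers one possible structure in Python; the field names, difficulty scale, and helper function are hypothetical rather than prescribed by the framework.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningTask:
    description: str
    difficulty: int          # e.g., 1 (simple) to 5 (highly challenging)
    linked_objective: str    # learning objective the task supports

@dataclass
class SimulationDesign:
    scenario: str                            # curated real-world scenario framing the simulation
    included_elements: List[str]             # aspects of the work/learning environment kept in scope
    excluded_elements: List[str]             # details deliberately left out (constraints)
    continuous_run: bool                     # True for an uninterrupted run, False if breaks are scheduled
    tasks: List[LearningTask] = field(default_factory=list)

def has_appropriate_challenge(design: SimulationDesign, minimum: int = 2) -> bool:
    """Check that at least one task meets a minimum difficulty, so learners can grow through the experience."""
    return any(task.difficulty >= minimum for task in design.tasks)
```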
Once the scenario(s), constraints, and skills have been defined, a designer should construct the simulation in detail. For a prototype, it is recommended that the initial learning experience be storyboarded and outlined in a document, preferred tool, or web-based software that can showcase a prototype without requiring development of the complete simulation (Harrington & Simon, 2022; Plomp & Nieveen, 2007). Basic prototypes are ideal because designers can share their vision with stakeholders and/or experts and receive quick feedback to aid in the development process. By contrast, complex simulations often contain a large number of elements, materials, learner interactions, and actions or tasks. Leveraging experts to help define boundaries, test scenarios, and aid with design aspects through initial storyboarding can help designers ensure they are on the right track and make quick changes before investing too much time and energy into development work that may need to be reworked during a later stage.
Design/develop simulator initial prototype. Building with a simulator is not always necessary during the initial storyboarding work, but the specific type of technological tool (i.e., hardware and/or software) that end-users are expected to interact with, either physically or virtually (Baker & Jenney, 2023; Chernikova et al., 2020), should be referenced in the storyboard or early prototype. Details regarding the technological tool use and need should also be referenced, such as whether the virtual simulation will require a screen, physical actor (e.g., mannequin), or some other form of technology (Baker & Jenney, 2023; Chernikova et al., 2020). Virtual simulators can be low or high fidelity, depending on the degree of authenticity required (Chernikova et al., 2020), and early simulator development should test the degree of fidelity required to identify whether aspects of realism within the simulator are distracting or not (Hill, 2023).
Options for learning theory or design principle integration should also be explored with a virtual simulator. For example, suppose a designer decides to use productive failure as a learning theory within a simulation. When integrating this theory into a simulator, the designer might consider whether the available technology can present guidance or feedback to end-users through a virtual agent rather than a facilitator (Chernikova et al., 2020). To support designers with initial prototype development, Table 9 details important elements to consider in the design and development of a simulation and simulator intervention.
Table 9.
Design Process for Simulation and Simulator Creation
Simulation Design (Own elaboration based on Tamim et al., 2011; Chernikova et al., 2020) | ||
Component | Content | Degree of Fidelity (Low or High) |
Learning Outcomes | [Insert LO(s)] | |
Required End-User Tasks or Actions | [Define complex skills or actions related to learning outcomes (e.g., communication skills, troubleshooting)] | 
Intervention | [Insert intervention (e.g., simulation)] | 
Type of Simulation | [Define context and interaction (e.g., real or virtual object and real or virtual person)] | |
Type of Simulated Environment | [Briefly detail the nature of the simulated situation and environment involved and describe the extent to which it represents actual practice] | |
Creation Techniques | [Insert learning theories; design principles] | |
Instructional Strategy | [Insert strategy (e.g., role play, virtual reality)] | |
Use-Case/Real-World Examples/ Scenarios | [Insert curated use-case from expert(s)] | |
Duration of Simulation | [Insert duration and consider alignment with use-case timeframe (e.g., 1 hour, 1 day, etc.)] | |
Role of Facilitator(s) | [Define role of facilitator in simulation] | |
Role of End-User | [Define role of end-user in simulation] | |
Simulator Design (Own elaboration based on Tamim et al., 2011; Chernikova et al., 2020) | ||
Component | Content | Degree of Fidelity (Low or High) |
Function of Simulator | [Define type of interaction needed to complete simulation; physical or virtual interaction with people or objectives] | |
Required End-User Tasks or Actions | [Define complex skills or actions related to learning outcomes that will be practiced with the simulator (e.g., communication skills, troubleshooting)] | 
Technology | [Insert technology for simulator creation (e.g., LMS, software, hardware)] | |
Function of Guidance | [Describe if guidance will be integrated into the simulator and define the type of guidance (e.g., practitioner or virtual agent)] | 
Timing of Simulator | [Determine when the simulator should be utilized within the simulation (e.g., beginning, end, throughout)] | 
Duration of Simulator | [Insert duration of simulator vis-à-vis simulation; also consider alignment with use-case timeframe (e.g., 1 hour, 1 day, etc.)] |
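Designers who prefer to capture the Table 9 decisions in a machine-readable form might record them in a structure similar to the following Python sketch; the keys and example values are placeholders, not recommended settings.

```python
# Fields mirror the components in Table 9; the example values are illustrative placeholders.
simulation_spec = {
    "learning_outcomes": ["De-escalate a difficult customer call"],
    "required_tasks": ["communication skills", "troubleshooting"],
    "intervention": "simulation",
    "simulation_type": "virtual person in a virtual environment",
    "simulated_environment": "inbound support call mirroring actual practice",
    "creation_techniques": ["productive failure"],      # learning theories / design principles
    "instructional_strategy": "role play",
    "scenarios": ["use-case curated from subject-matter experts"],
    "duration": "1 hour",
    "facilitator_role": "debriefs learners after each attempt",
    "end_user_role": "support agent handling the call",
    "fidelity": "low",                                   # low or high
}

simulator_spec = {
    "function": "virtual interaction with a simulated caller",
    "required_tasks": ["communication skills"],
    "technology": ["LMS", "audio playback software"],
    "guidance": "virtual agent provides hints after errors",
    "timing": "throughout the simulation",
    "duration": "45 minutes of the 1-hour simulation",
}
```

Recording the specification this way can also make it easier to compare prototype versions as the design evolves through review cycles.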
Quality criteria. Quality criteria are important in the prototype design process. There are three criteria for prototype design: validity, practicality, and effectiveness (Nieveen, 1999; Mafumiko, 2006). Validity is defined by three areas: content validity (i.e., the prototype is based on established knowledge), construct validity (i.e., the components in the prototype link together), and task validity (i.e., the prototype contains real-world practice) (Mafumiko, 2006). Practicality is defined as the degree of usability of the prototype, while effectiveness is defined as the degree of alignment with the intended learning outcomes (Mafumiko, 2006). Arguably, all three criteria should be used to support the development process. Designers and stakeholders can use Table 10 to review and determine the quality of the simulation/simulator detailed in Table 9, based on the aforementioned criteria.
Table 10
Quality Criteria
Validity (Own elaboration based on Mafumiko, 2006) | ||
Topic | Definition | Met or Not Met |
Content Validity | Prototype is based on previously established knowledge. | |
Construct Validity | The components within the prototype link together. | |
Task Validity | Prototype contains real-world practice opportunities. | |
Practicality (Own elaboration based on Mafumiko, 2006) | ||
Topic | Definition | Met or Not Met |
Degree of Usability | The extent to which the prototype is usable. | |
Effectiveness (Own elaboration based on Mafumiko, 2006) | ||
Topic | Definition | Met or Not Met |
Degree of Alignment | The degree of alignment between the intended learning outcomes and actualized outcomes of the learning experience. |
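The Table 10 review could likewise be tracked in a lightweight structure; the Python sketch below is an illustration only, and the simple met/not-met ratings and function name are assumptions.

```python
QUALITY_CRITERIA = {
    "content_validity": "Prototype is based on previously established knowledge.",
    "construct_validity": "The components within the prototype link together.",
    "task_validity": "Prototype contains real-world practice opportunities.",
    "practicality": "The extent to which the prototype is usable.",
    "effectiveness": "Alignment between intended and actualized learning outcomes.",
}

def quality_review(ratings: dict) -> list:
    """Return the criteria that reviewers marked as not met."""
    return [criterion for criterion, met in ratings.items() if not met]

# Example review outcome for one prototype cycle
ratings = {criterion: True for criterion in QUALITY_CRITERIA}
ratings["practicality"] = False
print(quality_review(ratings))   # ['practicality']
```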
Once initial prototype reviews are conducted with experts and/or stakeholders, designers should integrate feedback and pivot from storyboarding or a basic prototype to creating the simulation using the simulator. At this point, the simulation will likely increase in complexity, and additional aspects, such as learning assessment, on-the-job applications, or other related activities, might need to be developed and included within the simulation/simulator or outside the simulation experience, depending on designer decisions, stakeholder requests, or expert recommendations. Once the first version of the simulation is completed via the simulator, designers should engage in several rounds of review and revision.
Reviews and feedback sessions are one of the most important aspects of prototype development. Designers should schedule review sessions with experts, stakeholders, and end-users (Tessmer, 2013) well in advance. Designers might benefit from creating work-back plans, in which they plan their development work backwards from review due dates. There are several different review strategies, and it is recommended that designers integrate multiple groups into the review and feedback process to aid in prototype iteration and development.
Iterative cycles and feedback. Consistent with DBR research, the designed prototype should go through a series of cyclical iterations with different user groups (Reeves, 2000; Wademan, 2007; Mafumiko, 2006). Following Mafumiko’s (2006) model, several iterative reviews should take place with experts and other relevant groups when designing a new prototype. Given the fluid nature of DBR, review cycles are expected to have a degree of flexibility and evolution (Kennedy-Clark, 2013). Documentation and integration of feedback is critical in the review cycles and prototype iterations, and the number of prototypes may differ depending on project duration and timelines (Tessmer, 2013). Regardless, detailing the changes made in each phase is critical, as designers should be able to showcase the results of iterative evolution to stakeholders after the completion of all review cycles (Vanderhoven et al., 2016). Once reviews are completed, designers should finalize the prototype for larger scale implementation. Table 11 depicts a process for highlighting the evolution of prototypes, based on expert/end-user reviews.
Table 11
Evaluation and Prototype Iteration Visual Representation
Review Cycle (Own elaboration based on Vanderhoven et al., 2016; Mafumiko, 2006) | ||||
Prototype 1 (Review): | Prototype 2 (Revision 1): | Prototype 3 (Revision 2): | Prototype 4 (Revision 3): | Prototype 5 (Revision 4): |
| | | | Release to Learners
[Provide a high-level overview of elements included in the first simulation] | [Provide a high-level overview of elements and list of revisions made from first simulation] | [Provide a high-level overview of elements and list of revisions made from second simulation] | [Provide a high-level overview of elements and list of revisions made from third simulation] | [Provide a high-level overview of end-product and list of revisions made from fourth simulation] |
[Provide a high-level overview of elements included in the first simulator] | [Provide a high-level overview of elements and list of revisions made from first simulator] | [Provide a high-level overview of elements and list of revisions made from second simulator] | [Provide a high-level overview of elements and list of revisions made from third simulator] | [Provide a high-level overview of end-product and list of revisions made from fourth simulator]
Designers can use Table 11 as a review cycle guide to aid them in scheduling and tracking simulation/simulator reviews and feedback. It is important to note that the Prototype 1 review should not be confused with the initial storyboard prototype, which was created at an earlier stage of the design and development process.
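For teams that want to log the Table 11 review cycles alongside project documentation, the following Python sketch shows one hypothetical way to record reviewers, feedback, and revisions per prototype; the class and function names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewCycle:
    prototype: str                 # e.g., "Prototype 2 (Revision 1)"
    reviewers: List[str]           # experts, stakeholders, or end-users involved
    feedback: List[str] = field(default_factory=list)
    revisions_made: List[str] = field(default_factory=list)

def iteration_report(cycles: List[ReviewCycle]) -> str:
    """Summarize the evolution of the prototype for stakeholders after all review cycles."""
    lines = []
    for cycle in cycles:
        lines.append(f"{cycle.prototype}: {len(cycle.feedback)} feedback items, "
                     f"{len(cycle.revisions_made)} revisions")
    return "\n".join(lines)
```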
While there is no right or wrong number of reviews, designers should engage in at least two prototype reviews before implementing the final intervention. When integrating review cycles into the instructional design process, designers should determine the type and number of expert reviews needed and the ways data will be collected from expert review sessions. The following sections provide more insight into the review process and elements.
Review type and amount. The number and type of experts might vary. One DBR study suggested including a variety of different kinds of users/experts (Sahasrabudhe, Murthy & Iyer, 2012). A strategy for determining the type of experts is to consider the prototype evaluation criteria: content validity, construct validity, task validity, degree of practicality, and degree of effectiveness (Dick et al., 2015; Calhoun et al., 2021). Identifying reviewers for both the virtual simulation (scenario) and the simulator (technological tool) based on content or topic expertise (Dick et al., 2015); process, action, or task expertise; usability (UX) expertise (Krug, 2014); practical, real-world situation expertise; media or technological (hardware/software) expertise; and instructional design (ID)/learning experience (LX) design expertise might be beneficial in the review cycles. Designers should also consider whether internal or external stakeholders should be incorporated into the review.
Three experts, as recommended by Mafumiko (2007), may or may not cover the range of knowledge and skills needed to provide feedback on the aforementioned items. Documenting the type of expertise involved in each review and the number of experts is valuable for increased transparency and for better identifying the function and feedback of experts (Sahasrabudhe, Murthy & Iyer, 2012). Similarly, conducting reviews with end-users is a popular practice in DBR studies, and the number of end-users within each or specific review cycles may depend on the project scope and access to participants (Calhoun et al., 2021; McKenney, 2001; Mafumiko, 2007; Plomp & Nieveen, 2007). Designers should consider inviting a small group of end-user participants to try out portions of or the entire simulation (Tessmer, 2013; Plomp & Nieveen, 2007). If end-users are not available, designers might leverage current employees, previous learners, or managers; the number of participants in the review cycles should be detailed (Sahasrabudhe, Murthy & Iyer, 2012) and shared with stakeholders.
Review delivery strategies. Designers should consider how review or evaluation cycles can increase the quality of a prototype (Mafumiko, 2006; Sahasrabudhe, Murthy & Iyer, 2012). These cycles might be conducted asynchronously (e.g., surveys, questionnaires, diaries, etc.) or synchronously (e.g., live reviews, interviews, observations, etc.), and individually or in groups, depending on the nature and scope of the simulation and simulator, the availability of experts and end-users, and stakeholder preferences (Plomp & Nieveen, 2007). Designers should invite experts purposefully and determine the number of expert participants needed in relation to the number of desired review cycles (Tessmer, 2013; Plomp & Nieveen, 2007).
Naturally, after each review cycle, designers should iterate and make changes, where appropriate and relevant, to the prototype. After the desired number of review sessions and iterations have been completed, designers should prepare the prototype for launch (Calhoun, 2021; Tessmer, 2013). This might include working with stakeholders, training teams, managers, etc. to ensure the prototype is ready to be implemented within a workforce. Designers should also consider ways to obtain data to best evaluate the prototype, whether through surveys, focus groups, or the like. Evaluation documents and materials should be finalized before the implementation of the simulation/simulator, so data can be captured as soon as possible.
The final phase, Phase 3, of the conceptual framework focuses on the review and reflection of learner data, the selection of key design principles from the design and development process, and the decisions on whether to further iterate the simulation and/or simulator.
Reflection, post-intervention implementation, invites designers to engage in critical research and data analysis (Reeves, 2000) by selecting an evaluation model (Calhoun, 2021). Designers should take advantage of data collection after the intervention is implemented to explore how the intervention worked with a larger number of participants. Quantitative and/or qualitative data (e.g., formative/summative assessments, final exams, satisfaction surveys, pre/post-tests, etc.) might be collected and analyzed several weeks or months after the end-user experience, depending on the designer’s selected evaluation methodology. There are a number of employee performance and learning evaluation models used across organizations and industries. The Kirkpatrick evaluation model is, arguably, one of the more popular models within organizations (Peck, 2019; Calhoun, 2021). Designers should select an evaluation model that allows them to measure or assess the organizational goals and simulation learning objectives. After gathering, analyzing, and reflecting on the feedback and lessons learned, designers should create a report that includes key evaluation metrics or details selected by the designer to help stakeholders identify the value of the intervention. In some instances, organizations require data to justify an intervention’s return on investment (Leone, 2020).
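As an illustration only, the sketch below organizes post-implementation data collection around the four commonly cited Kirkpatrick levels (reaction, learning, behavior, results); the instruments listed and the helper function are hypothetical and should be adapted to whichever evaluation model the designer selects.

```python
# Hypothetical evaluation record loosely organized around the four Kirkpatrick levels.
evaluation_report = {
    "reaction": {"instrument": "satisfaction survey", "result": None},
    "learning": {"instrument": "pre/post-test", "result": None},
    "behavior": {"instrument": "manager observation after 8 weeks", "result": None},
    "results": {"instrument": "organizational goal metric", "result": None},
}

def outstanding_measures(report: dict) -> list:
    """List evaluation levels that still lack collected data before reporting to stakeholders."""
    return [level for level, entry in report.items() if entry["result"] is None]
```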
Making a theoretical contribution is a fundamental part of several DBR models. Examples of theoretical contributions include the creation of design guidelines (Reeves, 2000), DBR models (Wademan, 2007), design principles (McKenney, 2001), or a final project with empirical data (Mafumiko, 2007). If designers choose not to share a theoretical contribution externally, they are encouraged to share their findings internally. One way to do so is to create a report that illustrates one or more theoretical contributions and share it with relevant teams or individuals within the organization. Sharing findings, lessons learned, or strategies for the design and development of a simulation/simulator can benefit individuals internally or externally.
Based on the feedback received, further simulation/simulator iterations may or may not be necessary. Designers should consider the type of feedback received (i.e., extensive, moderate, or small revisions suggested) and cross-reference project resources and timelines, as well as the severity and urgency of feedback integration. Other types of iterations at this stage could also be related to language translation, globalization, or prototype expansion. Designers should weigh available resources and the severity of the feedback to determine whether additional simulation/simulator iterations are immediately required or should be planned for the future.
Simulation-based learning can be a valuable tool for learners (Whitworth et al., 2018), and with a lack of literature illustrating design and development processes, this article presents a framework that designers can engage with to aid the design and development of a simulation/simulator. While the design and development process is inevitably messy and not always linear, there are several strategies for using the framework that deserve emphasis. First, designers are strongly encouraged to conduct a front-end analysis and end-user analysis as the first step in using the framework. Second, after completing the initial analyses, the elements within each phase of the framework need not be followed in a strictly linear fashion. For example, while engaging in a literature review might benefit designers in their initial solutioning work, as outlined in Phase 1, designers might find value in engaging in a literature review throughout each phase of the framework, rather than exclusively during Phase 1. Third, conducting reviews is critical to each phase of the framework. Although there are numerous review cycles, engaging in multiple reviews within each phase is crucial for shaping the final simulation/simulator intervention. Simulation/simulator reviews will likely enhance an intervention, and designers should plan review sessions ahead of time to ensure reviewers are not caught off-guard with participation requests; planning for reviews ahead of time and gaining review buy-in can help designers move through the review process with agility and ease. Overall, designing a simulation/simulator is not an easy task, but systematically approaching the design and development process can help designers break down a seemingly complex and time-consuming endeavor into manageable and meaningful segments.