EdTech Archives: Proceedings of the Learning Engineering Research Network Convening (LERN 2026)

The Difficult Conversations Bot: Findings on Fostering Empathy and Reflective Communication Among Faculty and Staff

Catheryn Reardon & James Dunnigan

Abstract

This paper presents final findings from the design and evaluation of the Difficult Conversations Bot, an AI-mediated role-play system developed to support empathetic communication among higher education faculty and staff. Grounded in Learning Engineering and informed by humanistic, social cognitive, and adult learning theories, the system was evaluated using a pre–post mixed-methods design. Quantitative survey results demonstrated consistent positive shifts in confidence, intentional perspective-taking, psychological safety, reflective practice, and perceptions of AI as a learning partner. Qualitative findings highlighted the value of practicing a difficult conversation in a psychologically safe rehearsal simulation with structured reflection. Together, these results suggest that AI-mediated role-play, when designed through a nested learning engineering cycle, can function as an adaptive, human-centered learning system for complex interpersonal skill development.

Keywords: Learning Engineering, AI-Mediated Role-Play, Human-Centered AI.

Introduction  

Faculty, staff, and K–12 teachers routinely navigate emotionally complex interpersonal conversations with students, colleagues, and, in K–12 contexts, parents. These conversations often include responding to student distress, managing peer conflict, addressing academic integrity concerns, and communicating expectations within layered organizational structures. These interactions are not peripheral to higher education work; they are central to student persistence, faculty effectiveness, staff well-being, and institutional climate. Yet professional development opportunities rarely provide structured, low-stakes environments in which educators can rehearse difficult conversations and reflect on their communicative choices before encountering them in real contexts.
                The absence of rehearsal opportunities is especially consequential given the increasing complexity of higher education roles. Faculty and staff are expected to manage emotionally charged conversations related to mental health, equity and inclusion, performance feedback, and institutional change, often under time pressure and with limited guidance. Traditional professional development formats, such as policy briefings or one-time workshops, tend to emphasize procedural compliance rather than the nuanced interpersonal judgment required in these difficult conversations.
                To address this gap in training, the Difficult Conversations Bot was developed as an AI-mediated role-play system designed to support empathy, reflective communication, and interpersonal awareness while rehearsing a difficult conversation. The project is grounded in Learning Engineering, which conceptualizes educational technologies as adaptive learning systems developed through iterative, data-informed cycles (Baker et al., 2022), and Arizona State University’s Principled Innovation framework, which emphasizes human-centered, ethically grounded design (Arizona State University, 2022). Rather than positioning AI as a productivity tool, this work frames AI-mediated role-play as a learning system that supports deliberate practice, meaningful interpretation, and reflection in emotionally complex professional contexts.

Framework and Design Process

The Difficult Conversations Bot was designed using a nested Learning Engineering cycle, embedding local design decisions and evaluation activities within a broader research and implementation effort (Craig et al., 2025; Totino & Kessler, 2024). This nested structure enabled the research team to iteratively refine the system while maintaining traceability between theoretical commitments, design choices, and learner data. Within Learning Engineering, effectiveness emerges not from isolated features but from alignment across theory, design, implementation, and evaluation.
                The system integrates multiple complementary theoretical perspectives. Rogers’ (1959) humanistic theory shaped the conversational tone and emphasis on empathy, unconditional positive regard, and authentic listening. Bandura’s (1986) social cognitive theory guided the emphasis on rehearsal, modeling, and reflective self-efficacy, positioning communication skill development as experiential and iterative rather than declarative. Guided by Mezirow’s (1991) transformative learning theory, structured reflection prompts were incorporated to support critical reflection and perspective shifts following each simulated interaction. Appreciative Inquiry (Cooperrider & Srivastva, 1987) framed strengths-based feedback that encouraged learners to identify effective strategies rather than focus solely on deficits. Bot training and implementation drew from the Reasoned Action Approach (Fishbein & Ajzen, 2010) and organizational change models such as ADKAR (Hiatt, 2006), shaping role-play structure to help learners practice intentional perspective-taking, action planning, and readiness for change in simulated interactions.
                Design followed an iterative design-build-test-refine process consistent with learning engineering practice. Early prototypes were reviewed internally to evaluate emotional realism, ethical alignment, and conversational coherence. Pilot testing with faculty and staff informed refinements to scenario framing, instructional guidance, pacing, and post-dialogue reflection prompts. Each iteration drew on learner feedback and survey data to support systematic refinement and transparent design decisions across the nested learning-engineering cycles.

Implementation and Methods

Following Institutional Review Board (IRB) approval, the Difficult Conversations Bot was tested using a pre–post mixed-methods design aligned with learning engineering evaluation practices. Participants were faculty and staff affiliated with a large public university who voluntarily engaged with the Difficult Conversations Bot as part of a research study on AI-mediated professional learning.
        Participants completed a pre-survey assessing baseline experience with generative AI, confidence in navigating difficult conversations, and expectations for AI-supported learning. This baseline data established participants’ starting conditions and helped distinguish learning-related change from novelty effects or lack of prior exposure. Participants then engaged asynchronously with one or more AI-mediated role-play scenarios via a web-based interface.
        Scenarios were designed to reflect common professional challenges in higher education, such as responding to a distressed student, addressing ongoing performance concerns with a colleague, or navigating supervisory expectations. The AI system generated contextually adaptive responses while maintaining emotional tone and conversational continuity. After completing the role-play, participants completed a post-survey consisting of 1–5 Likert-scale items and several open-ended reflection prompts.
        Quantitative measures assessed confidence, intentional perspective-taking, psychological safety, reflective practice, and perceptions of AI as a learning partner. Survey constructs were grounded in the principles of Principled Innovation, emphasizing human-centered design, ethical reflection, psychological safety, and responsible AI use. Items were designed to capture learners’ perceptions of reflective engagement, perspective shifting, and confidence in applying insights from the simulated interactions. Quantitative data were analyzed descriptively to examine pre–post shifts, while qualitative responses were analyzed using inductive thematic analysis to identify patterns relevant to learning processes and system design.
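The descriptive pre–post comparison described above can be sketched as follows. This is an illustrative example only: the dimension labels follow the paper, but the response data, variable names, and helper function are hypothetical and do not reproduce the study's actual dataset.

```python
# Illustrative sketch of a descriptive pre-post comparison on 1-5 Likert items.
# All response values below are hypothetical placeholders, not study data.

dimensions = [
    "Confidence",
    "Intentional perspective-taking",
    "Psychological safety",
    "Reflective practice",
    "AI as learning partner",
]

# One inner list per participant, one 1-5 rating per dimension.
pre_responses = [
    [3, 3, 4, 3, 2],
    [4, 3, 3, 3, 3],
    [3, 4, 3, 4, 2],
]
post_responses = [
    [4, 4, 5, 4, 4],
    [5, 4, 4, 4, 4],
    [4, 5, 4, 5, 3],
]

def dimension_means(responses):
    """Average rating per dimension across participants."""
    n = len(responses)
    return [sum(r[i] for r in responses) / n for i in range(len(dimensions))]

pre_means = dimension_means(pre_responses)
post_means = dimension_means(post_responses)

# Report the per-dimension shift (post minus pre).
for name, pre, post in zip(dimensions, pre_means, post_means):
    print(f"{name}: pre={pre:.2f}, post={post:.2f}, shift={post - pre:+.2f}")
```

With real survey exports, the same pattern (per-dimension means and their differences) underlies the bar comparisons shown in Figure 1.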

Findings and Insights

Pre-survey results indicated moderate to high baseline familiarity with generative AI and existing confidence in handling difficult conversations, alongside uncertainty regarding AI’s role in supporting relational and emotionally complex learning. A total of 20 participants completed the pre-survey, and many initially viewed AI primarily as an efficiency or information-support tool rather than as a medium for interpersonal skill development.
        Sixteen of the 20 pre-survey respondents completed the post-survey following engagement with the AI-mediated role-play; the remaining participants did not complete the full activity sequence. Post-survey results demonstrated consistent positive shifts across all measured dimensions, including confidence, intentional perspective-taking, psychological safety, reflective practice, and perceptions of AI as a learning partner. These shifts suggest learning gains attributable to the design of the role-play as a reflective learning system rather than novelty effects. Participants reported greater intentionality in language choice and increased awareness of how conversational framing influenced outcomes.
        Qualitative findings reinforced quantitative trends. Participants described the psychologically safe rehearsal environment as particularly valuable, noting that the ability to experiment without real-world consequences reduced anxiety and supported deeper reflection. Several participants highlighted moments of perspective shift in which they recognized assumptions or habits that might otherwise go unnoticed in real interactions.

Figure 1.

Pre–Post Improvement Across Learning Dimensions. This figure presents side-by-side comparisons of average pre-survey and post-survey ratings across five learning dimensions aligned with learning engineering principles. Ratings were collected using a 1–5 Likert scale (1 = strongly disagree, 5 = strongly agree). Pre-survey bars represent baseline participant perceptions prior to engagement with the AI-mediated role-play, while post-survey bars represent perceptions following the intervention. Across all dimensions, post-survey ratings were higher than pre-survey ratings, indicating consistent perceived improvement following engagement with the learning system.


Discussion and Implications

Interpreted through a learning engineering lens, these findings suggest that AI-mediated role-play can function as an adaptive, human-centered learning system for interpersonal skill development. The pre–post changes suggest that reflective simulation can meaningfully support learning even for experienced professionals, particularly when psychological safety, learner agency, and structured reflection are intentionally built into the experience.
                The nested learning engineering approach enabled alignment between design intent, implementation, and evaluation. Pre–post survey data provided actionable evidence to guide refinement, while qualitative insights illuminated learner-system interaction patterns relevant to future iterations. Ethical considerations emphasized by Principled Innovation, including psychological safety, transparency, and learner agency, remain central, particularly when deploying AI in emotionally sensitive learning contexts.
                From a practice perspective, AI-mediated role-play offers a scalable supplement to traditional professional development by providing low-risk opportunities for deliberate practice. Limitations include reliance on self-reported data and a single institutional context, underscoring the need for continued evaluation across settings.

Conclusion and Next Steps

This study demonstrates how learning-engineered AI-mediated role-play can augment professional learning by providing scalable, low-risk opportunities for reflective practice. Final pre–post findings suggest that the Difficult Conversations Bot supported increased confidence, intentionality, and reflective awareness in navigating difficult conversations.
                Looking ahead, future work will replicate and extend this evaluation with larger and more diverse participant samples across institutional contexts to explore broader applicability and longer-term impact in workplace settings. In parallel, the research team intends to pursue external funding to advance this work toward an avatar-based, multimodal learning system that leverages natural conversational models to increase social presence, emotional expressiveness, and perceived authenticity of interactions. This direction directly addresses learning engineering grand challenges related to scaling complex skill development, designing human-centered AI systems that support judgment and empathy, and creating learning environments that approximate real-world complexity without sacrificing psychological safety. By integrating embodied interaction with principled, data-informed design cycles, future iterations aim to push the boundaries of how learning engineering can support lifelike, ethical, and scalable practice for interpersonal competencies across education and professional domains.

References

  1. Arizona State University. (2022). Principled Innovation. https://pi.education.asu.edu/
  2. Baker, R. S., Boser, U., & Snow, E. L. (2022). Learning engineering: A view on where the field is at, where it’s going, and the research needed. Technology, Mind, and Behavior. https://doi.org/10.1037/tmb0000058
  3. Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Prentice-Hall.
  4. Cooperrider, D. L., & Srivastva, S. (1987). Appreciative inquiry in organizational life. Research in Organizational Change and Development, 1, 129–169.
  5. Craig, S. D., Avancha, K., Malhotra, P., C., J., Verma, V., LiKamWa, R., Gary, K., Spain, R., & Goldberg, B. (2025). Using a nested learning engineering methodology to develop a team dynamic measurement framework for a virtual training environment. In International Consortium for Innovation and Collaboration in Learning Engineering (ICICLE) 2024 Conference Proceedings: Solving for Complexity at Scale (pp. 115–132). https://doi.org/10.59668/2109.21735
  6. Fishbein, M., & Ajzen, I. (2010). Predicting and changing behavior: The reasoned action approach. Psychology Press.
  7. Hiatt, J. M. (2006). ADKAR: A model for change in business, government, and our community. Prosci.
  8. Mezirow, J. (1991). Transformative dimensions of adult learning. Jossey-Bass.
  9. Rogers, C. R. (1959). A theory of therapy, personality, and interpersonal relationships, as developed in the client-centered framework. In S. Koch (Ed.), Psychology: A study of a science (Vol. 3, pp. 184–256). McGraw-Hill.
  10. Totino, L., & Kessler, A. (2024). “Why did we do that?” A systematic approach to tracking decisions in the design and iteration of learning experiences. The Journal of Applied Instructional Design, 13(2). https://doi.org/10.59668/1269.15630