EdTech Archives: Proceedings of the Learning Engineering Research Network Convening (LERN 2026)

Epistemic Cognition and Uncertainty Navigation with a Domain-Specific AI Chatbot in STEM Education

Yiwen Li, Chengshuai Zhao, Garima Agrawal, Yuli Deng, Jongchan Park, Ying-Chih Chen, & Huan Liu

Abstract

Domain-specific AI chatbots are increasingly embedded in STEM courses, yet little is known about how students judge, interpret, and act on AI-generated feedback during learning. This study examines students’ epistemic cognition when interacting with Cyberbot, a domain-specific AI-powered chatbot integrated into a cloud computing course. Drawing on post-course semi-structured interviews with twelve students, we analyze how learners articulate epistemic aims, evaluate the correctness and reliability of AI feedback, and navigate uncertainty when deciding whether to revise or persist in their problem-solving approaches. Findings indicate that Cyberbot does not eliminate uncertainty for learners; instead, it reorganizes epistemic work by shifting responsibility toward verification, judgment, and decision-making. Students demonstrated calibrated trust, actively cross-checking AI responses with other sources and using the chatbot primarily for epistemic verification rather than open-ended exploration. These results highlight the importance of examining epistemic processes when evaluating AI-supported learning.

Introduction

AI-powered chatbots are increasingly embedded in STEM education to provide explanations, feedback, and on-demand support during problem-solving activities. In response to concerns about the limitations of generic large language models, recent research has emphasized the pedagogical potential of domain-specific AI chatbots that are explicitly aligned with curricular goals and disciplinary content. Empirical studies of such systems have reported positive effects on learning performance, situational interest, and student acceptance in STEM contexts, particularly when chatbots are customized to specific subject domains (Rücker & Becker-Genschow, 2025). Far less is known, however, about how students judge, interpret, and act on the feedback these systems provide, raising questions about the epistemic work that AI-supported learning demands of students.

These questions are especially salient in computing-related STEM domains such as cloud computing, where problem solving often involves ambiguous requirements, evolving system constraints, and multiple acceptable solutions. In such contexts, students frequently encounter uncertainty arising from incomplete understanding, conflicting information, or doubts about whether their solutions adequately meet task demands. Rather than viewing uncertainty as a barrier to learning, research has emphasized its role in prompting inquiry, reflection, and sensemaking (Manz, 2015; Berland et al., 2016).

To address this gap, the present study adopts an epistemic cognition perspective to examine how students engage with a domain-specific AI chatbot during moments of uncertainty. Epistemic cognition concerns how learners think about knowledge, evidence, and justification, including how they evaluate the correctness and trustworthiness of information and decide whether to revise or persist in their approaches (Chinn et al., 2011; Greene et al., 2014). This study investigates students’ epistemic cognition during interactions with Cyberbot, a domain-specific AI-powered chatbot embedded in a cloud computing course. Drawing on post-course interviews, we examine how students evaluate AI feedback, navigate uncertainty, and position the chatbot within their broader epistemic ecosystem of peers, instructors, and other resources.

Conceptual Framework

Epistemic cognition refers to how individuals think about knowledge, evidence, and justification, including their epistemic aims, ideals, and processes (Chinn et al., 2011). In learning contexts, epistemic cognition shapes how students determine what counts as a satisfactory explanation, how they evaluate information sources, and how they justify decisions during problem solving. Prior research has emphasized that epistemic cognition is situated and task-dependent, varying across domains and learning environments (Greene et al., 2014).

Uncertainty is a central condition under which epistemic cognition becomes visible. In STEM learning, uncertainty often arises from incomplete knowledge, ambiguous information, or competing explanations, prompting learners to engage in epistemic judgment and sensemaking (Chen et al., 2024). Rather than conceptualizing uncertainty as a negative outcome to be minimized, this study treats uncertainty as a productive condition that activates epistemic processes, such as evaluating evidence, seeking verification, and deciding whether to revise or persist in an approach (Berland et al., 2016).

In AI-supported learning contexts, epistemic cognition is particularly salient because students must determine how to interpret and use AI-generated feedback. Interacting with AI systems requires learners to judge the reliability, sufficiency, and relevance of responses, especially when feedback is incomplete or conflicts with prior knowledge. This framework allows us to examine how students enact epistemic cognition while navigating uncertainty during interactions with a domain-specific AI chatbot.

Context and Methods

This study was conducted in a semester-long cloud computing course at a large public university. As part of the course, students used Cyberbot, a domain-specific AI-powered chatbot designed to provide course-aligned explanations and clarification during project work and exam preparation. Unlike open-domain chatbots, Cyberbot was tailored to the course content and instructional goals, with the intent of supporting learning rather than providing direct solutions.

Post-course semi-structured interviews were conducted with twelve students. Interview questions focused on students’ epistemic aims when using Cyberbot, their expectations for AI feedback, how they evaluated the correctness and reliability of responses, and how they made decisions under uncertainty. Interview data were analyzed thematically to identify recurring patterns in students’ epistemic cognition and uncertainty navigation during interactions with Cyberbot.

Findings

Across interviews, students described using Cyberbot primarily as a tool for epistemic verification rather than open-ended exploration or solution generation. When encountering uncertainty—such as unclear task requirements or doubts about correctness—students turned to the chatbot to assess whether their current approach aligned with course expectations. Importantly, students did not treat Cyberbot as an authoritative source; instead, they demonstrated calibrated trust by actively evaluating the correctness and reliability of AI feedback based on consistency with lectures, course materials, and prior knowledge. When responses were vague or conflicted with other sources, students interpreted these moments as signals to seek verification elsewhere rather than accepting AI feedback uncritically. These practices indicate that interacting with Cyberbot required ongoing epistemic judgment.

Cyberbot also shaped how students navigated uncertainty when deciding whether to revise or persist in their problem-solving approaches. Rather than prompting wholesale strategy changes, AI feedback most often reinforced confidence in existing plans or supported incremental adjustments. Students weighed AI suggestions against task constraints and their own reasoning, highlighting uncertainty navigation as a deliberative epistemic process. At the same time, students consistently positioned Cyberbot as one epistemic resource within a broader learning ecosystem, integrating it with peers, instructors, and documentation. This positioning underscores that learning with a domain-specific AI chatbot involved distributed epistemic work, in which judgment and verification were coordinated across multiple sources rather than delegated to the AI alone.

Discussion

These findings suggest that domain-specific AI chatbots do not eliminate uncertainty for learners but instead reconfigure epistemic responsibility by foregrounding judgment, verification, and decision-making. Rather than passively accepting AI-generated feedback, students engaged in calibrated evaluation, using Cyberbot to assess alignment with task expectations while retaining responsibility for determining correctness and adequacy. Uncertainty functioned as a productive condition that prompted epistemic action, shaping when students chose to revise or persist and how they integrated AI feedback with other sources. Viewed through an epistemic cognition lens, AI-supported learning in this context involved distributed sensemaking across human and non-human resources, highlighting the importance of designing AI tools that support learners’ epistemic judgment rather than providing definitive answers.

Acknowledgments

This research was supported by the National Science Foundation under Grant Nos. 2335666 and 2404966. Any opinions, findings, and conclusions or recommendations expressed in this research are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References

  1. Berland, L. K., Schwarz, C. V., Krist, C., Kenyon, L., Lo, A. S., & Reiser, B. J. (2016). Epistemologies in practice: Making scientific practices meaningful for students. Journal of Research in Science Teaching, 53(7), 1082-1112.
  2. Chen, Y. C., Jordan, M., Park, J., & Starrett, E. (2024). Navigating student uncertainty for productive struggle: Establishing the importance for and distinguishing types, sources, and desirability of scientific uncertainties. Science Education, 108(4), 1099-1133.
  3. Chinn, C. A., Buckland, L. A., & Samarapungavan, A. (2011). Expanding the dimensions of epistemic cognition: Arguments from philosophy and psychology. Educational Psychologist, 46(3), 141-167.
  4. Greene, J. A., & Yu, S. B. (2014). Modeling and measuring epistemic cognition: A qualitative re-investigation. Contemporary Educational Psychology, 39(1), 12-28.
  5. Manz, E. (2015). Representing student argumentation as functionally emergent from scientific activity. Review of Educational Research, 85(4), 553-590.
  6. Rücker, C. R., & Becker-Genschow, S. (2025). Enhancing enthusiasm for STEM education with AI: Domain-specific chatbot as personalized learning assistant. Computers and Education Open, 100315.