EdTech Archives: Proceedings of the Learning Engineering Research Network Convening (LERN 2026)

Socio-Emotional Learning in AI K-12 Guidance and Policy Documents: A Gap Analysis

Emmanuel Adeloju, Lindsey McCaleb, Rebekah Jongewaard, Nicole Oster, & Punya Mishra

Abstract

As generative AI becomes increasingly integrated into K–12 classrooms, it poses distinct socio-emotional risks that existing policies inadequately address. This study analyzed 36 institutional, state, and international AI guidance documents using the CASEL framework and five inductively generated SEL-informed categories: Anthropomorphization & Mental Model, Emotional Attachment & Parasocial Relationships, Engineered Manipulation & Supernormal Stimuli, Social Isolation & Relationship Displacement, and Developmental Vulnerability. Content analysis revealed uneven attention across categories: Developmental Vulnerability and Engineered Manipulation dominated, each appearing in 88% of documents, while Emotional Attachment received minimal coverage (8%). The documents highlighted risks including diminished student agency, algorithmic bias, erosion of peer and teacher relationships, and the potential for parasocial engagement with AI. These findings underscore the need for policies that scaffold critical evaluation, maintain human oversight, promote relational and social-emotional development, and mitigate exploitation of developmental vulnerabilities.

Introduction

As conversational AI advances, interactions with it increasingly blur the human-machine boundary. In a cross-sectional study comparing AI chatbot responses with physician responses to patient health questions, chatbot responses were rated as more empathetic and of higher quality than those from physicians (Ayers et al., 2023). People also often attribute human-like qualities and emotions to AI (Shevlin, 2024; Mishra & Oster, 2025), a tendency that can shape emotional responses and decision-making, especially among adolescents, whose cognitive and socioemotional capacities are still developing (Dumontheil, 2014; Larson et al., 2002). These emerging capabilities of GenAI raise concerns about how AI-mediated interactions might influence students' emotional regulation, relationship skills, and social development. Yet, while most K-12 schools now embrace the use of AI, limited attention has been paid to the socioemotional risks that accompany students' engagement with GenAI. Educational policies must therefore address AI's relational and psychological dimensions, as well as the implications of its use, and must emphasize "Supporting Human Processes," which centers human-in-the-loop decision-making (Baker et al., 2022).

This study examines how K-12 education policies and guidance documents address the socioemotional implications of student-AI interaction. Drawing on the Collaborative for Academic, Social, and Emotional Learning (CASEL, 2020) framework, which identifies five competencies (self-awareness, self-management, social awareness, relationship skills, and responsible decision-making), we analyzed 36 institutional and organizational AI policy documents. Synthesizing across prior work (Bakir & McStay, 2025; Epley et al., 2007; Mishra, 2024, 2025a, 2025b), we inductively generated five thematic areas for evaluating the policy and guidance documents: (1) Anthropomorphization & Mental Model, where students attribute human qualities and consciousness to AI; (2) Emotional Attachment & Parasocial Relationships, covering one-sided relationships, emotional bonds, and treating AI as a companion; (3) Engineered Manipulation & Supernormal Stimuli, deliberate design that exploits human psychology through exaggerated or artificial versions of natural cues, which trigger evolved instincts more powerfully than the stimuli those instincts evolved to respond to; (4) Social Isolation & Relationship Displacement, the erosion of social skills and reduced human connection; and (5) Developmental Vulnerability, encompassing automatic cognitive biases and age-specific social-emotional needs. Our research questions are: (1) How are education policies addressing the socio-emotional risks of GenAI in K-12 settings? (2) What socio-emotional elements should K-12 AI policies include to mitigate these risks?

Methodology

This study employed qualitative content analysis (Mayring, 2000) to examine how social and emotional learning (SEL) considerations are articulated in K-12 Artificial Intelligence (AI) guidance documents. Qualitative content analysis was selected to enable the structured interpretation of the document texts while remaining sensitive to both explicit statements and implicit meanings related to SEL.

Search Strategy

Document identification began with the AI for Education State AI Guidance webpage, which yielded 34 state-level guidance documents. To broaden coverage, particularly for international and organizational perspectives, additional documents were identified through LinkedIn posts, news articles, and direct searches of state and organizational websites, resulting in 2 country-level and 11 organization-level documents. In total, 47 records were identified. One duplicate document was removed prior to screening, leaving 46 documents for review. These records were screened using predefined inclusion and exclusion criteria, resulting in the exclusion of 10 documents. The final analytic sample consisted of 36 guidance documents, all of which were included in the qualitative content analysis. Throughout the process, documents were cataloged in a spreadsheet that evolved from a reference repository into an analytic tool, incorporating fields for metaphors, skill emphases beyond foundational principles, and the degree to which social-emotional learning (SEL) components were addressed.
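For transparency, the screening flow can be summarized as a simple tally. The following minimal sketch (in Python) reproduces the counts reported above; it is illustrative only and not part of the original analysis workflow.

    # Document screening flow described above (illustrative tally, not the authors' tooling).
    identified = {"state": 34, "country": 2, "organization": 11}

    total_identified = sum(identified.values())   # 47 records identified
    after_deduplication = total_identified - 1    # 46 after removing 1 duplicate
    included = after_deduplication - 10           # 36 after applying exclusion criteria

    print(f"identified={total_identified}, screened={after_deduplication}, included={included}")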

Inclusion and Exclusion Criteria

Documents were selected based on predefined inclusion and exclusion criteria. To be included in the analysis, documents were required to (a) be published in English, (b) focus on the use of artificial intelligence in K–12 educational contexts, and (c) originate from reputable governmental, educational, or organizational sources. Documents were excluded if they did not contain any explicit or implicit references to social and emotional learning (SEL) components.

Findings

The analytic approach used in this study involved five main SEL categories, each subdivided into conceptually distinct subcategories. Anthropomorphization & Mental Model captures misunderstandings of AI's nonhuman nature, operationalized through Human-like Attribution, where students treat AI as possessing emotions, intentions, or moral agency. Emotional Attachment & Parasocial Relationships reflects affective misalignment, represented by One-sided Attachment, in which learners form emotional bonds with AI despite its lack of reciprocity. Engineered Manipulation & Supernormal Stimuli addresses psychologically exploitative design and includes Persuasive Feature Response, referring to AI features that shape behavior, blur privacy boundaries, or influence thinking; Reduced Self-Regulation, indicating difficulties managing attention or impulses in AI-mediated environments; and Targeted Vulnerability Exploitation, where AI leverages developmental limits such as novelty-seeking or difficulty distinguishing fact from fiction. Social Isolation & Relationship Displacement captures erosion of human social engagement through Reduced Peer Interaction, marked by diminished peer-to-peer engagement; Simplified Social Practice, reflecting flattened or conformist social interaction; and Support Network Erosion, describing weakened help-seeking relationships with teachers, caregivers, or mentors. Finally, Developmental Vulnerability encompasses age-related susceptibilities, including Concurrent SEL Strain, where multiple SEL competencies are challenged simultaneously; Skill-Gap Exploitation & Overestimation of Capability, involving misplaced trust in AI accuracy, neutrality, or judgment; and Diminished Agency, characterized by deference to AI and reduced student initiative or decision-making. These subcategories were inductively generated through iterative analysis of the documents during the coding process.

Content analysis of the 36 policy and guidance documents revealed substantial variation in how SEL-related concerns about generative AI were addressed. Two main categories dominated the corpus: Developmental Vulnerability and Engineered Manipulation & Supernormal Stimuli each appeared in 88% of documents (n=32). In contrast, Emotional Attachment & Parasocial Relationships was notably underrepresented, appearing in only 8% of documents (n=3). Social Isolation & Relationship Displacement appeared in 41% of documents (n=15), while Anthropomorphization & Mental Model was present in 36% (n=13).

Within-category patterns revealed concentrated attention on specific concerns. For Developmental Vulnerability, Diminished Agency was most prevalent (63%, n=23), followed by Skill-Gap Exploitation & Overestimation of Capability (47%, n=17), while Concurrent SEL Strain appeared in only 13% of documents (n=5). The Engineered Manipulation & Supernormal Stimuli category showed even greater concentration: Targeted Vulnerability Exploitation dominated at 83% (n=30), while Reduced Self-Regulation appeared in only 2% of documents (n=1), the lowest frequency of any subcategory. Social Isolation & Relationship Displacement subcategories showed more balanced but modest coverage: Support Network Erosion (22%, n=8), Simplified Social Practice (16%, n=6), and Reduced Peer Interaction (13%, n=5).
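To make tabulations of this kind concrete, the sketch below shows how document-level coverage percentages could be computed from a coding spreadsheet exported to CSV, with one row per document and one binary (0/1) column per subcategory. The file name, column names, and category-to-column mapping are hypothetical placeholders, not the authors' actual coding instrument.

    # Hypothetical sketch: tally category and subcategory coverage across coded documents.
    # Assumes a CSV with one row per guidance document and 0/1 columns per subcategory.
    import csv

    SUBCATEGORIES = {
        "Developmental Vulnerability": [
            "diminished_agency", "concurrent_sel_strain", "skill_gap_overestimation"],
        "Engineered Manipulation & Supernormal Stimuli": [
            "persuasive_feature_response", "reduced_self_regulation",
            "targeted_vulnerability_exploitation"],
        "Social Isolation & Relationship Displacement": [
            "reduced_peer_interaction", "simplified_social_practice",
            "support_network_erosion"],
        "Emotional Attachment & Parasocial Relationships": ["one_sided_attachment"],
        "Anthropomorphization & Mental Model": ["human_like_attribution"],
    }

    def coverage(path: str) -> None:
        with open(path, newline="", encoding="utf-8") as f:
            rows = list(csv.DictReader(f))
        total = len(rows)  # 36 documents in the final analytic sample
        for category, subs in SUBCATEGORIES.items():
            # A document counts toward a category if any of its subcategories is coded 1.
            cat_n = sum(any(int(row[s]) for s in subs) for row in rows)
            print(f"{category}: {cat_n}/{total} ({100 * cat_n / total:.0f}%)")
            for s in subs:
                sub_n = sum(int(row[s]) for row in rows)
                print(f"  {s}: {sub_n}/{total} ({100 * sub_n / total:.0f}%)")

    coverage("sel_coding.csv")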

Anthropomorphization & Mental Model

Across the 13 documents addressing Anthropomorphization & Mental Model, guidance consistently foregrounded the need to clarify AI's non-human nature, anticipate student misconceptions, and adopt preventive communication practices. The documents explicitly stated that AI lacks core human qualities, emphasizing that it has no “consciousness, emotions, and self-awareness” (Delaware Department of Education) and that AI systems are “not artificial brains or sentient life forms with human characteristics like free will, self-awareness, and emotions” (Oregon Department of Education), operating instead “without comprehension, awareness, or intent” (OECD, 2025) and as systems “extremely good at predicting the logical sequence of words” but “still not ‘sentient’” (Connecticut Commission for Educational Technology). At the same time, the documents acknowledged risks of anthropomorphic misconceptions, including evidence that “students believe AI may have a real person inside it” (Rhode Island Department of Elementary and Secondary Education) and concerns that AI’s ability to “mimic human patterns of communication” (Arizona Institute for Education and the Economy) and simulate “human companionship” (Connecticut Commission for Educational Technology) may blur relational boundaries. In response, several sources emphasized intentional communication as a safeguard, stressing that students must be “clearly informed that it is a chatbot and not a human” (North Carolina Department of Public Instruction) and that an AI system should signal that “its social interaction is simulated and that it has no capacities of feeling or empathy” (European Union, 2022), reinforcing accurate mental models that avoid attributing human feelings or understanding to AI.

Developmental Vulnerability

In the 32 documents addressing Developmental Vulnerability, guidance concentrated on three inductively generated subcategories, Diminished Agency, Concurrent SEL Strain, and Skill-Gap Exploitation & Overestimation of Capability, with uneven depth. Diminished Agency was most prominent, with documents repeatedly warning that “overreliance on AI models can lead to undercutting the learning process and abandoning human discretion and oversight” (Delaware Department of Education) and that “excessive dependence on AI tools may hinder the development of critical thinking, problem-solving, and independent learning skills” (Rhode Island Department of Elementary and Secondary Education), prompting strong emphasis on maintaining human control, providing opt-out mechanisms, and positioning learners as “critical consumers” of AI (Utah State Board of Education). Concurrent SEL Strain appeared more sparingly but framed AI risks as compounding across competencies, linking metacognitive awareness of AI’s influence on “behaviors, thoughts, and learning processes” (OECD, 2025) with broader concerns about well-being, safety, and harms such as AI-enabled bullying (Delaware Department of Education). Skill-Gap Exploitation & Overestimation of Capability highlighted the developmental risks of treating AI as authoritative, cautioning that it is “NOT: A replacement for educators… [or] a perfect or self-correcting system” (Massachusetts Department of Elementary and Secondary Education), that outputs may be “confident sounding but factually incorrect” (Missouri Department of Elementary and Secondary Education), that AI “cannot comprehend nuance or contextual information unless explicitly told in the prompt” (Missouri Department of Elementary and Secondary Education), and that younger students are “not cognitively ready to completely understand AI or critically evaluate content” (North Carolina Department of Public Instruction). Together, these findings depict developmental vulnerability as centered on protecting agency, recognizing overlapping SEL pressures, and countering inflated assumptions about AI capability through human oversight and critical evaluation.

Emotional Attachment & Parasocial Relationships

Addressed in only three documents, Emotional Attachment & Parasocial Relationships received limited but pointed attention, centering on the relational risks posed by AI systems that simulate human interaction without the capacity for reciprocity. These documents cautioned against the rise of AI companion-like use, noting that “the trajectory of digital ‘companions’ is steep, with people of all ages already engaged in chatting with AI-powered bots that simulate behaviors of human companionship” and warning that such interactions “may further isolate young people and negatively impact emotional well-being” (Connecticut Commission for Educational Technology). At the same time, the guidance emphasized AI’s fundamental relational limits, stressing that “AI is limited in the ability to read emotional cues, understand cultural contexts fully, or establish genuine human connections with students” (Colorado Education Initiative) and that it “cannot truly empathize with human emotions” (North Dakota Department of Public Instruction). Together, these sources frame parasocial attachment not as a widespread phenomenon but as a foreseeable risk when simulated companionship obscures AI’s inability to engage in authentic emotional exchange.

Engineered Manipulation & Supernormal Stimuli

Engineered Manipulation & Supernormal Stimuli was widely addressed, with 32 documents noting AI’s subtle influence through persuasive design, hidden algorithms, and personalized recommendations. Sources emphasized that AI “works in ways that are usually not visible or easily understood by users” (European Union, 2022) and that it may “collect your personal information without you knowing it” (Washington Office of Superintendent of Public Instruction). Under Persuasive Feature Response, documents warned that such features can steer attention and reinforce biases, encouraging learners to “identify bias” in AI outputs (Wisconsin Department of Public Instruction) and to recognize that AI “will inherently reflect societal biases in the data” (Utah State Board of Education; West Virginia Department of Education). While Reduced Self-Regulation was only indirectly referenced, Targeted Vulnerability Exploitation was extensively discussed, highlighting risks such as difficulty distinguishing fact from fiction, amplification of inequities, and exploitation of specific student vulnerabilities. The documents cautioned that AI “can result in new forms of inequalities or discrimination and exacerbate existing ones” (European Union, 2022) and that “these tools can easily behaviorally profile people such that individuals can be tracked through their online behavior without ever revealing their identity” (Rhode Island Department of Elementary and Secondary Education). Together, these sources frame AI as a powerful but often invisible actor, requiring critical reflection, oversight, and inclusive design to mitigate harm.

Social Isolation & Relationship Displacement

Social Isolation & Relationship Displacement was widely noted as a risk of AI use, with 15 documents highlighting how technology can reduce authentic human connection, narrow social perspectives, and erode critical support networks. Reduced Peer Interaction was linked to diminished interpersonal skill development, with sources emphasizing that AI “cannot stand in for peer to peer exchange that is key to the development of interpersonal competencies” (Colorado Education Initiative) and that its use “may further isolate young people and negatively impact emotional well-being” (Connecticut Commission for Educational Technology). Simplified Social Practice was similarly cautioned against, as AI may foster “an artificial conformity of perspectives and creating impediments to understanding the real world” (InnovateOhio) and “hinder constructive dialogue and understanding between students” (California Department of Education). Support Network Erosion was also highlighted, with warnings that AI can weaken teacher-student relationships and depersonalize learning, that “over-reliance on AI-generated content may lead to a false sense of understanding…less of a connection between the student and the teacher” (Rhode Island Department of Elementary and Secondary Education), and that “AI can result in new forms of inequalities or discrimination and exacerbate existing ones” (European Union, 2022). Collectively, these sources frame AI as a force that can disrupt relational development, social-emotional well-being, and equitable educational experiences unless carefully monitored and balanced with authentic human interaction.

Discussion

The findings reveal that SEL-related guidance on generative AI emphasizes both the cognitive and relational dimensions of student risk, with attention unevenly distributed across categories. Developmental Vulnerability and Engineered Manipulation & Supernormal Stimuli dominated the corpus, reflecting widespread concern about learners’ diminished agency, susceptibility to biased outputs, and exploitation of developmental limitations. Furthermore, the documents consistently cautioned that overreliance on AI “may hinder the development of critical thinking, problem-solving, and independent learning skills” (Rhode Island Department of Elementary and Secondary Education) and that AI will reflect societal biases in the data (Utah State Board of Education; West Virginia Department of Education). While Emotional Attachment & Parasocial Relationships received limited attention, sources highlighted that AI interactions “may further isolate young people and negatively impact emotional well-being” (Connecticut Commission for Educational Technology) and emphasized that AI “cannot truly empathize with human emotions” (North Dakota Department of Public Instruction). Together, these findings suggest that policymakers recognize AI’s potential to influence learning, thinking, and social-emotional development, but coverage is uneven and concentrated on certain high-profile risks.

Findings for Social Isolation & Relationship Displacement further highlight the relational consequences of AI adoption, showing how technology can erode peer engagement, flatten social practice, and weaken critical support networks. The documents emphasized that AI “cannot stand in for peer to peer exchange that is key to the development of interpersonal competencies” (Colorado Education Initiative) and warned that “over-reliance on AI-generated content may lead to a false sense of understanding…less of a connection between the student and the teacher” (Rhode Island Department of Elementary and Secondary Education). Risks of conformity and narrowed perspectives were also mentioned, with cautions that AI may foster “an artificial conformity of perspectives and creating impediments to understanding the real world” (InnovateOhio) and “hinder constructive dialogue and understanding between students” (California Department of Education). These patterns indicate that, alongside cognitive and developmental concerns, policymakers are increasingly attentive to how AI can disrupt relational and equity-oriented aspects of education, emphasizing the need for oversight, human-centered engagement, and inclusive design to safeguard both social-emotional well-being and equitable learning experiences.

Future Directions

To mitigate the socio-emotional risks of generative AI in K-12 settings, future policy and guidance documents should explicitly incorporate strategies aligned with the five SEL-informed categories identified in this study. For Anthropomorphization & Mental Model, policies should ensure students clearly understand that AI does not possess consciousness, emotions, or intent and that its social interaction is simulated. For Emotional Attachment & Parasocial Relationships, guidance should promote structured, human-mediated engagement, such as peer collaboration, teacher-facilitated discussions, and reflection exercises, to prevent students from forming one-sided emotional bonds with AI.

Regarding Engineered Manipulation & Supernormal Stimuli, policies should embed transparency and bias-awareness measures, requiring AI outputs to include prompts or disclaimers that highlight potential algorithmic influence, and supporting metacognitive activities that help learners critically evaluate AI-generated content and recommendations. To address Social Isolation & Relationship Displacement, guidance should maintain human connection through teacher check-ins, family involvement, and opportunities for collaborative learning, reducing over-reliance on AI for social engagement. Finally, for Developmental Vulnerability, policies should scaffold student agency, support decision-making, and emphasize age-appropriate critical evaluation, ensuring that learners can recognize AI limitations, avoid overestimation of capability, and manage concurrent social-emotional demands effectively.

Collectively, embedding these socio-emotional elements in K-12 AI policies offers actionable pathways to safeguard students’ learning, relational development, and well-being while enabling responsible and informed engagement with AI systems.

References

  1. Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Kelley, J. B., ... & Smith, D. M. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589-596.
  2. Baker, R. S., Boser, U., & Snow, E. L. (2022). Learning engineering: A view on where the field is at, where it’s going, and the research needed. Technology, Mind, and Behavior. https://doi.org/10.1037/tmb0000058
  3. Bakir, V., & McStay, A. (2025). Move fast and break people? Ethics, companion apps, and the case of Character.ai. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5159928
  4. Collaborative for Academic, Social, and Emotional Learning (CASEL). (2020). What is the CASEL framework? https://casel.org/fundamentals-of-sel/what-is-the-casel-framework
  5. Dumontheil, I. (2014). Development of abstract thinking during childhood and adolescence: The role of rostrolateral prefrontal cortex. Developmental Cognitive Neuroscience, 10, 57-76.
  6. Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864-886.
  7. European Commission: Directorate-General for Education, Youth, Sport and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756
  8. Larson, R. W., Moneta, G., Richards, M. H., & Wilson, S. (2002). Continuity, stability, and change in daily emotional experience across adolescence. Child Development, 73(4), 1151-1165.
  9. Mayring, P. (2021). Qualitative content analysis: A step-by-step guide. Sage.
  10. Mishra, P. (2024, December). The ELIZA Effect-ion. Punya Mishra. https://punyamishra.com/2024/12/01/the-eliza-effect-ion/
  11. Mishra, P. (2025a, February). The attribution problem: Why we can’t stop seeing ourselves in AI. Punya Mishra. https://punyamishra.com/2025/02/11/the-attribution-problem-why-we-cant-stop-seeing-ourselves-in-ai/
  12. Mishra, P. (2025b, May). Engineered for attachment: The hidden psychology of AI companions. Punya Mishra. https://punyamishra.com/2025/05/02/engineered-for-attachment-the-hidden-psychology-of-ai-companions/
  13. Mishra, P., & Oster, N. (2025, March). Human or AI? Understanding the learning implications of anthropomorphized generative AI. In Society for Information Technology & Teacher Education International Conference (pp. 502-507). Association for the Advancement of Computing in Education (AACE).
  14. OECD. (2025). Empowering learners for the age of AI: An AI literacy framework for primary and secondary education (Review draft). OECD, Paris.
  15. Shevlin, H. (2024). All too human? Identifying and mitigating ethical risks of Social AI.

Appendix

All US state documents analyzed can be found here

European Commission: Directorate-General for Education, Youth, Sport and Culture, Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators, Publications Office of the European Union, 2022, https://data.europa.eu/doi/10.2766/153756