Julio Intriago-Izquierdo1 and Rahul Patel2
1 Teachers College, Columbia University; jei2116@tc.columbia.edu
2 Insights & Innovations LLC; rahul@innovateinsightfully.com
Abstract.
Keywords: Learning Engineering, AI Writing Coach, Personal Narrative, Student Voice, Feedback.
Many students, particularly in large urban systems, struggle to see themselves as writers and to receive the timely, individualized feedback that builds skill and motivation. In New York State, 54% of students fall short of grade-level expectations in ELA (NYSED, 2024). Evidence syntheses underscore feedback’s large effects on learning (Hattie, 2008). In addition, research on adolescent writing instruction emphasizes the importance of explicit strategy support and iterative practice, and scholarship on writing and identity highlights how writing can function as an act of self-positioning and voice (Graham & Perin, 2007; Ivanič, 1998).
Against this backdrop, we developed ConnectInk, a tool designed to expand access to feedback while protecting student voice. ConnectInk, developed by Insights & Innovations with the Center for the Professional Education of Teachers (CPET) at Teachers College, Columbia University, embeds a guardrailed, question-driven AI coach inside a genre-based narrative writing unit. Crucially, the AI never drafts text; it only offers prompts, genre-specific cues, and audience-awareness questions, preserving student authorship.
Guided by this theory of change, this short paper addresses three research questions (RQs):
RQ1: How do students perceive the drafting and revision experience when supported by an AI-powered tool that does not generate content for them?
RQ2: How does writing a personal narrative, supported by an AI-powered tool, shape students' self-awareness and sense of identity?
RQ3: To what extent do shared narratives influence peer understanding and audience empathy?
Learning Engineering lens. Learning Engineering (LE) emphasizes the systematic, human-centered, and evidence-driven design of learning experiences through iterative cycles that connect learning science, design decisions, and data. In this study, we treat the Spring 2025 pilot as a nested LE cycle inside ConnectInk’s larger product-and-partnership effort: the instructional unit and tool behaviors operationalize theory-informed design decisions, the classroom implementation tests feasibility under real constraints, and the mixed-methods data plan produces actionable evidence to refine both the learning experience and the tool (Baker et al., 2022; Goodell et al., 2023; Thai et al., 2022; Craig et al., 2025).
CPET led a pre/post mixed-methods study approved by the Teachers College IRB (Protocol “ConnectInk: Storytelling & Community,” #25-325; issued 04/15/2025). The intervention consisted of a six-lesson personal narrative unit typically implemented over 4–6 class sessions (not all classes completed all six lessons), integrated with ConnectInk as an in-class writing coach, and facilitated by CPET writing coaches. Across sessions, students followed a consistent instructional arc—brainstorm → draft → revise → share—with ConnectInk used at defined moments to prompt idea generation, support drafting momentum, and guide revision without generating student text.
Three Title I New York City high schools participated: Northside Charter High School (Brooklyn), Leaders High School (Brooklyn), and East Bronx Academy for the Future (Bronx). Two coaches implemented the unit with 354 students.
The intervention combined a genre-based narrative writing sequence with ConnectInk (https://connectink.org), a browser-based, guardrailed AI writing coach. Students used the tool through multiple entry points: (a) low-stakes creative prompts and mentor texts; (b) “Get Unstuck” nudges during drafting; (c) genre-based feedback during revision; (d) audience-aware prompts to support clarity and purpose; and (e) a review-for-publication checklist to prepare final drafts. Guardrails ensured the AI asked questions and offered cues but never produced student text.
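To make the guardrail concept concrete, the sketch below illustrates one way such non-generative coaching behavior could be enforced in software. It is a hypothetical illustration under our own assumptions, not ConnectInk's implementation; all names (e.g., COACH_SYSTEM_PROMPT, looks_like_draft_text, coach_reply) are ours.

# Hypothetical sketch (not ConnectInk's actual code) of a non-generative guardrail:
# the coach is instructed to respond only with questions and cues, and any reply
# that resembles drafted prose is replaced with a redirecting question.

COACH_SYSTEM_PROMPT = (
    "You are a writing coach for a personal-narrative unit. "
    "Ask genre-specific questions and offer audience-awareness cues. "
    "Never write, rewrite, or complete the student's text."
)

def looks_like_draft_text(reply: str) -> bool:
    """Crude heuristic: long declarative output with no questions is treated as drafting."""
    return "?" not in reply and len(reply.split()) > 40

def coach_reply(model_reply: str) -> str:
    """Pass through question-driven coaching; block anything resembling generated prose."""
    if looks_like_draft_text(model_reply):
        return ("I can't write that part for you. What moment do you want "
                "your reader to picture first, and why does it matter to you?")
    return model_reply

In practice, such checks would sit alongside prompt-level constraints and human monitoring; the point of the sketch is only that "asks questions, never drafts" can be treated as an enforceable design requirement rather than a usage guideline.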
Data sources included: (1) an 18‑item Student Perception Survey administered at baseline; (2) post‑lesson student reflections; (3) pre/post narrative writing samples; and (4) coach observation logs. In total, 217 narratives were drafted; 188 pre‑writing samples and 29 post‑project samples were collected for text analysis.
Analysis. Survey items were summarized descriptively. Qualitative data (student reflections, coach logs, and writing samples) were thematically analyzed, and CPET literacy experts reviewed exemplar paired samples to identify changes in narrative craft and structure. From a Learning Engineering perspective, this cycle used an explicit measurement plan spanning perception (survey), process (reflections/logs), and product (writing samples) to connect design intent to observed learner experience and outcomes and to specify iteration targets for the next cycle (Totino & Kessler, 2024).
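As a minimal illustration of the descriptive survey summary (not the study's actual analysis scripts; the file name, column names, and 1-5 Likert scoring are assumptions), per-item agreement could be computed as follows:

import pandas as pd

# Hypothetical sketch: one row per student, one column per Likert item scored 1-5.
responses = pd.read_csv("baseline_survey.csv")  # hypothetical file
items = [c for c in responses.columns if c.startswith("item_")]

summary = pd.DataFrame({
    "mean": responses[items].mean(),
    "pct_agree": (responses[items] >= 4).mean() * 100,  # % selecting Agree/Strongly agree
    "n": responses[items].count(),
}).round(1)

print(summary)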
At baseline, 37% of students reported enjoying writing and 25% felt comfortable sharing work, while 68% valued receiving feedback. Prior AI use was emerging: 44% had used AI for writing, 36% for feedback or revision, and 36% believed AI could help them improve (24% disagreed). Reflections and coach logs suggest students most often used ConnectInk to get started and to revise early craft moves, supporting brainstorming, organization, and revision toward stronger hooks, clearer structure, and added detail. For example, one student wrote, “I used the AI tool to pick my prompt on what to write and also to see if my hook was good.” Coaches similarly noted that students “liked that the tool prompted them and helped them see what their writing did,” and multiple students emphasized they were glad “it didn’t generate things for them.”
Baseline identity indicators were modest: 37% felt confident in their writing and 35% saw themselves as good writers, though 63% agreed they were learning to write better. Reflections showed increased self-diagnosis (e.g., noticing needs in organization and elaboration) and surprise at being able to write more than expected. One student captured this shift directly: “I learned that i am not a bad writer when writing about something im interested in.” Coach logs echoed this pattern; in one case, a student was “particularly surprised by the feedback” because it surfaced an interpretation of their narrative they “hadn’t considered,” suggesting the tool may support metacognitive insight about voice and meaning beyond surface-level edits.
Students reported that writing helped them understand themselves (43%) and that reading peers’ work improved understanding (46%) and sometimes changed opinions (43%). Reflections indicate early signs of perspective-taking, including noticing classmates’ values through chosen objects and story details, as well as learning that “People have different points of view” and “How others see things differently than me.” Coaches observed partner sharing marked by “smiling and nodding… and eye contact that indicated focused listening,” while also noting hesitancy to share narratives with the full group, an important constraint on empathy-building via wider audience exposure.
Paired samples showed clear gains from brief, expository‑leaning responses to extended narratives with recognizable arcs, vivid hooks, dialogue, internal monologue, and reflective endings. CPET expert commentary highlighted shifts toward “show, don’t tell,” rhetorical questioning, and elaboration. Because the guardrails are non‑generative, all final texts were fully student‑authored.
Within a short, feasible arc, combining a guardrailed AI coach with genre-based pedagogy yielded promising movement in confidence, craft, and classroom connection. The findings align with evidence that writing improves when learners receive explicit strategy support, iterative practice, and actionable feedback (Graham & Perin, 2007; Hattie, 2008), and with scholarship emphasizing writing as a site of voice and identity work (Ivanič, 1998). Practically, the approach shifts AI from text generator to dialogue partner, preserving authorship and ethics while helping educators concentrate time on higher-order instruction, conferencing, and community-building in high-need settings (NYSED, 2024).
Limitations include end-of-year timing (many classes completed 4–5 of 6 lessons), partial post-intervention survey completion, and early-stage product maturity. Next steps include implementing earlier in the semester to complete all six lessons, tightening post-lesson reflections to capture clearer evidence of writing decisions (e.g., what prompt was used and what change was made), addressing high-friction usability issues that interrupt drafting/revision, and completing the planned mixed-methods analysis.
In this pilot, student reflections and coach logs provided high-signal evidence about where coaching prompts supported momentum and revision, and where access/workflow barriers constrained uptake, making iteration targets concrete. This work demonstrates a transferable nested Learning Engineering cycle for AI-supported writing instruction, in which teams: (1) translate learning goals and constraints (student voice, authorship, narrative craft) into explicit design requirements (non-generative guardrails; question-driven coaching); (2) embed the tool in a defined instructional sequence with roles for teachers and peers; (3) implement under authentic constraints; (4) instrument for evidence across perception, process, and product; and (5) use findings to specify iteration targets for the next cycle. Framing AI as a bounded dialogue partner, rather than an autocompletion engine, supports ethical authorship while producing actionable design knowledge for scaling voice-centered writing experiences.
We thank Dr. Roberta Lenger Kang, Dr. Cristina Compton, Dr. Kelsey Hammond, and Gregory Petershack of the Center for the Professional Education of Teachers (CPET) at Teachers College, Columbia University, for partnership and expert review; Joseph Martinez (Insights & Innovations) for software development; and the participating teachers and students at Northside Charter High School, Leaders High School, and East Bronx Academy for the Future for their contributions to implementation and data collection. This work was supported by The Wellness Classroom.