EdTech Archives: Proceedings of the Learning Engineering Research Network Convening (LERN 2026)

Using AI to Bridge Technology Gaps in Higher Education

Abstract

Universities rely on enterprise technology platforms to deliver essential services, yet vendor-provided systems often impose rigid constraints that limit institutional responsiveness to pedagogical needs. This paper presents a learning engineering approach to bridging technology gaps through AI-augmented local solutions. Following the learning engineering (LE) process (Goodell et al., 2023), we addressed a structural limitation in our survey platform that prevented longitudinal analysis of student self-assessments. Through human-centered stakeholder analysis (Schatz et al., 2022; Thai et al., 2022), we identified requirements for visualizing individual student growth on Program Learning Objectives (PLOs) over time. We then leveraged large language models to develop a FERPA-compliant, locally run pipeline that transforms fragmented course-level survey data into longitudinal student trajectories. Our contribution is twofold. First, we demonstrate how institutions can create adaptive layers within existing ecosystems that address context-specific needs, acting as technology augmenters rather than passive consumers. Second, we provide practitioners with a framework guiding them through friction identification, feasibility assessment, process decomposition, and implementation criteria, including data governance, maintainability, and scalability. Implementation revealed three key affordances: (1) faculty gained visibility into individual learning trajectories previously obscured by course-level aggregates, (2) curricular gaps became evident through cross-course PLO progression mapping, and (3) accreditation documentation improved through evidence-based growth narratives. This work extends learning engineering practice by demonstrating how readily available AI tools can enable local innovation within complex institutional technology environments, offering a replicable approach for institutions facing similar constraints.


Pulkit Goyal

Arizona State University; puhlkit@gmail.com


Problem and Context

Universities depend on vendor-provided systems to deliver essential student services, yet these platforms often impose rigid, one-size-fits-all constraints. When limitations emerge in these platforms, institutions typically wait months or years for updates, leaving critical needs unmet. This challenge is particularly acute in program assessment, where faculty and administrators need longitudinal views of student development but existing survey platforms lack the capability to aggregate individual responses across multiple administrations.

At Arizona State University, we encountered this friction point with our institutionally-supported survey platform (QuestionPro). Faculty used end-of-course self-evaluations to track students' perceived growth on Program Learning Objectives (PLOs) across an academic program. However, neither QuestionPro nor its predecessor (Qualtrics) could display a student's longitudinal progression across all surveys—a fundamental requirement for understanding individual learning trajectories and evaluating curricular effectiveness. This structural limitation prevented evidence-based program improvement and complicated accreditation documentation.

This challenge illustrates a broader learning engineering concern: How can institutions close gaps in their technology ecosystems without waiting for external development cycles while ensuring solutions remain compliant, scalable, and grounded in pedagogical needs?

Innovation and Approach

Rather than accepting these limitations, we adopted a learning engineering framework (Goodell et al., 2023) to systematically address this challenge through an AI-augmented solution. Following the learning engineering process (Challenge, Create, Implement, Investigate), we developed a locally run data-processing pipeline that transformed fragmented survey data into actionable longitudinal insights.

Challenge Phase: Through stakeholder interviews with faculty and administrators, we identified the core need: visualizing individual student growth trajectories across PLOs throughout their program. We conducted a needs analysis following human-centered design principles (Schatz et al., 2022), documenting specific analytical requirements and constraints, including FERPA compliance and the need for offline operation.

Create Phase: We leveraged AI tools (specifically, ChatGPT) to develop a custom analytical pipeline with three components: (1) CSV exports extracted from QuestionPro, (2) AI-generated preprocessing scripts that merge and normalize the survey data, and (3) a search interface enabling faculty to query individual students and generate longitudinal visualizations. We documented each design decision, its evidence base, and anticipated outcomes to maintain transparency and support iteration.
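The sketch below illustrates the preprocessing component under simplifying assumptions: each end-of-course export is a CSV whose filename encodes course and term, responses appear in columns named student_id and PLO1 through PLOn, and ratings use a 1–5 scale. These names, file conventions, and the normalization are illustrative, not the exact structure of our QuestionPro exports.

```python
# Minimal sketch of the preprocessing step (hypothetical column and file names).
from pathlib import Path
import pandas as pd

def load_course_export(path: Path) -> pd.DataFrame:
    """Read one end-of-course survey export and reshape it to long format."""
    df = pd.read_csv(path)
    # In this sketch, course and term are encoded in the filename,
    # e.g. "LDT501_Fall2024.csv"; real exports may store them differently.
    course, term = path.stem.split("_", 1)
    plo_cols = [c for c in df.columns if c.upper().startswith("PLO")]
    long_df = df.melt(
        id_vars=["student_id"],
        value_vars=plo_cols,
        var_name="plo",
        value_name="rating",
    )
    long_df["course"] = course
    long_df["term"] = term
    return long_df

def build_longitudinal_table(export_dir: str) -> pd.DataFrame:
    """Merge every course export into one table keyed by student, course, term, and PLO."""
    frames = [load_course_export(p) for p in sorted(Path(export_dir).glob("*.csv"))]
    merged = pd.concat(frames, ignore_index=True)
    # Normalize an assumed 1-5 rating scale to 0-1 so surveys remain comparable.
    merged["rating"] = pd.to_numeric(merged["rating"], errors="coerce")
    merged["rating_norm"] = (merged["rating"] - 1) / 4
    return merged.dropna(subset=["rating"])

if __name__ == "__main__":
    table = build_longitudinal_table("questionpro_exports")
    table.to_csv("plo_longitudinal.csv", index=False)
```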

The innovation lies not in replacing vendor platforms but in creating an "adaptive layer" within the existing technology stack. This approach exemplifies learning engineering's human-centered focus: we designed from the perspective of faculty and administrators who needed these insights, not from what the technology vendor prioritized.

Implement Phase: All processing occurs locally, without internet connectivity, which keeps the workflow within FERPA and institutional data governance standards. The solution operates as a supplementary tool that faculty can use alongside the existing survey infrastructure.
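As an illustration of this offline operation, the following sketch reads the merged table produced by the preprocessing step and renders one student's trajectory, writing the figure to local disk. The column names and file paths continue the hypothetical conventions of the previous sketch; no network access is involved.

```python
# Minimal sketch of the local lookup and visualization step (hypothetical schema).
import pandas as pd
import matplotlib.pyplot as plt

def plot_student_trajectory(table: pd.DataFrame, student_id: str) -> None:
    """Plot one student's self-assessed ratings per PLO across the course sequence."""
    student = table[table["student_id"] == student_id]
    if student.empty:
        raise ValueError(f"No survey responses found for {student_id}")
    # Order courses by term; a fuller implementation would map term labels
    # (e.g. "Fall2024") to a proper chronological sort key.
    student = student.sort_values(["term", "course"])
    fig, ax = plt.subplots(figsize=(8, 4))
    for plo, group in student.groupby("plo"):
        ax.plot(group["course"], group["rating_norm"], marker="o", label=plo)
    ax.set_xlabel("Course (program sequence)")
    ax.set_ylabel("Normalized self-assessment")
    ax.set_title(f"PLO trajectory for student {student_id}")
    ax.legend(loc="best", fontsize="small")
    fig.tight_layout()
    fig.savefig(f"trajectory_{student_id}.png")  # written locally, never uploaded

if __name__ == "__main__":
    table = pd.read_csv("plo_longitudinal.csv", dtype={"student_id": str})
    plot_student_trajectory(table, student_id="S001")
```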

Early Insights and Expected Outcomes

Initial deployment revealed three key affordances:

1. Individual Learning Trajectories: Faculty can now visualize how each student's self-assessed competence evolves across courses, revealing patterns previously invisible in course-level aggregates.

2. Curricular Alignment: By mapping PLO progression across the program sequence, we identified courses where students showed minimal growth on relevant outcomes, signaling potential redesign needs (a sketch of this analysis follows the list).

3. Accreditation Evidence: The compiled longitudinal data creates clear, evidence-based narratives of student development over time, directly supporting program review and accreditation requirements.
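The curricular-alignment check referenced in item 2 can be approximated as follows: for each student and PLO, compute the change in mean normalized rating relative to the previous course in that student's sequence, then average those changes per course and flag course/PLO pairs below a threshold. The column names, term ordering, and the 0.05 threshold are illustrative assumptions, not the values used in our deployment.

```python
# Minimal sketch of the curricular-gap analysis over the merged long-format table.
import pandas as pd

def flag_low_growth_courses(table: pd.DataFrame, threshold: float = 0.05) -> pd.DataFrame:
    """Flag course/PLO pairs where average student growth falls below a threshold."""
    # Mean normalized rating per student, per course, per PLO.
    means = (
        table.groupby(["student_id", "term", "course", "plo"], as_index=False)["rating_norm"]
        .mean()
        .sort_values(["student_id", "plo", "term"])
    )
    # Growth = difference from the same student's previous course on that PLO.
    means["growth"] = means.groupby(["student_id", "plo"])["rating_norm"].diff()
    course_growth = (
        means.dropna(subset=["growth"])
        .groupby(["course", "plo"], as_index=False)["growth"]
        .mean()
    )
    return course_growth[course_growth["growth"] < threshold]

if __name__ == "__main__":
    table = pd.read_csv("plo_longitudinal.csv", dtype={"student_id": str})
    print(flag_low_growth_courses(table))
```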

Beyond these analytical benefits, the project demonstrates a methodological shift. Rather than positioning instructional designers and faculty as passive consumers of vendor-provided functionality, the learning engineering approach, augmented by accessible AI tools, enables them to become active technology augmenters. This represents what Thai et al. (2022) describe as applying learning sciences using "human-centered engineering design methodologies and data-informed decision-making."

Implications for Learning Engineering

This work contributes to learning engineering practice in three ways:

Practical Framework: We developed a lightweight decision framework for identifying when AI can overcome technology stack limitations: (1) Define the friction point in existing tools, (2) Research feasibility through process decomposition, (3) Map the exact procedural steps required, and (4) Use AI to automate the mapped process. This framework helps teams distinguish between solvable gaps and those requiring vendor-level solutions.

Sociotechnical Design: Our approach exemplifies how learning engineering solutions must consider the interplay between technology capabilities and human workflows. By engaging stakeholders through interviews and needs analysis, we ensured the tool fit naturally into existing practices rather than imposing new procedural burdens.

Scalable Local Innovation: By demonstrating how institutions can layer compliant, locally-developed intelligence onto legacy systems, we offer a model for closing capability gaps without waiting for enterprise vendors. This approach is particularly valuable in higher education, where diverse institutional contexts require customization that enterprise solutions cannot provide.

Next Steps: We are instrumenting the tool to collect usage data and plan to investigate its impact on faculty decision-making and program improvement cycles. We seek collaborations with other institutions facing similar technology gaps to test the framework's generalizability.

References

Goodell, J., Kessler, A., & Schatz, S. (2023). Learning engineering at a glance. Army University Press. https://www.armyupress.army.mil/Journals/Journal-of-Military-Learning/Journal-of-Military-Learning-Archives/Conference-Edition-2023-Journal-of-Military-Learning/Engineering-at-a-Glance/

Schatz, S., Thai, K.-P., Craig, S. D., Schoenherr, J. R., Lis, J., & Kolodner, J. (2022). Human-centered design tools. In Learning Engineering Toolkit (pp. 279–301). https://doi.org/10.4324/9781003276579-17

Thai, K.-P., Craig, S. D., Goodell, J., Lis, J., Schoenherr, J. R., & Kolodner, J. (2022). Learning engineering is human-centered. In Learning Engineering Toolkit (pp. 83–123). https://doi.org/10.4324/9781003276579-7