With the introduction of Generative AI (GenAI) to the public in Fall 2022, higher education stakeholders shared their perceptions of the impact it could have on society at large. In response, some higher-education institutions initially presented actionable policies and guidelines to address the challenges that GenAI may pose, with a particular focus on how the core functions of developing and sharing knowledge through academic writing could be affected. These concerns around academic writing relate to academic integrity (Eke, 2023) and the learning loss (McDonald, 2024; Smolansky et al., 2023) that may result from offloading writing-related cognitive tasks to GenAI tools. This study examines the implications of GenAI in higher education from a different perspective: to understand how different stakeholders in the system are affected and how higher education as a system should work together to respond to the new demands introduced by GenAI. More pervasive cognitive partnerships with GenAI tools are changing the responsibilities of all members of the higher education community, and the study examines how faculty, staff, and students perceive and respond to these changes. According to the Technology Acceptance Model (TAM; Venkatesh et al., 2003), individuals’ reactions to a technology are important considerations that shape both the intention to use it and its actual use. Most importantly, the study asks, “Who is responsible, and in what ways, for driving change forward with GenAI in the context of higher education?”
The Technology Acceptance Model (TAM2; Venkatesh et al., 2003) served as the theoretical framework guiding our understanding of how different stakeholders in higher education responded to the opportunities and challenges of GenAI in their daily practices, identifying the considerations that contribute to their intention to use the technology and their actual use of it. In addition to TAM2, we propose that framings related to “core competencies” should be considered in order to fully understand how higher-education community members are determining how best to work with and learn from GenAI.
The term “competence” has a multifaceted meaning within the context of higher education. While distinctions are made between (1) subject-specific competencies, (2) trans-disciplinary (generic) cognitive competencies, and (3) action-based competencies (Chur, 2011), how these are defined and taught may vary across disciplines and institutions (Chan et al., 2017; Holmes et al., 2021). Such complexity makes it difficult to address competencies as one specific set of skills within universities’ policy statements. Competency-based learning has been adopted by many universities, but with the introduction of GenAI tools and changing labor markets, there has been discussion about the need to update university curricula to meet the needs of both graduates and employers. There appears to be a disconnect between what employers expect from graduates, what higher education institutions equip students with, and students’ own expectations and understanding of generic competencies with regard to AI (Chan et al., 2017). This raises the question of whether it is the responsibility of higher education’s subject-matter experts to explain the functionalities and potential of AI tools in order to enhance the development and application of related core competencies in students.
Our stance regarding GenAI tools is that they should be considered and used as cognitive tools (Jonassen, 1992) that support the development and application of knowledge and creativity. As cognitive tools, GenAI can support cognitive partnerships by allowing human partners to offload certain cognitive tasks to the tools so that they can engage in deeper and more meaningful cognitive activity. In this partnership, human partners should understand when and how to use the tools and thus retain control over the cognitive partnership.
In the Fall 2023 semester, the researchers hosted two rounds of focus groups (Vaughn et al., 1996) with nineteen participants (n = 19; 3 instructors, 5 staff members, 11 students) from a graduate school. Each focus group brought together stakeholders in different roles to share their perspectives and experiences. Focus group meetings were video-recorded, and all participants were asked to keep weekly reflection journals, prompted by the researchers, using multimodal composition (Jewitt, 2008).
Focus group recordings and reflection journals were independently analyzed by three researchers through thematic analysis (Braun & Clarke, 2012) to identify emerging themes. Staff, student, and faculty participants alike shared their “mixed emotions” about working with GenAI tools. These mixed emotions, however, reflected more fundamental needs to be addressed in higher education beyond controversies over GenAI tool use. Specifically, faculty, students, upper administration, and staff need:
to have a shared understanding of changing teaching, learning, and work-production demands, especially in relation to core competencies. While the core competencies themselves may not change, how they are fostered and practiced may need to change.
to reimagine everyone’s roles in relation to the core competencies and how they are developed and practiced with GenAI tool use.
to recognize the perceived usefulness and ease of use of the existing academic and professional resources and support provided in higher education.
Our research confirmed extensive and varied GenAI use by higher-education community members, much of which had previously been undocumented. Having identified these potential points of conflict and growth, we ask the following questions and present general ways forward:
Where do we go from here? Increased community conversations and transparent, published guidelines must be prioritized, with the recognition that topics, issues, and policies may continue to evolve as new GenAI tools are introduced. These efforts should include partnerships and conversations across stakeholder groups to identify and communicate evidence-based guidelines that emphasize data privacy and bias mitigation, particularly in their support for core competencies.
Who is responsible, and how? While individuals should have the freedom to use or not use GenAI tools in their teaching, learning, and work, everyone is responsible for the appropriate use of GenAI in relation to the core competencies.
How do we respond to this call on an institutional level? As knowledge builders, higher education institutions have the responsibility to model and lead discussions on the changing practices in response to GenAI.