Generative AI in Higher Education: A Systemic Analysis of Pedagogical Adaptation, Ethical Integrity, and Institutional Governance
Keywords:
Generative AI, Higher Education, Academic Integrity, Large Language Models, Algorithmic Bias, Assessment Security.
Abstract
The rapid spread of Generative AI (GenAI) across universities is reshaping how knowledge is produced and evaluated. Large Language Models (LLMs) now participate in tasks once understood as exclusively human—analyzing information, generating ideas, and constructing written work. This development unsettles long-standing assumptions about how students learn and how institutions assess that learning. Drawing from sociotechnical and dataset-focused perspectives, this paper examines the multiple ways AI integrates into educational environments and proposes a structured framework for categorizing student–AI interactions. We also outline a comprehensive institutional protocol for evaluating the impact of GenAI on academic integrity and assessment design. Ultimately, we argue that the challenges posed by GenAI stem not from student misconduct alone but from deeper curricular dependencies on product-based assessment.
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.