Artificial intelligence is being integrated into higher education at great speed, but its adoption cannot rest on technological enthusiasm alone. The article by González-Fernández, Romero-López, Sgreccia, and Latorre Medina (RIED, 2025) frames the debate around the need to build regulatory and ethical frameworks that make it possible to harness AI without compromising rights, educational quality, or institutional trust.
Through a systematic review (PRISMA) supported by bibliometric analysis (VOSviewer), the authors map the state of the art and reveal a field that is still in the process of consolidation, shaped by tensions between innovation, academic integrity, inclusion, privacy, and sustainability.
The value of the study lies in organizing the available evidence into four major categories: ethical challenges and risks, regulatory frameworks, ethical training, and didactic models. In the first category, issues already present on the university agenda emerge, such as plagiarism and academic dishonesty, algorithmic bias, information overload, technological dependence, anxiety, and uncertainties surrounding authorship and assessment.
In the second category, the urgency of specific policies is emphasized: codes of ethics, clear guidelines on permitted and prohibited uses, data protection and transparency in automated systems, ethics committees, and impact assessments for AI-related projects. At the same time, the article stresses that regulation alone is insufficient: without digital and ethical literacy for faculty, students, and leadership teams, rules risk becoming empty formalities or fostering a punitive climate that drives AI use underground.
The practical conclusion is that “trustworthy” AI in the university requires an ecosystem that combines governance, training, and pedagogy. This entails defining principles (equity, accountability, explainability, human oversight, privacy), translating them into procedures (guidelines for citation and attribution of AI use, assessment criteria, audits, and periodic policy reviews), and designing learning experiences that foster critical thinking rather than the mere substitution of tasks.
The debate remains open, as the article itself acknowledges, because technologies evolve faster than institutions. Yet its contribution is clear: if higher education seeks to integrate AI without losing legitimacy, it must do so through explicit rules, shared ethical competencies, and didactic models that keep human beings (and their rights) at the center.
---
How to Cite: González Fernández, M. O., Romero-López, M. A., Sgreccia, N. F., & Latorre Medina, M. J. (2025). Normative framework for ethical and trustworthy AI in higher education: state of the art. RIED-Revista Iberoamericana de Educación a Distancia, 28(2), 181–208. https://doi.org/10.5944/ried.28.2.43511
