
How should higher education institutions structurally evolve their teaching methods and assessment practices in response to AI?

Team name
ARIA – AI Reinforcing Intellectual Abilities
Team members (First name, LAST NAME, University)
Tayeb Farouk BOUCHIKHI, Amadou GAYE, Nicolas LEPOTIER, Fadwa MERZAK, Jinane ZRHALLA (INSA Lyon)
What area does your use case primarily fall under?
Training / education / pedagogy
The AI use case you are working on
The current situation, in which students use AI to meet the expectations of institutions that have not adapted, is a symptom, not the problem. The real subject is structural: higher education must rethink what it teaches, how it teaches, and what it assesses. This is not primarily a question of regulating student behaviour; it is a question of institutional transformation. We examine how the arrival of capable AI systems creates a structural mismatch at the heart of higher education. Universities were designed around a set of assumptions that no longer hold: that knowledge is scarce and must be transmitted by experts; that individual cognitive effort is the primary production unit of learning; and that written, timed, individual examination reliably certifies competence. AI challenges all three simultaneously. Knowledge access is no longer scarce. Cognitive effort can be offloaded to a machine at any moment. And written output can be generated without the reasoning process that the output was meant to evidence. The student who uses AI to produce work that meets institutional expectations is not the anomaly; the institution that has not updated its model is.
Why this use case matters
If higher education does not evolve structurally, two failure modes become likely. The first is certification inflation: degrees increasingly certify the ability to navigate AI tools rather than the underlying competencies they are supposed to represent, eroding trust from employers and society. The second is pedagogical irrelevance: institutions double down on formats AI renders obsolete (standardised essays, recall-based exams), while the skills genuinely valued in professional and civic life go untrained. Beyond the immediate horizon, the structural question becomes even sharper. If an AI tutoring system can personalise learning, track cognitive progress, scaffold reasoning, and adapt in real time to each student, then the question is no longer how to integrate AI into existing pedagogy, but what the irreplaceable value of a human institution is in that context. This is not a hypothetical: it is a design challenge that must be addressed now, while the institutional forms are still malleable.
Your team's motivation and learning objectives
We aim to produce a diagnosis and a forward-looking proposal: not a list of recommendations for students, but a structural analysis of what higher education must become. Our reference frame is deliberately long-horizon: we ask what a well-functioning higher education system should look like in 2040, when AI tutoring may be highly capable and widely accessible, and we work backwards to identify the structural transformations that need to begin today.
Your initial contribution
1. What is the situation or context you are examining?

Higher education institutions are currently operating with a pedagogical model designed for the pre-digital era and only partially updated for the internet age. This model rests on three structural pillars: knowledge transmission (the lecturer as expert source), individual cognitive effort (the student as sole producer of academic work), and standardised certification (timed exams and written assignments as proxies for competence). AI disrupts all three simultaneously, and the disruption is not marginal; it is architectural. Students who integrate AI deeply into their workflow are not misusing the system; they are revealing that the system's assumptions were never as robust as institutions believed. The pressure point is not the student's behaviour; it is the institution's unexamined model.

This structural lag has concrete consequences that are already visible: a growing gap between what institutions certify and what employers actually observe; increasing incoherence in assessment policies across departments and institutions; and a generation of students developing sophisticated AI-augmented workflows in their personal and professional lives while being evaluated in institutional contexts that ignore or prohibit these capabilities.

2. What is your critical analysis of this situation?

The dominant institutional response has been regulatory: define AI use policies, add plagiarism detection tools, reinforce supervised examination. This response treats AI as an external threat to an otherwise sound system. We argue the diagnosis is wrong and therefore the response is misdirected. The deeper issue is that higher education has never fully clarified what it is actually trying to produce. Is it transmitting a body of knowledge? Developing a capacity to reason? Certifying a level of competence for a labour market? Forming autonomous, critical citizens?
These are different objectives that require different pedagogical structures. AI makes this ambiguity impossible to ignore because it exposes which objectives can be met by a machine and which genuinely require human development processes.

A structural analogy: the calculator forced mathematics education to distinguish between computational skill (now delegable) and mathematical reasoning (not delegable). This did not happen quickly or painlessly; it required curriculum reform, teacher retraining, and a rethinking of what maths education was for. AI requires the same process, for almost every discipline, at the same time. Additionally, the structural transformation cannot be uniform. Different disciplines have different relationships to AI capability: a law school faces different challenges than an engineering school or a humanities department. Any useful proposal must account for this heterogeneity rather than imposing a single institutional response.

3. What perspectives were debated within your team?

Our internal discussion identified three structurally distinct positions.

Position A (Adaptive conservatism): the core purpose and structure of higher education is sound; what is needed is targeted adaptation of assessment formats (e.g. moving to oral defences, project-based evaluation) while preserving the fundamental model of individual knowledge acquisition and certification. AI is a new constraint, not a reason to redesign the system from scratch.

Position B (Structural redesign): the current model must be fundamentally reconceived. Teaching should shift from knowledge transmission to the development of metacognitive skills: the ability to formulate problems, critically evaluate AI outputs, construct arguments, and navigate uncertainty. Assessment should shift from individual performance snapshots to longitudinal, process-based evaluation. The institution's role changes from knowledge gatekeeper to cognitive development environment.
Position C (Long-horizon design): both previous positions remain reactive. If we project to 2040, when a well-designed AI system might track an individual student's reasoning development over years, identify gaps, scaffold challenges, and provide richer feedback than any human teacher could at scale, what does a human institution add that the AI cannot? The answer likely involves: structured exposure to human disagreement and ethical complexity; socialisation into professional and civic communities of practice; and accountability structures that AI systems cannot provide. Building the institution of 2040 means starting from this residual value, not from today's constraints.

Our arbitration: Position C provides the correct frame, but Positions A and B describe necessary intermediate steps. We do not advocate waiting for 2040 to act. We propose a direction of travel anchored in the long-horizon question, with concrete structural moves that begin now and remain coherent with where the system needs to go.

4. What structural contribution do you propose?

We propose a framework for structural reorientation of higher education around three axes, designed to be coherent across the short, medium, and long term.

Axis 1 — Redefine the object of teaching. Shift the primary objective from knowledge transmission to the development of competencies that AI cannot substitute: ill-defined problem formulation, critical evaluation of AI-generated content, ethical reasoning under uncertainty, and the capacity to produce original synthesis from heterogeneous sources. This requires curriculum redesign at discipline level: not a uniform policy, but a structured process by which each field identifies what it uniquely develops in students that a capable AI would not.

Axis 2 — Redesign assessment as a developmental process, not a performance snapshot.
Move away from point-in-time, output-based evaluation toward longitudinal, process-based assessment: portfolios of reasoning traces, oral defence of work produced with AI, and collaborative problem-solving under observation. The goal is to make the cognitive process visible, not just its product, because the product is increasingly indistinguishable from AI output regardless of its origin.

Axis 3 — Build an adaptive governance architecture. Rather than a static framework for public policy, which would be obsolete within years, we propose a living governance mechanism: interdisciplinary bodies at institutional level that continuously audit the alignment between pedagogical objectives, assessment practices, and AI capability developments. The framework is not a set of rules but a process of structured, recurring institutional self-questioning. Critically, this mechanism should include students, educators, employers, and AI researchers, and should be empowered to make binding curriculum and assessment recommendations, not just advisory ones.

Implementation conditions: this transformation requires institutions to accept that the problem is structural, not behavioural. It cannot be addressed by AI use policies alone. It requires investment in educator development, curriculum redesign capacity, and the political will to modify certification frameworks that are currently embedded in national accreditation systems. The speed of AI development means that waiting for consensus before acting is itself a choice, one that cedes the initiative to the tools rather than retaining it for the institution.