

AI Confrontation Pedagogy: Teaching African Students to Think Against AI

Aliou DRAMÉ

Team name
MILIA Senegal
Team members (First name, LAST NAME, University)
Aliou DRAMÉ, Mamadou Yassarou DIALLO, Ndongo MBATH, Nafissatou NIANG, Mouhamadou LEYE (École Polytechnique de Thiès)
What area does your use case primarily fall under?
Training / education / pedagogy
The AI use case you are working on
At Université Iba Der Thiam de Thiès (Senegal), students regularly use ChatGPT and Gemini to complete assignments. These tools, trained on overwhelmingly Western data, produce answers that systematically erase African legal traditions, historical frameworks, and local epistemologies, without students or lecturers noticing. A law student asking about Senegalese customary land law receives an answer built entirely on French civil law concepts, with no reference to local jurisprudence or the customary tenure systems governing over 60% of rural Senegal. No Senegalese university currently has a policy to address this.
Why this use case matters
This situation concentrates three urgent tensions in one classroom reality. First, academic integrity: AI makes it practically impossible to distinguish student-produced work from generated content using traditional assessment methods. Detection tools fail and bans are unenforceable. Second, long-term cognitive development: students who outsource their reasoning to AI progressively lose the capacity for critical argumentation, the core skill higher education exists to build. Third, and specific to our African context: only 0.2% of AI training data originates from Africa. Every time a Senegalese student uses AI uncritically, they risk replacing local knowledge (legal traditions, historical memory, community epistemologies) with a Western-centric worldview presented as neutral fact. This is not a technical bug. It is a structural form of epistemological displacement that no existing AI policy framework in francophone Africa currently addresses. These three tensions converge in every AI-assisted assignment, making this an urgent, underrepresented, and policy-relevant case for international AI governance.
Your team's motivation and learning objectives
We are students living this contradiction every day. We use AI tools that were not designed with us in mind, inside a university system that offers no guidance on how to use them responsibly or critically. We want to turn this disadvantage into a research contribution. As students from the Global South navigating AI tools built elsewhere, we are in a unique position to document what AI cultural bias actually looks like from the inside: not as an abstraction, but as a daily academic experience. Through this Challenge, our team aims to:
- Produce the first field-based documentation of AI epistemological bias as experienced by Senegalese university students.
- Design a concrete, zero-budget assessment protocol that any African francophone university can adopt.
- Formulate a policy recommendation for the Senegalese Ministry of Higher Education (MESRI).
We want to demonstrate that students from Senegal are not passive consumers of global AI governance debates. We have something original to contribute, and this Challenge is the right space to do it.
Your initial contribution
THE AI CONFRONTATION PEDAGOGY
A three-step assessment protocol for African francophone universities

THE PROBLEM WE IDENTIFIED
Senegalese university students use AI tools daily. These tools were trained on less than 0.2% African data. The result: AI answers are systematically Western-centric, and neither students nor lecturers have been trained to detect this. The existing responses, AI bans and detection tools, both fail in practice. We identified a double failure: academic integrity is undermined AND African epistemologies are silently displaced.

OUR APPROACH: FIELDWORK + PROTOCOL DESIGN

Step 1: Field investigation
We surveyed 70 students at UGB (Saint-Louis) and conducted a structured experiment: we submitted 5 culturally specific questions (Senegalese customary land law, Wolof pedagogical proverbs, Sahelian Islamic jurisprudence, local public health policy, national history) to ChatGPT, Gemini, and one African-built alternative. Two subject-matter experts independently evaluated the answers against a cultural accuracy rubric. Result: AI answers were not wrong; they were systematically incomplete in predictable, learnable ways.

Step 2: Protocol design
We designed the AI Confrontation Pedagogy, a three-step assessment protocol:
1. GENERATE: The student uses an AI tool to produce a first answer to the assignment question.
2. CONFRONT: The student writes a structured critique of the AI's answer, identifying factual errors, missing African references, cultural bias, and epistemological assumptions embedded in the response.
3. DEFEND: The student presents their critique and proposes an improved, culturally grounded answer.
The key shift: the question is no longer "did you use AI?" but "can you show you understand what the AI got wrong?"

Step 3: Pilot test
We ran a 2-hour workshop with 15 volunteer students at UGB using this protocol. Pre/post data: 89% had never received any training on AI limitations before the session. After the session, 76% reported feeling confident challenging AI-generated content, and 3 students independently identified the Western legal bias in a ChatGPT answer about Senegalese land law without being prompted.

OUR PROPOSAL: THREE POLICY ELEMENTS

1. A national AI assessment standard for Senegal
We propose that MESRI (Ministry of Higher Education) adopt a national guideline requiring every university to integrate a critical AI competency into its assessment framework by 2026, using our protocol as the reference model.

2. A teacher training module
Module 1: How generative AI works (non-technical overview)
Module 2: Identifying AI-generated content and its cultural limitations
Module 3: Designing and evaluating AI confrontation assignments
Module 4: AI cultural bias in African educational contexts
Format: PDF + audio, Creative Commons license, zero internet required.

3. An AI Literacy Passport for students
A 90-minute self-paced module completed before any confrontation assessment. Topics: how AI works, how to identify bias, how to document AI use transparently. Validated by a short quiz.

TENSIONS WE EXPLICITLY ADDRESS
- Short-term performance vs. long-term cognition: the protocol forces genuine cognitive engagement, because the critique cannot itself be generated by AI.
- Inclusion vs. digital divide: all materials are offline-compatible, and the protocol works with any AI tool, including locally hosted lightweight models (Mistral, Phi-3), making it viable across low-bandwidth campuses.
- Cultural sovereignty vs. global AI access: the confrontation task makes AI bias visible and teachable, turning AI's structural limitation into a learning object rather than an invisible harm.

WHY THIS IS ACTIONABLE
Zero additional budget required for universities. No new technology needed: it works with the free AI tools students already use. Exportable to all francophone African university systems without adaptation. Measurable outcomes: pre/post critical AI literacy scores and educator adoption rate.
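To make the Step 1 expert evaluation reproducible across campuses, the rubric scoring could be automated once the experts' ratings are collected. The sketch below is only an illustration of that idea: the rubric dimensions, the 0-4 scale, and all numbers are hypothetical placeholders, not our actual field data or rubric.

```python
# Hypothetical sketch of rubric-score aggregation for one AI answer.
# Dimensions, scale (0-4), and values are illustrative, not field data.
from statistics import mean

RUBRIC = ["factual_accuracy", "african_references", "cultural_framing"]

def aggregate(scores_by_expert):
    """Average each rubric dimension across the independent expert raters."""
    return {dim: mean(expert[dim] for expert in scores_by_expert)
            for dim in RUBRIC}

# Two experts rate a chatbot answer on Senegalese customary land law (made-up values).
expert_a = {"factual_accuracy": 3, "african_references": 1, "cultural_framing": 1}
expert_b = {"factual_accuracy": 3, "african_references": 0, "cultural_framing": 1}

avg = aggregate([expert_a, expert_b])
# Dimensions scoring below the scale midpoint reveal the "systematically
# incomplete" pattern: factually fine, culturally thin.
gaps = [dim for dim, score in avg.items() if score < 2]
print(avg)   # average score per rubric dimension
print(gaps)  # ['african_references', 'cultural_framing']
```

A spreadsheet would do the same job; the point is that "systematically incomplete" becomes a measurable profile (high accuracy, low cultural grounding) rather than an impression.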
FROM SENEGALESE STUDENTS TO INTERNATIONAL GOVERNANCE
AI was not built with Africa in mind. But African students can decide how to teach it. This contribution is grounded in real fieldwork, tested with real students, and directed at a real ministry. We are not just describing a problem; we are delivering a solution.