The text below is a shortened, hashed representation of this content. It can be used to verify that the content has not been tampered with, since even a single modification would produce a completely different value.
Value:
3be78be18914e4658b96b7d43b28cd897e1bc67885f74fea30bb4208aa37cc8b
Source:
{"body":{"en":"<xml><dl class=\"decidim_awesome-custom_fields\" data-generator=\"decidim_awesome\" data-version=\"0.12.6\">\n<dt name=\"textarea-1772188078816-0\">Team name</dt>\n<dd id=\"textarea-1772188078816-0\" name=\"textarea\"><div>Charbon Kley</div></dd>\n<dt name=\"textarea-1772188112772-0\">Team members (First name, LAST NAME, University)</dt>\n<dd id=\"textarea-1772188112772-0\" name=\"textarea\"><div>Martin BONAN, NEOMA Business School\nAlbert Vallée Duval, NEOMA Business School\nMartin Louvel, NEOMA Business School</div></dd>\n<dt name=\"radio-group-1772188319073-0\">What area does your use case primarily fall under?</dt>\n<dd id=\"radio-group-1772188319073-0\" name=\"radio-group\"><div alt=\"training\">Training / education / pedagogy</div></dd>\n<dt name=\"textarea-1772792126695-0\">The AI use case you are working on</dt>\n<dd id=\"textarea-1772792126695-0\" name=\"textarea\"><div>Across our campus, every student uses AI daily for coursework — ChatGPT, Claude, Perplexity, Cursor, NotebookLM. Yet our institutions treat this reality with a mix of denial, blanket bans, and blanket permissions. The gap is not technical. It is epistemic: universities do not actually know how students use AI in their learning, what works, what doesn't, and what new skills are emerging.\nThe use case we focus on is the institutional blindness to real student AI practice — and the absence of any infrastructure for universities to systematically capture, analyze, and learn from how students actually integrate AI into their studies.\nToday, when a faculty wants to understand AI usage, they have three options: (1) anonymous surveys with multiple-choice questions that flatten reality into percentages, (2) one-off focus groups that don't scale, (3) academic integrity reports that only catch the failures. 
None of these capture the qualitative texture of how a student actually thinks with an AI, where it helps them learn, where it short-circuits their thinking, and where new hybrid practices emerge.\nThis blindness has concrete consequences: pedagogical decisions made in the dark, anti-AI policies that don't match reality, missed opportunities to teach the AI-native skills students will need, and a growing disconnect between what's officially taught and what's actually practiced.</div></dd>\n<dt name=\"textarea-1772792488518-0\">Why this use case matters</dt>\n<dd id=\"textarea-1772792488518-0\" name=\"textarea\"><div>Three reasons, in order of importance.\n1. Decisions made in the dark are bad decisions. Universities are right now writing AI policies, redesigning assessments, rewriting curricula. They are doing this with almost zero qualitative data on what students actually do. The result is policies that either over-restrict (banning tools students will use anyway in their careers) or under-prepare (letting students develop bad habits that will hurt them professionally). Better data → better decisions.\n2. The skills gap is widening, silently. Some students are developing extraordinary AI-native workflows — using Claude as a thinking partner, building custom prompts, chaining tools. Others are using AI as a copy-paste shortcut that erodes their reasoning. The gap between these two populations is now bigger than the gap between top and bottom students on traditional metrics. Universities don't see this gap because they don't measure it. By the time it shows up in the job market, it's too late.\n3. Students are the experts on student AI practice — but no one asks them properly. A multiple-choice survey treats students as data points. A real qualitative inquiry treats them as informants. 
Capturing the lived experience of AI-using students is the only way to build pedagogy that meets them where they are.</div></dd>\n<dt name=\"textarea-1772792380575-0\">Your team's motivation and learning objectives</dt>\n<dd id=\"textarea-1772792380575-0\" name=\"textarea\"><div>Our team brings together a builder (Martin B), a creative (Albert), and a marketer (Martin L). We are all heavy AI users in our own studies and we have all noticed the same thing: the most important conversations about AI on our campus happen in private — between students, in DMs, after class — never in the formal channels the institution can hear.\nWe want to use this challenge to:\n\nInvestigate the gap between official institutional discourse on AI and actual student practice, using a mix of methods (interviews, observation, document analysis).\nPrototype an answer — a lightweight, ethical, opt-in infrastructure for universities to systematically gather qualitative student feedback on AI practice without it becoming surveillance.\nLearn how to do real qualitative research — not surveys, not focus groups, but the hard, structured work of coding and analyzing open-ended student voices at scale.\nTest our thinking against critique — from peers, mentors, and the public — and update our proposal accordingly.\n\nWhat we want to leave with: a contribution that an actual university could pilot in September 2026.</div></dd>\n<dt name=\"textarea-1772792857176-0\">Your initial contribution</dt>\n<dd id=\"textarea-1772792857176-0\" name=\"textarea\"><div>The problem, sharpened. This is not an AI problem. It is a feedback infrastructure problem, made urgent by AI. Institutions can't see real practice → policies disconnect from reality → students ignore official channels → institutions stay blind.\nOur proposal: an opt-in qualitative feedback infrastructure for universities, built on three principles:\n\nConversation, not questionnaire. 
Students are invited into structured open-ended exchanges about specific moments of AI use. LLMs code the resulting text at scale, making cohort-level qualitative research possible for the first time.\nStudent-owned, institution-readable. Students control identification level (anonymous, pseudonymous, identified). Institutions receive aggregated coded insight, never raw individual data.\nLiving, not snapshot. A continuous lightweight stream, not a once-a-year mega-survey. AI practice changes monthly — the instrument must match.\n\nConcrete pilot. One semester, one program (e.g. one NEOMA cohort), 30-50 opt-in students. Monthly 15-min structured conversation via web interface, plus open contribution anytime. LLM-assisted thematic coding with human validation. Monthly synthesis shared both to the institution and back to students.\nImplementation conditions. Four non-negotiables:\n\nA faculty champion who commits to acting on findings.\nStudent-led governance over what gets asked and who sees what.\nA binding non-use clause — data excluded from any individual evaluation, integrity proceeding, or admissions decision.\nPublic synthesis — institution commits to publishing aggregated findings.\n\nWhat we want feedback on: trust and consent design, institutional integration (how to ensure data drives change), student incentives, cross-institutional learning without compromising confidentiality.</div></dd>\n</dl></xml>"},"title":{"en":"The Silent Curriculum: Why Universities Don't Actually Know How Students Use AI, And What to Do About It"}}
This fingerprint is calculated using the SHA-256 hashing algorithm. To replicate it yourself, you can use an online SHA-256 calculator and paste in the source data exactly as shown.
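As a minimal sketch of how such a fingerprint is produced, the snippet below hashes a string with Python's standard-library `hashlib`. Note that the published digest will only match if the source is serialized byte-for-byte exactly as the platform produced it; the short strings here are illustrative placeholders, not the actual source data.

```python
import hashlib

def fingerprint(source: str) -> str:
    """Return the SHA-256 hex digest of the given text (UTF-8 encoded)."""
    return hashlib.sha256(source.encode("utf-8")).hexdigest()

# A single-character change yields a completely different digest:
print(fingerprint("hello"))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
print(fingerprint("hello!"))
```

Because SHA-256 is an avalanche-style cryptographic hash, the two outputs above share no meaningful resemblance despite the inputs differing by one character, which is exactly the tamper-evidence property the fingerprint relies on.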