The value below is a shortened, hashed representation of this content. It can be used to verify that the content has not been tampered with, since even a single modification to the source would produce a completely different value.
Value:
9ad775fbaec802d228502fdaa5d07904b8d8ded52f857981b5dac546b6eb63bc
Source:
{"body":{"fr":"<xml><dl class=\"decidim_awesome-custom_fields\" data-generator=\"decidim_awesome\" data-version=\"0.12.6\">\n<dt name=\"textarea-1772188078816-0\">Team name</dt>\n<dd id=\"textarea-1772188078816-0\" name=\"textarea\"><div>LinguaGap</div></dd>\n<dt name=\"textarea-1772188112772-0\">Team members (First name, LAST NAME, University)</dt>\n<dd id=\"textarea-1772188112772-0\" name=\"textarea\"><div>Roberta Cuomo, Zuka Sakhiashvili, Kiyas Mahmud, Yesmine Ben Alaya</div></dd>\n<dt name=\"radio-group-1772188319073-0\">What area does your use case primarily fall under?</dt>\n<dd id=\"radio-group-1772188319073-0\" name=\"radio-group\"><div alt=\"training\">Training / education / pedagogy</div></dd>\n<dt name=\"textarea-1772792126695-0\">The AI use case you are working on</dt>\n<dd id=\"textarea-1772792126695-0\" name=\"textarea\"><div>International students in higher education increasingly rely on AI tools (ChatGPT, DeepL, Gemini) for academic writing, comprehension, and communication tasks. However, these tools perform significantly better in English than in other languages. We want to examine how non-Anglophone students use these tools, and whether this gap creates a hidden academic disadvantage.</div></dd>\n<dt name=\"textarea-1772792488518-0\">Why this use case matters</dt>\n<dd id=\"textarea-1772792488518-0\" name=\"textarea\"><div>AI writing and translation tools are trained predominantly on English-language data, making them structurally less reliable for students working in Arabic, French, Portuguese, or other languages. This creates an invisible inequity: students who are already navigating a foreign academic system receive lower-quality AI assistance than their Anglophone peers. The consequences ripple across learning outcomes, cognitive development, and academic confidence. Beyond individual impact, this gap risks deepening existing inequalities in international higher education at a moment when AI adoption is accelerating. 
If policies treat AI tools as neutral and universally accessible, they will inadvertently entrench linguistic hierarchies. This use case raises critical questions about inclusion, epistemic fairness, and what it means to design AI \"for everyone.\"</div></dd>\n<dt name=\"textarea-1772792380575-0\">Your team's motivation and learning objectives</dt>\n<dd id=\"textarea-1772792380575-0\" name=\"textarea\"><div>Our team wants to move beyond anecdotal observations about AI and actually measure and document this linguistic gap in a rigorous, field-grounded way. We hope to better understand how intercultural and linguistic diversity is (or isn't) accounted for in AI design and policy. We also want to challenge ourselves to produce work that has real policy relevance, not just academic value. Each of us brings a different lens: linguistic and cultural analysis, technical auditing, economic modelling of access inequalities, and fieldwork design. Participating in this Challenge is an opportunity to combine these perspectives into something greater than any of us could produce alone.</div></dd>\n<dt name=\"textarea-1772792857176-0\">Your initial contribution</dt>\n<dd id=\"textarea-1772792857176-0\" name=\"textarea\"><div>LinguaGap will develop a multilingual, student‑centered audit of AI writing and translation tools to uncover how English‑centric AI systems create hidden disadvantages for international students in higher education. Our solution combines technical benchmarking, field research, and policy analysis to measure and address linguistic inequities in AI‑supported learning.\n\nWe will build a multilingual performance benchmark evaluating major AI tools (ChatGPT, DeepL, Gemini, etc.) across academic tasks—writing, translation, comprehension, and communication—in languages such as Arabic, French, Portuguese, Georgian, and Italian. 
Using evaluation criteria from translation studies and computational linguistics, we will quantify performance gaps between English and other languages.\n\nIn parallel, we will conduct surveys, interviews, and think‑aloud sessions with international students to understand how they use AI, how language affects tool reliability, and how these differences influence learning outcomes, confidence, and academic integration.\n\nBy combining these findings, we will create a Linguistic Equity Index that captures the severity and educational impact of AI language disparities. This index will support universities, policymakers, and AI developers in designing more inclusive AI guidelines and transparency standards.\n\nOur interdisciplinary team—bringing expertise in translation and intercultural communication, AI, economics, and computer science—aims to produce evidence that can directly inform equitable AI policy in international higher education.</div></dd>\n</dl></xml>"},"title":{"fr":"Measuring the AI Language Gap in International Higher Education"}}
This fingerprint is calculated using the SHA-256 hashing algorithm. To replicate it yourself, you can use a SHA-256 calculator online and copy-paste the source data exactly as shown, since any difference in the bytes (including whitespace) will change the result.
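The verification step above can also be sketched in a few lines of Python using the standard-library `hashlib` module. The `fingerprint` helper name is ours for illustration; feeding it the exact source JSON bytes shown above should reproduce the published hash.

```python
import hashlib

def fingerprint(source: str) -> str:
    """Return the SHA-256 hex digest of the source string (UTF-8 encoded)."""
    return hashlib.sha256(source.encode("utf-8")).hexdigest()

# To verify, pass the source data exactly as published (whitespace included)
# and compare the result against the displayed fingerprint value.
```

Note that the comparison only succeeds if the input bytes match the original exactly; copy-pasting through tools that normalize line endings or quotes will yield a different digest.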