The piece of text below is a shortened, hashed representation of this content. It is useful to ensure the content has not been tampered with, as a single modification would result in a totally different value.
Value:
a0a3961a269c5edce1606201de302fd6bc23cf1c1a6821210948b894f966c7f0
Source:
{"body":{"fr":"<xml><dl class=\"decidim_awesome-custom_fields\" data-generator=\"decidim_awesome\" data-version=\"0.12.6\">\n<dt name=\"textarea-1772188078816-0\">Team name</dt>\n<dd id=\"textarea-1772188078816-0\" name=\"textarea\"><div>T-TWICE team</div></dd>\n<dt name=\"textarea-1772188112772-0\">Team members (First name, LAST NAME, University)</dt>\n<dd id=\"textarea-1772188112772-0\" name=\"textarea\"><div>--Candis MEDJOM , Gustave Eiffel University\n--Johanna FOKUI , Aivancity School\n---William KANKEU , Oteria Cyber School</div></dd>\n<dt name=\"radio-group-1772188319073-0\">What area does your use case primarily fall under?</dt>\n<dd id=\"radio-group-1772188319073-0\" name=\"radio-group\"><div alt=\"training\">Training / education / pedagogy</div></dd>\n<dt name=\"textarea-1772792126695-0\">The AI use case you are working on</dt>\n<dd id=\"textarea-1772792126695-0\" name=\"textarea\"><div>Many university math students now use AI to get instant answers to proofs. The effort of working through the reasoning is fading. Our team explores an AI tutor designed to guide rather than answer. It reads studentsโ reasoning, detects where logic breaks, and responds with questions. At the same time, it reveals recurring reasoning errors across a class, helping professors uncover hidden learning.</div></dd>\n<dt name=\"textarea-1772792488518-0\">Why this use case matters</dt>\n<dd id=\"textarea-1772792488518-0\" name=\"textarea\"><div>AI tools are increasingly present in higher education and are changing how students approach learning. In mathematics, where understanding and reasoning are central, instant-answer systems can sometimes reduce opportunities for students to engage deeply with problems. At the same time, professors often face large classes and limited time, making it difficult to identify how students reason and where misunderstandings occur.\nThis use case explores whether AI could instead support learning by encouraging reasoning rather than simply providing solutions, while also helping teachers better understand common difficulties within their classes. However, it also raises important questions about the role of AI in education: how student reasoning data should be used, how to avoid excessive monitoring, and how to balance pedagogical benefits with broader concerns such as fairness, sustainability, and the evolving relationship between students, teachers, and technology.</div></dd>\n<dt name=\"textarea-1772792380575-0\">Your team's motivation and learning objectives</dt>\n<dd id=\"textarea-1772792380575-0\" name=\"textarea\"><div>We are the students this challenge is about. One of us studies AI and builds the\n systems. Another studies mathematics and struggles with the proofs. The third studies cybersecurity and data protection law, the one who asks whether we even should before we build. We have seen all three sides: students misusing ChatGPT for homework, professors overwhelmed by copies they can't analyze, and privacy questions that most ed-tech projects leave for later. We want to explore how AI could help students think mathematically without giving answers, and quickly hit questions we cannot answer alone, about dependency, the line between pedagogical analytics and surveillance, and environmental cost. This challenge is our chance to confront these tensions with experts, professors, and peers. 
We want to come out with better questions, not just better software.</div></dd>\n<dt name=\"textarea-1772792857176-0\">Your initial contribution</dt>\n<dd id=\"textarea-1772792857176-0\" name=\"textarea\"><div>The idea for this project came from a lived experience. Candis, during her early years studying mathematics at university, relied heavily on AI to understand exercises and courses. The answers seemed clear, and she felt she was progressing. But her exam results told a different story. She had been understanding solutions without building her own reasoning. The turning point came when she started working with a former mathematics student who refused to give answers, instead asking questions, pushing her to write down her thinking even when it felt wrong, and helping her see where her reasoning broke down. It was slow and sometimes frustrating, but it was the first time she truly progressed. Later, as a private tutor herself, she observed the same pattern in her students: they used AI to get solutions fast but struggled to reason on their own.\n\nThis experience is where T-Twice (Think Twice) was born. We are three students from different disciplines, Johanna (AI engineering), Candis (mathematics and actuarial science), and Andelson (data protection law), and we set out to explore whether AI could be redesigned to support mathematical reasoning without replacing the thinking process.\n\n
The situation we are examining\n\nUniversity mathematics students increasingly rely on generative AI to complete assignments. They get answers instantly and learn nothing. This creates a paradox: the tool designed to help students think can actually prevent them from thinking. Meanwhile, professors grading hundreds of papers can see individual mistakes but cannot detect that many students in their class make the same type of error again and again. These patterns remain invisible until final exams, too late to act.\n\nNo widely available tool addresses this gap. ChatGPT gives answers. Formal proof assistants are too complex for undergraduates. Learning analytics platforms track activity but rarely analyze the quality of reasoning. And most raise serious questions about data privacy and compliance with the EU AI Act that remain unanswered.\n\nOur critical analysis\n\nTo ground our analysis in reality, we built a functional prototype and tested it with real students. At every stage, the feedback we received changed our thinking.\n\nThe prototype works as follows: the student writes their reasoning, the AI matches the error against 13 types of reasoning errors identified by mathematics education researchers (Weber, 2001; Selden and Selden, 2003; Harel and Sowder, 1998), and responds with a guiding question to help the student find the answer themselves. It never gives the answer directly. It includes a system that identifies each student's pattern of mistakes, four levels of help from detailed guidance to full autonomy, GDPR-compliant data management, and carbon footprint tracking per session.\n\nBut the prototype is not our contribution. It is our method of investigation. By building and testing, we discovered things that reading research alone could not teach us.\n\nJohanna noticed that during testing, some students quickly started sending bare answers without showing their reasoning, waiting for the AI to do the thinking. This led us to redesign the system so that it refuses to validate any answer without explicit justification, even correct ones. It convinced us that any AI tutoring tool must be designed to become less helpful over time, not more.\n\nCandis tested the system on problems she knew well and found that while major mathematical errors were rare, the AI occasionally made smaller mistakes, such as misidentifying the precise type of reasoning error. This convinced us that confidence indicators and human oversight are not optional features but ethical requirements for any AI used in education.\n\nAndelson, reviewing the system from a legal perspective, raised concerns about how student data was exposed in our first version, which directly shaped the privacy architecture we describe below.\n\nBut the most important feedback came from the students themselves. Through early informal testing with a small group of students Candis tutors privately, we heard something we had not anticipated: they wanted the AI to match their professor's expectations. They were not just looking for correct guidance; they wanted guidance calibrated to what their specific professor considers important, uses as notation, and expects on an exam. A generic tutor, however accurate, was not enough.\n\nStudents also told us they sometimes doubted the AI's feedback and wished their professor could step in to confirm, correct, or nuance what the AI said. This was the moment we realized that building an AI tool is not enough. A professor will not trust a system they cannot oversee.
And nobody is more skeptical than a professor, rightly so.\n\nThe perspectives within our team\n\nThese findings shaped our most important team debate: how much should the professor see and control?\n\nInitially, Andelson had designed strict privacy protections: the professor could only access aggregated statistics, never individual conversations. His reasoning was sound under GDPR: a student who feels observed will self-censor, and self-censoring kills learning.\n\nBut Candis brought the perspective of a teacher. Students doing assigned exercises want to be followed. They want their professor to see their effort, correct the AI when it is wrong, and comment on their reasoning. An assigned exercise is handed-in work, not a private diary. Students themselves asked for this.\n\nWe resolved this by putting the choice in the student's hands. Two modes: private mode for free practice where the professor sees nothing, and shared mode for assigned exercises where the professor can follow the work and respond. The student always knows which mode they are in. The consent is free, informed, and specific to each exercise.\n\nOn the choice of AI model, we navigated the tension between performance and sovereignty. The most accurate model for mathematical reasoning is not European. Candis was clear: for a tool diagnosing reasoning errors, accuracy is an ethical obligation. We chose the best model available but designed the architecture so that migration to any alternative takes seconds, and all data processing stays within Europe.\n\nOn ethics, Andelson pushed us beyond discussion into implementation. Informed consent at signup. A page showing each student exactly what the system knows and what the professor can see. One-click data deletion and export. Cognitive profiles never used for automated decisions or grading, in line with the EU AI Act's requirements for high-risk AI systems in education. A risk assessment for data protection built into the app as a visible page, not a buried document.\n\nWhat we propose\n\nOur contribution is a set of evidence-based recommendations for introducing AI tutoring responsibly in higher education, grounded in what we learned by building and testing a prototype that Johanna developed from the ground up and that the team then tested with real students.\n\nOur core conviction: AI in education should be designed as a space where the student writes their reasoning freely, makes mistakes, and learns from errors. The AI analyzes the reasoning after the student produces it, identifies the type of error, and asks a question to guide the student toward finding the answer themselves. It never gives the answer directly. The professor calibrates the AI to their pedagogy, follows assigned work, and corrects the AI when it is wrong. The AI proposes. The professor decides. This is not a limitation. It is the design.\n\nBased on what we learned, we propose five recommendations for universities and policymakers:\n\nFirst, cognitive autonomy by design. Any AI tutoring system should reduce its own helpfulness over time. The goal is a student who reasons well without AI, not one who performs well with it.\n\nSecond, governance for cognitive data. This data should never be used for grading, selection, or institutional decisions, in compliance with the EU AI Act.
Students should see exactly what is collected, control their data, and be able to delete everything.\n\nThird, environmental accountability. Universities procuring AI tools should require transparency about energy sources, infrastructure efficiency, and carbon cost per interaction.\n\nFourth, mandatory teacher preparation. Professors must understand what AI tools show and what they do not show before any deployment. Without training, the best tool can be misused.\n\nFifth, evidence before scale. No educational AI tool should be deployed widely without controlled trials measuring actual learning outcomes.\n\nLooking ahead, Johanna is already working on the next evolution: full professor calibration, where the professor provides their course material and the AI uses their definitions, their notation, and their progression. The AI does not follow generic rules. It follows this professor's pedagogy, for this class, at this point in the course. This is the level of trust that would make even the most skeptical professor consider using the tool. We also plan to adapt T-Twice for students with learning disabilities such as dyslexia, dyscalculia, and dysorthographia, drawing on Candis's training in teaching these profiles. Mathematical reasoning is not less important for these students; it is harder to express, and the tool should help, not hinder.\n\nWe are clear-eyed about what remains to be solved. AI models can make occasional errors, but as one professor pointed out to us, even these errors can become learning moments: a student who catches the AI making a mistake is developing exactly the critical thinking we want to build. This is why human oversight must always be part of the design. Our error classification needs validation from mathematics education researchers to move from promising to proven. And no tool, however well-designed, can substitute for a teacher who inspires or address structural inequalities between institutions. These are not reasons to stop. They are reasons to test rigorously, iterate openly, and never deploy without human oversight.\n\nHow can AI help students reason without creating dependency, guide without homogenizing, and personalize without surveilling? We believe this question deserves collective deliberation, and we look forward to engaging with other perspectives throughout this challenge.</div></dd>\n</dl></xml>"},"title":{"fr":"T-TWICE : Mathematical Cognitive Reasoning Engine"}}
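The guiding-question workflow described in the source above (the student submits written reasoning, the system matches any error against a fixed taxonomy, and it replies with a question rather than a solution, refusing to validate bare answers) could be sketched roughly as follows. This is a hypothetical illustration only: the function names, the keyword-based detection, and the three-entry taxonomy are invented for the example, whereas the T-TWICE prototype reportedly uses an AI model and a thirteen-type taxonomy drawn from the mathematics education literature.

# Hypothetical sketch of a "guide, don't answer" review loop.
# Everything here (names, taxonomy, keyword rules) is illustrative only;
# it is not the T-TWICE implementation.
from dataclasses import dataclass
from typing import Optional

GUIDING_QUESTIONS = {
    "unjustified_step": "Which earlier result allows this step? Can you cite it explicitly?",
    "circular_reasoning": "Does your argument assume the statement you are trying to prove?",
    "missing_case": "Have you treated every case? What happens in the ones you skipped?",
}

@dataclass
class Feedback:
    error_type: Optional[str]  # a key of GUIDING_QUESTIONS, or None
    question: str              # always a question, never the solution

def review_reasoning(reasoning: str) -> Feedback:
    """Return a guiding question for a student's written reasoning."""
    # Refuse to validate a bare answer submitted without visible justification.
    if len(reasoning.split()) < 15:
        return Feedback(None, "Can you write out the steps that led you to this answer?")
    # Placeholder detection: a real system would classify against a researched
    # taxonomy of reasoning errors instead of keyword heuristics.
    if "obviously" in reasoning.lower() or "clearly" in reasoning.lower():
        return Feedback("unjustified_step", GUIDING_QUESTIONS["unjustified_step"])
    return Feedback(None, "Your reasoning holds so far. What would you check to be certain?")

if __name__ == "__main__":
    print(review_reasoning("x = 2").question)  # bare answer -> asked for justification

The design point the sketch tries to capture is the one stated in the proposal: the system only ever returns questions, and a submission with no visible justification is sent back even if the final answer happens to be correct.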
The fingerprint value above is calculated by applying the SHA-256 hashing algorithm to the source data. To replicate it yourself, you can use a SHA-256 calculator online and copy-paste the source data.
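As a minimal sketch of that check, assuming the published value is the SHA-256 digest of the exact UTF-8 bytes of the Source text shown above (any change in whitespace, key order, or encoding yields a different digest), the verification could look like this in Python; the file name source.json is an assumption for the example:

# Recompute the fingerprint of the Source block shown above.
# Assumption: the digest is SHA-256 over the exact UTF-8 bytes of that text,
# and "source.json" is a file into which the Source was copied byte for byte.
import hashlib

with open("source.json", "rb") as f:
    source_bytes = f.read()

print(hashlib.sha256(source_bytes).hexdigest())
# Prints a0a3961a269c5edce1606201de302fd6bc23cf1c1a6821210948b894f966c7f0
# only if the copied bytes are identical to the original source data.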