The piece of text below is a shortened, hashed representation of this content. It is useful to ensure the content has not been tampered with, as a single modification would result in a totally different value.
Value:
d6816e521977f0b27f19d9fdf9f60390d487fb06fb5a751c9e871f3eddf809d9
Source:
{"body":{"en":"<xml><dl class=\"decidim_awesome-custom_fields\" data-generator=\"decidim_awesome\" data-version=\"0.12.6\">\n<dt name=\"textarea-1772188078816-0\">Team name</dt>\n<dd id=\"textarea-1772188078816-0\" name=\"textarea\"><div>EPHEC #1</div></dd>\n<dt name=\"textarea-1772188112772-0\">Team members (First name, LAST NAME, University)</dt>\n<dd id=\"textarea-1772188112772-0\" name=\"textarea\"><div>Athena ENGOUDOU, Rania HOWLADAR, Paule NGASSAM TCHOUPO, Sacha RUBBENS, Triantafyllos Mandekis, EPHEC (everyone)</div></dd>\n<dt name=\"radio-group-1772188319073-0\">What area does your use case primarily fall under?</dt>\n<dd id=\"radio-group-1772188319073-0\" name=\"radio-group\"><div alt=\"daily life\">Daily life / student life / campus</div></dd>\n<dt name=\"textarea-1772792126695-0\">The AI use case you are working on</dt>\n<dd id=\"textarea-1772792126695-0\" name=\"textarea\"><div>The omnipresence of AI tools (scheduling, personal assistants, tutors, research) in students' daily lives. We want to observe how automating academic and non-academic life redefines our autonomy, social interactions on campus, and our relationship with \"disconnection.\"</div></dd>\n<dt name=\"textarea-1772792488518-0\">Why this use case matters</dt>\n<dd id=\"textarea-1772792488518-0\" name=\"textarea\"><div>There is a tension between the individual efficiency promised by AI and the risk of social isolation or invisible surveillance. This use case raises questions of equity (access to premium tools) and environmental impact related to constant, massive use, directly affecting the mental health and cohesion of the student community.</div></dd>\n<dt name=\"textarea-1772792380575-0\">Your team's motivation and learning objectives</dt>\n<dd id=\"textarea-1772792380575-0\" name=\"textarea\"><div>We want to understand where technological assistance ends and dependency begins. 
Our goal is to propose guidelines for a \"smart campus\" that preserves human connection and inclusion while managing the ecological footprint of digital technology.</div></dd>\n<dt name=\"textarea-1772792857176-0\">Your initial contribution</dt>\n<dd id=\"textarea-1772792857176-0\" name=\"textarea\"><div>AI in academic use\n\n1) What situation or context are you examining?\nAs university students who witnessed the beginning of AI use in academia back in 2022, and who now see AI used for projects as well as for entertainment, we would like to study the case of students becoming dependent on AI for studying, writing skills, thought processes and critical thinking, as well as the isolation caused by the implementation of AI in social media.\n\nSome students prefer to brainstorm with AI rather than by themselves, then proceed with those ideas without being able to explain them in depth.\n\n2) What is your critical analysis?\nAI has yet to reveal the full extent of the environmental and ethical consequences of its use, since it has only been in widespread use for a few years.\nMoreover, educational programs tend to integrate it and push its use on students.\nPeople around us increasingly rely on AI for what would have been described as an easy task a few years ago, such as writing a simple text.\nStudents and people in general also tend to ask AI questions instead of exchanging with others. This results in fewer connections between people and in self-isolation, and it limits the imagination and ideas that could spark from a discussion of a problem someone is going through.\n\n\n“Automation has the potential to reduce the cognitive load on individuals\nin decision-making processes. However, trust in automation may lead to excessive reliance on these systems, thus weakening critical thinking skills. 
This phenomenon, referred to as ‘automation bias’, reflects individuals’ tendency to trust the information and recommendations provided by automated systems without critical evaluation. For instance, excessive trust in automation systems used in critical domains such as healthcare, transportation, and finance poses significant risk factors, potentially leading to errors and adverse outcomes (Lee & See, 2004). Such reliance weakens individuals’ capacity for independent decision-making and alters the nature of their social interactions.”\nRefika, A. (2025). The transformation of trust in the screen society: Social isolation under the authority of automation. Etkileşim, 16, 358-391.\n\n4) What contributions are you proposing?\nOur proposal is for schools to encourage students to write their own texts, to be stricter about wording and ideas that come from an AI, and to steer AI use towards something closer to a spelling corrector.\n\nIt will be a challenge to reliably identify which texts were written by an AI and which were not.\n\nAI in social media\n\n1) What situation or context are you examining?\nSocial media are used nowadays either as a tool for faster, physically unlimited communication or as a tool to market yourself or your products and services. They are no longer used only for human relations.\n\nThe integration of AI into social media platforms has opened up a world of possibilities for users. With the ability to process vast amounts of data, AI algorithms can analyze user behaviour, preferences and interactions to deliver content that aligns with individual interests. This has impacted the way we consume and interact with social media content, making it more personal and engaging than ever before.\n\n2) What is your critical analysis?\nWhile AI brings several benefits to enhancing and personalizing user experiences in social media platforms, it also presents some challenges. 
There are challenges of data privacy and security, algorithmic bias, misinformation and fake news, and user manipulation and addiction. There are ethical concerns related to data privacy, uninformed consent, transparency, authenticity and the potential of AI to be used to manipulate customers (Paul et al., 2023).\n\nThe ethical problem with AI is that it does not hold the human values and ethics that most people share and that our laws protect. As a result, AI executes its programmed orders based only on efficiency towards the desired outcome, without moral judgement, and it can be used for immoral purposes.\n\n4) What contributions are you proposing?\nAn awareness program about the usage of AI, and about how our online behaviour is monitored for various purposes, should be implemented inside and outside of schools and higher education programs.\nStricter laws should be applied regarding data extraction and data usage through AI algorithms by individuals, but most importantly by corporations and the public sector.\n</div></dd>\n</dl></xml>"},"title":{"en":"Academic and non-academic usage of AI and the impacts in the life of the users in terms of social relations and skills"}}
This fingerprint is calculated using the SHA256 hashing algorithm. To replicate it yourself, you can use a SHA256 calculator online and copy-paste the source data.
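The replication step can also be sketched with Python's standard hashlib module. This is a minimal illustration of the general technique, not the platform's exact procedure: the precise byte encoding and whitespace of the source data above are assumptions, so the snippet hashes a placeholder string rather than reproducing the fingerprint shown on this page.

```python
import hashlib

def fingerprint(source: str) -> str:
    """Return the SHA256 hex digest of a UTF-8 encoded string."""
    return hashlib.sha256(source.encode("utf-8")).hexdigest()

# A single modification yields a totally different value:
a = fingerprint("example source data")
b = fingerprint("example source data.")  # one extra character
print(a)
print(b)
print(a != b)
```

Because SHA256 operates on raw bytes, the computed value matches the displayed fingerprint only if the pasted source is byte-for-byte identical, including whitespace and escape sequences.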