AI Tools Fuel Student Cheating, Prompting Calls for Corporate Accountability
The Verge

AI Agents Enter the Classroom

AI‑driven agents from several tech firms are increasingly capable of performing academic tasks on behalf of students. Demonstrations show OpenAI's ChatGPT agent generating and submitting essays on learning platforms such as Canvas, while Perception's AI assistant has completed quizzes and produced short essays. Educators describe these tools as "extremely elusive to identify" because they can alter their behavior patterns, leaving institutions with few reliable ways to detect cheating.

Company Perspectives

OpenAI has introduced a “study mode” that withholds direct answers, and its vice president of education stresses that AI should enhance learning rather than serve as an “answer machine.” Perception’s leadership acknowledges that learning tools have historically been repurposed for cheating, noting that “cheaters in school ultimately only cheat themselves.” Google defends its Chrome shortcut to Lens as a test of a visual‑search feature, stating that students value tools that help them learn visually. Instructure, the maker of Canvas, admits it cannot fully block external AI agents or tools running locally on a student’s device, describing the issue as partly technological and partly philosophical.

Institutional Challenges

Instructors have reported AI agents submitting assignments within seconds, a speed traditional detection methods cannot match. Efforts to block such behavior have been hampered by the agents' ability to adapt. Instructure's spokesperson explained that the company can't "completely disallow AI agents" and that existing guardrails only verify certain kinds of third‑party access. The platform serves tens of millions of users, including every Ivy League school and 40% of U.S. K–12 districts, amplifying the impact of any misuse.

Calls for Collaborative Solutions

Educators and policy groups are urging AI developers to take responsibility for how their tools are used in education. The Modern Language Association’s AI task force, which includes educators like Anna Mills, has called for mechanisms that give teachers control over AI agent usage in classrooms. Both OpenAI and Instructure have emphasized the need for a “collaborative effort” among AI firms, educational institutions, teachers, and students to define responsible AI use. However, concrete technical safeguards remain limited, leaving the burden of enforcement largely on teachers.

Source: The Verge
