Author: Xinxin
AI tools driven by large models are entering everyday life, and the loudest voices may come not from white-collar workers in offices but from students in schools, because generating an essay or short paper with ChatGPT is incredibly easy.
Because of this, after large models emerged, many teachers' first instinct was to "ban AI" or to redefine what counts as cheating.
But some people quickly realized that the real danger might not be cheating, but that students are comprehensively outsourcing the "learning process" of their brains to AI.
On the surface, homework has become easier and grades have improved. But an unsettling question arises at the same time: when students increasingly rely on AI for writing, answering questions, summarizing, and thinking, are they actually "learning"?
Or, in the AI era, do students still need to "learn"?
Education happens to be the third case. Nicholas Carr points out that students are, by definition, in the process of acquiring new skills they have not yet mastered. If AI "takes over" tasks before students gain that experience, whether solving math problems or writing papers, genuine skill growth is blocked.
Image source: substack
Furthermore, a person who rarely "thinks for themselves" may struggle even to write a good prompt in an AI chat window, let alone verify and improve AI output; these meta-skills depend on the user's underlying understanding of the subject.
Timothy Burke, a history professor at Swarthmore College, wrote that "to truly leverage current and near-future AI generative tools in research and expression, you must understand a lot yourself."
"Just as you cannot use a card catalog to find information if you don't know what to look for or what a card catalog is; or, back when Google search was at its most useful, you couldn't use it effectively unless you knew how to refine keywords, narrow the search scope, and mine useful results from one query to improve the next."
Education is such a field. When elementary students are just learning to read, AI can write book reports; when middle school students are just learning argumentation, AI can generate sophisticated essays with one click; when college students are just starting research, AI can automatically provide outlines, analyses, summaries, and citations.
These skills are replaced before they can be mastered. As a result, students not only "forget how to do" something but "never learned how to do" it, and sometimes aren't even aware that the copied content is an AI-generated "hallucination".
For instance, in AI-assisted coding, if students always let AI write code while skipping actual programming practice, they may lack the programming knowledge needed to debug or improve the AI's output when it is wrong.
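A minimal, hypothetical Python sketch (not from the article) illustrates the kind of subtle bug that can lurk in plausible-looking generated code: the function appears correct and passes a casual first test, yet a student who never learned Python's evaluation semantics has no way to explain or repair it.

```python
# Hypothetical AI-generated snippet: looks correct, hides a classic bug.
def add_grade(grade, grades=[]):  # BUG: the default list is created once
    grades.append(grade)          # and shared across every call
    return grades

print(add_grade(90))  # [90] -- looks fine on the first try
print(add_grade(85))  # [90, 85] -- earlier data leaks into this call!

# Spotting and fixing this requires knowing how Python evaluates
# default arguments; the standard repair uses None as a sentinel:
def add_grade_fixed(grade, grades=None):
    if grades is None:
        grades = []               # a fresh list per call
    grades.append(grade)
    return grades

print(add_grade_fixed(90))  # [90]
print(add_grade_fixed(85))  # [85] -- independent calls, as intended
```

Without that underlying knowledge, a student can only re-prompt the AI and hope, which is exactly the dependence the article describes.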
Nicholas Carr says it's like a generation of pilots who only know how to use autopilot - they can fly routine flights, but are helpless in emergency situations requiring manual control.
"We've been focusing on how students use AI to cheat. We should be more concerned about how AI betrays students," Nicholas Carr stated.
"With generative AI, a student who would normally be at B-level can write an A-grade work while becoming a C-level student."
04 The Paradox of Learning and Education
Traditional education has a basic assumption: if a student can submit a good essay, it means they can write; if they can solve a difficult problem, it means they understand the formula; a high score means mastering knowledge.
Now, AI seems to have broken this logic. Outside strictly proctored closed-book exams, writing well no longer proves the ability to write, and a high score no longer proves mastery: the result no longer equals the process.
Now, a perfect assignment might be written by ChatGPT; a logically rigorous paper might be drafted by DeepSeek; a high score might just be the result of skillful prompt usage. Sometimes, truly understanding isn't necessary - just using the tool, "outsourcing" the learning process to LLM is enough.
Campuses worldwide have responded, from classroom bans to AI-detection systems such as GPTZero and Turnitin, but these measures are largely ineffective and can sometimes "wrong" diligent students. Some students have even learned to game detection, asking AI to "dumb down" its output so assignments look more like student work, since many teachers assume "a student couldn't write this well".
In Asian countries, China's Ministry of Education released the "Guidelines for Generative AI Use in Primary and Secondary Schools" in May this year, warning against over-dependence on AI tools. Primary school students are prohibited from independently using open-content generation functions, but some auxiliary teaching uses are allowed, balancing different educational stages.
At the university level, Fudan University issued regulations prohibiting the use of AI in six parts of undergraduate theses that involve originality and innovation, while allowing it for literature retrieval and code debugging. Other universities have similar rules. A professor at Nanjing University's Literature Institute even gave a student zero for using AI to summarize "Dream of the Red Chamber".
Japan urges caution for younger users, prohibiting outright the use of AI to complete assignments and warning that, without proper safeguards, introducing AI too early might "stifle students' creativity and learning motivation".
In Africa, where educational resources are more scarce, some educators initially viewed AI as a "shortcut", but concerns exist. An African education journal's title posed the question: "ChatGPT - Cheating Tool or Learning Enhancement Opportunity?"
Image source: bizcommunity
In North America, the initial reaction was also intense. In early 2023, some regions blocked ChatGPT on school devices due to AI misuse concerns. A Pew Research Center survey showed only 6% of US K-12 teachers believed AI benefits outweighed drawbacks, with a quarter believing drawbacks outweighed benefits, and most teachers remaining in a state of anxiety and confusion.
But this wave couldn't be stopped. Bans began transforming into guidance, and currently, US universities are collaborating with tech companies like OpenAI and Anthropic to proactively introduce "educational AI".
Europe shows similar trends. Estonia, through its "AI Leap" national program, provides AI tools for students and teachers. Some UK universities have established principles encouraging responsible AI use, and some English schools are trialing AI classroom assistants.
This year, as evidence of AI's disruptive impact on education mounted, OpenAI CEO Sam Altman announced providing free ChatGPT Plus access to North American students for a certain period.
Nicholas Carr stated:
"In AI companies' eyes, students are not learners, but customers."
Image source: X
Facing this AI-versus-education challenge, regions have responded in different ways: redesigning assignments, holding face-to-face defenses, requiring in-class writing, and restoring more paper-and-pen exams, to make sure they are testing students, not "students + AI".
Some educators have realized that the crisis isn't just widespread cheating, but the potential collective atrophy of "thinking muscles" under AI assistance. Moreover, those skeptical of change might ask:
When AI can complete tasks for you, what do students actually need to learn?