A growing number of college students are relying on generative AI tools like ChatGPT for academic tasks, sparking concerns over AI dependency. This article explores how AI is transforming homework completion, the challenges it presents for educators, and how universities are responding with policies to ensure academic integrity.
"Almost everyone around me is using AI," "Using AI for homework saves at least half the time"...
As the wave of generative AI has swept across industries over the past two years, its role as an "AI assistant" has become increasingly prominent. Many university students and professors have grown accustomed to using the various large models on the market for tasks like paper editing and coding.
Xiao Jie, a third-year data journalism major at a 211 university in Beijing, shared her experience with Jiemian News. Her coursework often involves coding, such as web scraping and the quantitative research methods used in papers. She said AI has dramatically improved her efficiency in completing assignments.
For instance, when building a webpage, as long as she understands the basic structure of HTML, she can describe her needs to the AI and get a result within seconds, then copy and modify the output as needed. If problems arise, she can ask the AI follow-up questions. Xiao Jie likens AI to a teacher who is always available to answer questions.
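To illustrate the kind of coursework she describes, here is a minimal sketch of a web-scraping exercise of the sort a model can draft from a one-sentence prompt. It is hypothetical, not Xiao Jie's actual assignment: the URL, the function name fetch_headlines, and the choice of <h2> tags are placeholder assumptions, and it relies on the widely used requests and BeautifulSoup libraries.

```python
# A hypothetical sketch of the kind of course exercise described above:
# scraping headlines from a news index page. The URL, function name, and
# target tags are placeholders, not details from the article.
import requests
from bs4 import BeautifulSoup

def fetch_headlines(url: str) -> list[str]:
    """Download a page and return the text of its <h2> headline tags."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # fail loudly on HTTP errors
    soup = BeautifulSoup(response.text, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.find_all("h2")]

if __name__ == "__main__":
    # Placeholder URL; a student would substitute the page the assignment targets.
    for title in fetch_headlines("https://example.com/news"):
        print(title)
```

In the workflow she describes, a student would paste an error message or a revised requirement back into the chat and iterate on exactly this kind of starter code.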
As for AI's current level of competence, she believes it can produce high-quality static webpages and even handle some dynamic interactions. Outcomes still depend on the user's own skill, however, and computer science majors are likely to get better results out of AIGC tools.
Large models are stronger at semantic expression than at mathematical reasoning, and they are often criticized for producing content that "sounds logical but is actually nonsense."
Freshman Yang Liu is more cautious about letting AI generate entire essays. She typically uses AI to build a framework, then selects and revises the content herself. She is also wary of relying on information the AI provides, often tracing it back to original sources for verification. For less critical sections, such as abstracts and statements of research purpose, she lets AI generate a draft, which she then modifies.
Having grown accustomed to using AI, especially ChatGPT, in her academic life, Yang Liu chose to keep paying for the service after OpenAI introduced paid subscriptions. The paid tier unlocks more advanced models such as GPT-4 and plugin features, while the free tier is limited to GPT-3.5.
Rising Costs and Group Subscriptions
Subscription costs vary, with ChatGPT Plus priced at $20 per month and ChatGPT Pro at $200 per month, offering unlimited access to advanced models and features like GPT-4o and advanced voice tools. For many college students, these prices are steep, leading to the trend of "membership sharing" among peers. Posts seeking group subscriptions frequently appear on social media.
Some individuals have even found a business opportunity in the trend, selling shared accounts on e-commerce platforms under disguised listings like "Ultraman 4o" (in Chinese, Sam Altman's surname is transliterated the same way as "Ultraman"). Daily passes cost around 1 RMB, while monthly passes run approximately 60 RMB.
Xiao Jie views membership sharing as mutual aid among classmates, but Yang Liu opts out over concerns about data privacy and losing access to her own chat history. Many students have also reported scams tied to membership sharing on social platforms.
Teachers’ Concerns Over AI Dependency
As exam weeks and end-of-term deadlines approach, students are leaning on AI more heavily to complete assignments, prompting concern among educators. Initially, students used AI for simple tasks like editing, but teachers now worry that some are losing the ability to write creatively and think critically without AI assistance.
Xiao Jie admitted to initially feeling anxious about AI potentially replacing many aspects of her academic life. Now, she acknowledges her dependency on AI, often consulting it first when faced with problems. However, many answers still require human interpretation.
She also noted that as AI technology advances, teachers may raise their standards for assignments. Currently, while AI improves her efficiency in gathering information, professors expect original perspectives, which AI, being reliant on existing data, struggles to provide.
Wen Xin, a professor at a 211 university in Beijing, echoed this concern in an interview with Jiemian News, stating, "AI-generated answers may not be incorrect, but they lack substance."
For Wen Xin, the purpose of college courses—especially in humanities and social sciences—is not to instill rote knowledge but to build a broad knowledge system. Students are expected to identify their interests within this framework through personal reading and discussions, then explore further. Hence, choosing topics that genuinely interest them is crucial.
"I hope to see students working on topics they are genuinely passionate about, even if their thoughts are immature but sincere. Such personal reflections are meaningful and sustainable," Wen Xin remarked. AI-generated responses, however, often come across as reasonable but clichéd, which runs counter to the university's ethos of fostering "independent spirits and free thought."
Wen Xin emphasized that AI should be regarded as a tool in the learning process—neither the starting point of ideas nor the endpoint of completed work.
Policy Developments and Academic Integrity
Several Chinese universities have introduced policies to limit AI use. Fudan University recently issued some of the most stringent rules to date: a "six prohibitions" policy banning AI tools at key stages of undergraduate thesis writing, including designing research plans, devising innovative methods, building algorithm frameworks, and generating body text.
Some experts, like Guo Yingjian from Renmin University, have praised these guidelines while questioning certain aspects. For instance, prohibiting AI tools for language editing and translation might deprive students of opportunities to improve their academic expression.
Other universities, such as East China Normal University and Beijing Normal University, have issued AIGC usage guidelines requiring clear labeling of AI-generated content, with direct AI contributions capped at 20% of the total.
Fuzhou University uses an "AIGC detection system" to analyze student submissions, while Shenyang Agricultural University caps AI-generated content in assignments at 40%.
Internationally, universities like Cambridge and Harvard have also addressed AI use, focusing on educating students on appropriate utilization rather than outright banning it. Both institutions stress the importance of academic integrity, requiring students to disclose AI use and integrate AI-generated content critically rather than using it as a substitute for independent thought.
The Path Ahead
"AI is a universal tool, yet its use is highly personal. Current detection methods cannot definitively assess how one uses AI," Wen Xin observed. She concluded that the key lies in balancing AI’s utility without becoming overly reliant on it. As both AI and human experiences with it evolve, new insights and guidelines will likely emerge.