No more waiting. No more struggling.
Some of you might remember those late nights filled with chatter, coffee, and collective panic before an exam. Then came the rise of internet chatrooms, message boards, and endless social feeds. If you step into a dorm today, you’ll see fewer and fewer of those scenes.
Instead, you’ll witness students sitting alone in their dorm rooms, whispering into the night: “Which of these—A, B, C, or D—is the right choice?” You can almost hear ChatGPT softly chime back, “C,” watch the student select bubble C, and move on to the next question. All around them, untouched Scantron sheets and neon flashcards lie abandoned.
In the last few years, AI has been reaching quietly but insistently into your kid’s classroom, or creeping silently onto your office desktop, where everyone furtively hides their chat windows from peers, colleagues, and supervisors alike.
Picture this: your daughter conjuring a polished term paper in mere minutes, or your manager having your performance review “self-written” by a bot. But what I want to cover today isn’t the technology behind these neat solutions. It’s how AI is changing the ways students learn and network. It’s bigger than any individual, and more than data.
Yes, I will start with numbers today to set the stage. But there is much more in the second section of this article, where, drawing on the latest qualitative research, I will show you how students actually feel beneath all the talk of efficiency, almost as if you were watching them live.
In the end, I will widen the lens back to the workplace, bringing up two other reports, one by Microsoft and one by Slack, which have already observed a similar impact there since last year.
We’re all doing it, we all feel the same unease, and we’re drifting further apart from one another as we hide behind our screens.
TL;DR
AI on autopilot: An analysis of 1 million student chat sessions with Anthropic’s Claude AI found students mostly offloading higher-order thinking to the bot (writing code, drafting essays, solving problems) rather than just fact-checking. Nearly half the time, students simply threw a question at Claude and accepted the answer. Great for efficiency; not so great for building understanding.
Peer help is dying: In interviews with 17 college CS students, most admitted they now turn to tools like ChatGPT before asking classmates or TAs for help. Some even redirect their friends’ questions to AI (“Have you tried ChatGPT?”) instead of answering themselves. The result: fewer study sessions, less brainstorming together, and a breakdown of peer support networks.
Mentorship & belonging suffer: With GenAI as the go-to helper, students interact less with professors, tutors, and older peers. This means missing out on the “hidden curriculum” – those unwritten tips and encouragement you get from mentors. Many interviewed students felt more isolated and demotivated as their social learning communities faded. Quick answers are up, sense of belonging is down.
Shall we?
Students and Their AI Friends
Here's another scene from a college student's life in 2025.
It’s 2 AM, a paper is due tomorrow, and instead of panic-texting a friend or pulling an all-nighter… the student whispers to their laptop, ChatGPT’s reply flickering on the screen while the textbook stays unopened.
According to a new report from Anthropic (Anthropic Education Report: How University Students Use Claude), this scenario isn’t science fiction; it’s incredibly common.
They analyzed one million real-world anonymous student conversations with Claude to answer the following questions:
Is AI usage in education rising?
Who’s using Claude?
How are students using Claude?
Are they using Claude to cheat on homework or genuinely to learn?
What does all this mean for students’ critical thinking skills?
How Did Anthropic Study 1 Million Student Chats?
Behind the scenes, Anthropic’s Clio sliced and diced 1 million student conversations—and uncovered how students truly use AI to study (or dodge it).
Grab an 18-day snapshot of student chats
They pulled every conversation on Claude over an 18-day window, but only kept chats tied to higher-ed email addresses (like .edu and .ac.uk).
Auto deep-dive with Clio’s four-step pipeline
Clio pulls out key information (“facets” like topic, turn count, and language), groups similar chats into themes, then wires them up into a hierarchy so researchers can drill down by topic, language, or other dimensions (see the 4-step Clio workflow infographic).
Map clusters to disciplines
Each cluster was mapped to NCES discipline categories (U.S. bachelor’s-degree data) such as CS, Humanities, and Business.
Interaction-style classification
Each conversation was fed into Clio again to tag it as one of four quadrants.
Bloom’s Taxonomy labeling
Chats were labeled by cognitive depth (Remembering → Creating).
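If you prefer to see the shape of a pipeline like this in code, here’s a minimal sketch of what such an analysis could look like. To be clear, this is my own illustrative stand-in, not Anthropic’s implementation: the function names, quadrant labels, and keyword heuristics are assumptions, and the real Clio relies on language-model classifiers rather than keyword matching.

```python
# A rough, hypothetical sketch of a Clio-style analysis pipeline.
# NOT Anthropic's code: facet names, quadrant labels, and keyword
# heuristics are placeholder assumptions for illustration only.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Conversation:
    email_domain: str   # e.g. "cs.example.edu"
    messages: list      # raw student/assistant turns as strings


HIGHER_ED_SUFFIXES = (".edu", ".ac.uk")   # step 1: keep higher-ed accounts only
QUADRANTS = [
    "direct_problem_solving", "direct_output_creation",
    "collaborative_problem_solving", "collaborative_output_creation",
]


def is_student_chat(conv: Conversation) -> bool:
    """Step 1: filter the 18-day snapshot down to higher-ed chats."""
    return conv.email_domain.endswith(HIGHER_ED_SUFFIXES)


def extract_facets(conv: Conversation) -> dict:
    """Step 2: pull out key 'facets' (topic, turn count, ...) for clustering."""
    text = " ".join(conv.messages).lower()
    topic = "computer_science" if ("code" in text or "debug" in text) else "other"
    return {"topic": topic, "turns": len(conv.messages)}


def tag_quadrant(conv: Conversation) -> str:
    """Step 3: interaction style -- direct vs. collaborative, problem vs. output."""
    text = " ".join(conv.messages).lower()
    direct = len(conv.messages) <= 2                        # one-shot ask -> "direct"
    creating = any(w in text for w in ("write", "draft", "essay"))
    return QUADRANTS[(0 if direct else 2) + (1 if creating else 0)]


def tag_bloom(conv: Conversation) -> str:
    """Step 4: crude stand-in for Bloom's Taxonomy depth (Remembering -> Creating)."""
    text = " ".join(conv.messages).lower()
    if "why" in text or "explain" in text:
        return "understand"
    return "create" if "write" in text else "apply"


def summarize(chats: list) -> Counter:
    """Filter, then count how many chats land in each interaction quadrant."""
    return Counter(tag_quadrant(c) for c in chats if is_student_chat(c))
```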
The noise fell away, leaving 574,000 laser-focused academic exchanges. Here’s what they’ve found, and I think it's worth sharing with you.
The AI Adoption Gap: STEM vs. Non-STEM
What did they find?
Computer Science majors make up 38.6% of Claude’s educational chatter, despite being just 5.4% of U.S. bachelor’s grads. Meanwhile, Business and Health majors barely show up.
Why CS students dominate
Familiarity breeds usage. They’re already knee-deep in code and AI libraries, so Claude fits right into their workflow.
Perfect task-tool match. Programming assignments map neatly onto an AI assistant that can debug, explain, and even generate snippets.
Cultural momentum. STEM’s always been eager to adopt anything that boosts productivity—Claude is just the latest shiny object.
One professor I spoke with put it bluntly: he’d watch students eagerly dump their assignments into a code generator, then sit back, eyes glazed, accepting the machine’s neat solution without so much as skimming the logic.
They skip the head-scratching of tracing loops or hunting down off-by-one bugs; they just hit “accept” and move on.
How Do Students Collaborate With AI?
The researchers also reveal how students interact with AI tutors in four distinct ways: some productive, others problematic.
Let’s say a student is staring at a blank screen with an AI tab open in their browser; chances are, they are doing one of the following:
"Just Give Me Answers" Mode (Direct Problem Solving) On average, 25% of interactions.
"Solve question 3b" requests dominate here. Like using a calculator for 2+2, it gets results but skips learning. The report found instances of students asking Claude to "provide answers to a multiple-choice question" verbatim.The Ghost Writer (Direct Output Creation). around 24% of interactions.
From essays to marketing copy, students outsource creation. While some use this ethically (e.g., "generate a study guide template"), others cross lines with requests like "rewrite this to avoid plagiarism detection."Help Me Understand… (Collaborative Problem Solving). around 26% of interactions.
Here, AI becomes a patient tutor. A biology student might ask, "Explain why my cell division analysis is wrong," mirroring office hours conversations. These exchanges show promise for personalized learning.Brainstorm With Me… (Collaborative Output Creation). About 25% of interactions.
Students co-create complex work like lesson plans: "First outline climate change concepts, then suggest activities." This mirrors group project dynamics but risks over-reliance on AI input.
You’ve probably noticed that the two Direct modes, asking the AI to solve a problem or produce an output outright, together account for about 47% of interactions: quick fixes.
Minimal learning, maximum output.
That said, context matters enormously.
So the researchers can’t always be certain whether asking Claude to "explain Kant's philosophy" is cheating or studying. It all depends on whether it's for an exam or personal curiosity.
As an example from the report:
solve probability and statistics homework problems with explanations
This could equally be a student trying to develop understanding or bypass it…
There’s no easy answer – one student’s learning aid is another’s academic shortcut.
When AI Takes Over Higher-Order Thinking…
Look at the image below and you’ll spot a red flag: students skip straight to analysis and creation with AI, bypassing recall and comprehension. Experts warn that this inversion of Bloom’s Taxonomy undermines foundational learning.
The data tells a compelling story: nearly 40% of the chats involved "Creating" tasks (the highest cognitive level) and 30% involved "Analyzing" complex information.
Think questions like
Provide answers to machine learning multiple-choice questions
or
Provide direct answers to English language test questions
If these aren’t shocking enough… what about this one?
Rewrite marketing and business texts to avoid plagiarism detection
The rest of the chats presumably had more back-and-forth, using AI more like a Socratic tutor. Claude, to its credit, dutifully obliged. At least the report doesn’t sugarcoat the risk: when students offload the hard work to AI, they short-circuit the very point of education. In plain terms, if you let the robot do all the practice, you don’t learn to ride the bike.
As Anthropic's researchers warn:
An inverted pyramid, after all, can topple over.
While students might still engage with these higher-order skills alongside AI, many educators worry about the "cognitive crutch" effect.
When students routinely outsource their most complex thinking tasks, they are building expertise in managing AI outputs rather than developing their own analytical and creative abilities. Perhaps this is what the future needs? Never say never.
Then my question would be: how do they judge an outcome without even knowing what the right answer is?
Like muscles that atrophy without use, cognitive skills need regular exercise to develop fully.
Thinking is something you either use or lose.
Traditional assessment focuses on measuring how well students reach predetermined answers. But in a world where AI produces "potentially correct" outputs, we must shift toward evaluating process, creativity, adaptability, and metacognition—qualities that may prove more valuable than content knowledge alone.
Here’s more research on skill development, which shows that mastery requires deliberate practice, productive struggle, and building complex mental representations.
I guess the idiom “learn it the hard way” came from some wisdom.
Why AI Isn’t Just Shortcut 2.0
But hold on, you might ask: haven’t students always tried to game the system?
You and I have tried Googling during an at-home or open-book test, and some of us even hid a calculator in our palms or tried to get answers from classmates via social media… So, how is this different?
There is a difference: scale and subtlety.
Copying from Wikipedia is blatant; you still need to read the source, know what you’re looking for, maybe even consult your peers if you aren’t sure about the search result, and then tweak the wording so it reads properly and fits in as part of your answer.
On the other hand, getting a perfectly structured essay from Claude feels like magic. It’s easy, it feels intelligent, you don't even need to wait for answers from your friends, and it’s alarmingly hard to detect, which brings us to the next study …
If we’re all dodging the ‘cringe ask’ and turning to AI instead, fewer of us phone a friend for help… Then what happens to the friendships built on those small risks?
And if this played out at scale, how would our entire social network be reshaped?
The Silent Classroom. Beyond the Quantitative.
Think of the quantitative Anthropic study as a map with all the data you need to navigate. It charts who leans on AI, what they use it for, and how heavily they rely on it, revealing the contours and traffic patterns, but none of the stories from the road.
But the real gut-punch comes when you ask how this reshapes the landscape of learning and socializing with peers.
That’s where the second study comes in. See it as a travel diary, not a map. Instead of measuring distance, it captures the lived experience: the confusions, the struggles, and the conversations (or the lack thereof) along the way.
This qualitative study: “All Roads Lead to ChatGPT”: How Generative AI is Eroding Social Interactions and Student Learning Communities
The researchers interviewed 17 computing undergrads from seven R1 universities. Through 30–45-minute sessions, they tried to answer the following:
How does AI change how students seek peer, mentor, and instructor help?
What happens to the “hidden curriculum” when students turn to AI instead of each other?
How does reliance on AI affect students’ sense of community, motivation, and self-worth?
Are we trading the messy, rewarding process of learning together for efficient but lonely problem solving?
The Loss of Shared Struggle
Once, you'd huddle over a tricky problem alongside your classmate; now you're just ChatGPT's proxy.
Student 16 captures it:
Sometimes, they [peers] would ask me a question, and I would ChatGPT it and give it back. They’re like, ‘Thank you, you helped me so much!’ I’m like, ‘I did nothing.’ It’s such a thing now.
This coincides with thirteen of the 17 students reporting this relay race: even those who rarely use AI themselves become middlemen.
The convenience of AI is hard to ignore. As another student described:
There’s a lot you have to take into account: you have to read their tone, do they look like they’re in a rush…versus with ChatGPT, you don’t have to be polite.
In other words, there’s no need to worry about whether a friend is too busy to help—AI is always available, day or night.
But this 24/7 access comes at a cost. One reflected on what’s lost, recalling “…sense of comfort, knowing that my friend will be able to help me…” Those late-night sessions where “you know you’re both suffering in the assignment” are fading, replaced by the instant, solitary help of AI.
So while AI makes it easier to get answers anytime, it’s also quietly erasing the shared struggles that once brought students together.
The Unimpressive Uniformity of AI
AI’s polished answers flatten the beautiful chaos of human thinking. Student 11 calls it the "Times New Roman syndrome."
AI will only give you the direct best answer…which will work. But it can’t give you the different style of programming that humans have. My friends will have a different style of coding than I will. It’s like handwriting, which is something AI can’t replicate. AI will only give you Times New Roman, and like, people will give you handwriting.
You can think of this as learning to cook by microwaving a Tesco meal or beans on toast (sorry, allow me to make fun of it while I’m still here).
Every meal arrives plated, hot, and ready to eat. You never chop an onion, taste as you go, or burn a dish and learn from it. Sure, you’re always fed, but you miss out on the mess, the experiments, and the shared laughter in the kitchen that turn recipes into food with a signature.
The Mentorship Vacuum
The impact of AI on campus isn’t just about peer-to-peer help—it’s changing the entire academic hierarchy. As a fourth-year student put it, there’s now
… a lot less interaction between entry-level and more experienced [students]…There’s this disconnect: an over-reliance on AI and not really understanding problems and not asking people who actually work in the field for help.
This means students are missing out on the “hidden curriculum”.
The unwritten survival rules, like don’t just pick classes that sound easy… ask upperclassmen which professors are fair graders and which electives actually help with your major or career plans. The kind of advice you won’t find in the course catalogue.
Or, don’t just blast your résumé everywhere. Ask alumni which companies are actually good to work for, which teams offer real mentorship, and so on.
As Student 9 described,
Human conversations can have the added benefit of, like, you can get knowledge that you weren’t really intending to get… Professors who really know their stuff can explain it and also connect it to different concepts. I don’t think ChatGPT can do that.
In other words, the informal networks that help marginalized students navigate university life quietly dissolve as AI replaces those human connections.
So while AI might make it easier to get quick answers, the cost is the slow erosion of mentorship and of the invisible social safety net that helps students thrive.
The Crumbling Social Safety Net
The paper’s findings are stark… As their social support systems crumble one by one in the face of AI, students say they’re feeling more isolated and demotivated than ever.
For many, the shift is deeply personal.
Student 2, who uses genAI daily, admits,
I don’t really actively go out of my way to socialize with people… So if I’m relying more on GPT… If you’re alone, you might not even know about what’s out there…
Even the digital spaces that once kept students connected are fading.
Another student observes,
We used to in every class have a Discord. It used to be like a lot of people just asking questions about maybe like, a lab or a homework… I guess everyone’s just ChatGPT now. Like the new classes that I have now, we still have the Discord, but nobody really talks because most or all the questions are answered by ChatGPT.
Students are losing the sense of belonging that makes academic life meaningful.
As the authors warn, traditional peer support networks are fracturing—cutting off collaboration, mentorship, and community—and this fallout hurts… everyone, AI users and non-users alike. It’s especially damaging for underrepresented groups, who depend on those ties for motivation and a seat at the table.
The Shame Spiral
While 76% of students used generative AI regularly, many grappled with lingering guilt. Student 4 observed peers treating AI like a dirty secret:
People definitely use it. They just don’t talk about it…[Professors] allow you to use it. It still feels like it’s wrong somehow.
Students shared fears of being judged as ‘lazy,’ ‘stupid,’ or ‘foolish,’ and skepticism toward genAI users was common, with some describing reliance on these tools as a marker of being ‘less intelligent.’
This tension between acceptance and unease runs deep.
Students described fearing judgment. Here’s what a 3rd-year student commented:
The stranger might look at you and see your failure…
So we ended up with a fractured social landscape where AI use is both ubiquitous and unspoken.
Like whispering about therapy in the 1990s, students navigate a world where the tool is technically allowed but culturally loaded—a paradox that reshapes norms faster than institutions can adapt.
This reminded me of recent enterprise reports about how employees are using AI secretly at work.
In a 2024 Slack poll of 17,000 desk workers, 48% felt uneasy admitting AI use; Software AG found 50% rely on unapproved tools; and LayerX Security reported that 89% of GenAI activity stays hidden, all to dodge judgment or the fear of being seen as replaceable.
More Questions Than Answers
Shall we play a game?
Think back to your first job. The one where you learned the actual work… not the bullet points in the manual, but the unwritten rules. Your mentor showed you how to read a client’s tone in an email. A coworker warned you which VP’s “urgent” requests could wait. Those frequent late-night, pizza-fueled brainstorms that felt like a waste of time until someone scribbled that idea on a napkin.
Now, imagine a new hire today.
They’ve got Claude drafting emails, ChatGPT summarizing meetings, Microsoft Copilot searching the company’s knowledge base for them, and an AI agent auto-responding on Slack.
The chances for them to decode passive-aggressive feedback or learn how to write elegant code are fading. Why would they ever learn the old-school (hard) way?
The bot handles it.
Are we raising a generation of brilliant prompt engineers? Any student answering assignment questions with Cursor, ChatGPT, or Claude today is your future employee, one who will deliver code and reports without understanding the fundamentals.
Especially in the second study, as I said, there are so many similarities between students’ behaviours and those in the workplace. Some second-order observations:
This Is No Different Than “Google It“?
We’ve spent decades optimizing for efficiency—faster internet, smarter tools, leaner teams. Google became a verb in 2002, and ChatGPT has turned into one too. When we Google, we read, sort, and judge information; when we use ChatGPT, the AI handles everything from searching and sorting to self-evaluation, and hands us a finished report.
What skills are left for us to practice?
The Mentorship Black Market
Students avoiding TAs? Professionals are doing the same. As I mentioned, somewhere between 40% and 80% of enterprise GenAI usage happens off the books. Junior devs aren’t asking seniors for code reviews—they’re pasting errors into ChatGPT. Sales teams aren’t role-playing tough calls—they’re generating rebuttals via Claude.
What dies when mentorship and peer review go underground?
The Illusion of Competence
Those polished AI outputs? They’re auto-tuned vocals. Students fear being seen as “lazy” for using ChatGPT. But the real risk is forgetting how to think without a digital crutch.
Could your team spot a hallucinated statistic after outsourcing skepticism to the machine?
So here’s my proposition.
Next time you reach for your favorite AI sidekick, pause and ask yourself
What sparks might’ve ignited had you asked a real person?
Let’s argue about it over coffee. ☕ Here’s to a future of smarter, not just easier, learning.