
Why Thinking Hurts After Using AI

I summarized four recent research papers to show why and how AI erodes our critical thinking ability.

Look, I'll admit it: I messed up.

AI outputs can disappoint, but it's a two-way street. Yes, the models hallucinate and have their off days. But the quality of your prompts matters - and sometimes, I get lazy. Seduced by AI's convenience, I'd rush through tasks, sending unchecked emails and publishing unvetted content.

I try my best to triple-check everything now. But those moments of exhaustion? Millions of years of evolution didn't exactly equip humans with the robotic consistency AI can achieve.

This research from Microsoft sent a shockwave through the industry, suggesting that frequent AI usage is actively reshaping our critical thinking patterns. And some groups will bear the brunt of this shift more than others.

A 2023 paper saw this coming, highlighting two skills that would become essential in the AI era. Take a guess.

Critical thinking and science.

Not coding. Not data analysis. Not even AI engineering. But the fundamental human capabilities that separate strategic thinking from mechanical execution.

In this piece, we'll examine how Gen AI quietly reshapes our cognitive landscape, using the latest research to map this transformation. But more importantly, we'll confront the second-order effects that nobody's talking about.

Because in our profit-obsessed world, who's thinking about the widening skills gap? Will business owners prioritize this issue? Or are we sleepwalking toward a future where we're eroding the very capabilities that make us human?

Shall we?


Skills That Make You Irreplaceable

So, we've established that AI is shaking things up. But what does that actually mean for your job, your skills, and your future?

Researchers at OpenAI and the University of Pennsylvania dug into this very question in their paper "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models."

They didn't just guess, of course. They took a massive database of jobs and the tasks those jobs involve (called O*NET). Then, they asked both humans and GPT-4 to rate how much each task could be sped up by using AI.

They focused on evaluating individual tasks instead of an entire job. Think of it like this: Could AI help you check grammar mistakes, even if it couldn't write the whole report?
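To make the task-level idea concrete, here's a minimal sketch of that kind of scoring. The occupations, tasks, and ratings below are invented for illustration; they are not the paper's actual data or rubric.

# Minimal sketch of task-level exposure scoring. All names and numbers
# are hypothetical; 1.0 means "an LLM could cut this task's time in half
# at equal quality," 0.0 means "no meaningful speed-up."
from statistics import mean

occupations = {
    "technical writer": {
        "draft articles": 1.0,
        "interview experts": 0.0,
        "proofread copy": 1.0,
    },
    "lab scientist": {
        "design experiments": 0.0,
        "run assays": 0.0,
        "write up results": 1.0,
    },
}

# An occupation's exposure is just the share of its tasks AI can speed up.
for job, tasks in occupations.items():
    print(f"{job}: {mean(tasks.values()):.0%} of tasks exposed")

Scoring tasks instead of whole jobs is what lets the paper say "AI touches most jobs a little" rather than the cruder "AI replaces these jobs entirely."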

This table is where things get really interesting (and relevant to our topic today). Think of it as a cheat sheet revealing which skills will become less valuable and which will become your superpowers in the AI era.

Let's break it down, plain and simple.

Think of the numbers in this table like this (we'll focus on the "β" column, which is a good middle-ground estimate; a toy code sketch follows the list):

  • Positive Number (like Writing's 0.467): The more a task relies on this skill, the more likely AI is to impact it.

  • Negative Number (like Science's -0.230 in the β column): The more a job relies on this skill, the less likely AI will impact it. It's like saying, "The more a day-to-day task requires scientific reasoning, the safer this task is from direct AI impact."

  • A Bigger Number (either positive or negative, just further away from 0): Indicates a stronger, more predictable relationship between how important a skill is to a job and how likely AI is to impact that job.
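If it helps to see this as code, here's a toy sketch of how such coefficients combine with skill-importance ratings. The four β values are from the table above; the roles and importance weights are invented.

# Toy reading of the β column. The betas are the table's; every role and
# importance weight below is hypothetical.
betas = {
    "writing": 0.467,
    "programming": 0.623,
    "critical_thinking": -0.196,
    "science": -0.230,
}

def predicted_exposure(skill_importance):
    # Weighted sum: positive betas push predicted AI exposure up,
    # negative betas pull it down.
    return sum(betas[skill] * weight for skill, weight in skill_importance.items())

writer = {"writing": 0.9, "programming": 0.1, "critical_thinking": 0.4, "science": 0.0}
scientist = {"writing": 0.3, "programming": 0.2, "critical_thinking": 0.8, "science": 0.9}

print(f"content writer: {predicted_exposure(writer):+.3f}")    # ~ +0.40, more exposed
print(f"lab scientist:  {predicted_exposure(scientist):+.3f}")  # ~ -0.10, safer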

Let's look at some key skills and their scores:

  • Writing (0.467): Big, positive number = a huge red flag. Tasks that involve a lot of writing are highly likely to be affected by AI. Think content creation, report writing, or crafting emails, i.e., tasks you have probably already delegated to AI.

  • Programming (0.623): An even bigger positive number! If your job involves coding, well… you've been using GitHub Copilot or Cursor, so you know this better than anyone. This doesn't mean programmers are obsolete; we will discuss this in the next section.

  • Critical Thinking (-0.196): Negative number. Jobs requiring critical thinking – analyzing information, making judgments, and solving complex problems without clear-cut answers – are less susceptible to AI's impact. As I said before, AI can generate text; it can't (yet) truly think.

  • Science (-0.230): Another negative number! Jobs relying heavily on scientific methodology, experimentation, and deep domain expertise are relatively safe. AI can help with data analysis, but it can't replace the thinking bit.

It's not about "high-skill" versus "low-skill" tasks but the skills that make humans human.

Skills that involve routine, repetitive tasks, even if they require training (like basic coding or writing formulaic reports), are the ones most at risk.

Yet, there's a brutal irony emerging. The very tools helping us work 'smarter' are quietly eroding our most valuable cognitive defenses.

Let's examine the evidence.


Trading Brainpower for AI Efficiency

The skills landscape is shifting.

Yes, critical thinking, scientific reasoning, and complex problem-solving are becoming your armor in an AI-driven world.

But what does this actually mean in practice? How is Gen AI changing how our minds work, and what are the trade-offs?

Before we dive deeper, I want you to try something. Open up your favorite AI tool – ChatGPT, Gemini, DeepSeek, or whatever you use. Give it this prompt (tweaked for your specific role):

I need to analyze the critical thinking requirements of a [YOUR JOB TITLE] role.

First, generate a comprehensive list of typical daily and weekly tasks for this position, based on standard industry expectations.

Then, analyze each task and assign a "Critical Thinking Score" (0-100%) based on how much it requires:
- Analysis of complex information
- Independent judgment
- Problem-solving without clear solutions
- Strategic decision-making

Format output as CSV with columns:
Task, Critical_Thinking_Score, Reasoning

Sort by Critical_Thinking_Score in descending order.

Go ahead; I'll wait...

Does the result match your own assessment? Either way, it's a useful sheet to keep track of. This mini-exercise highlights the core dilemma we're about to explore: the double-edged sword of AI.
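If you want to work with that sheet, a few lines of Python will turn the model's CSV reply into a sorted checklist. Assumptions: you saved the reply as tasks.csv (a hypothetical file name), and the columns match the prompt above.

# Parse the CSV the prompt asks for. "tasks.csv" is a hypothetical file
# name; the columns come from the prompt: Task, Critical_Thinking_Score,
# Reasoning.
import csv

with open("tasks.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Highest critical-thinking score first; tolerate "85" or "85%".
rows.sort(key=lambda r: int(r["Critical_Thinking_Score"].rstrip("%")), reverse=True)

for row in rows:
    print(f'{row["Critical_Thinking_Score"]:>4}  {row["Task"]}')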

The Irony: AI Is Eroding Critical Thinking

ChatGPT was launched two years ago. Since then, research labs have been mapping a troubling trade-off: efficiency vs. thinking capacity.

I've analyzed three key studies (and requested interviews with all of their authors) that expose this pattern:

Gen AI tools are undeniably powerful.

The "GPTs are GPTs" study, for example, found that LLMs could complete, on average, 15% of all worker tasks significantly faster at the same level of quality, just with access to an LLM. And with some additional helper tools, this increased to between 47% and 56% of all tasks. That is a massive boost! The "AI Tools in Society" paper also concludes that AI offers "enhanced efficiency and unprecedented access to information."

But there's a catch.

Some studies identified an urgent issue.

They found a strong negative correlation (-0.68) between AI tool use and critical thinking skills. In other words, the more often you use AI tools, the less critical thinking is involved.
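To get a feel for what a correlation of that size looks like, here's a small simulation. The data is fabricated, not the study's, but it produces a similar r.

# Fabricated data, not the study's: simulate an inverse relationship and
# measure its Pearson correlation.
import numpy as np

rng = np.random.default_rng(42)
ai_use = rng.uniform(0, 10, size=200)           # weekly hours of AI tool use
noise = rng.normal(0, 1.9, size=200)
critical_thinking = 8 - 0.6 * ai_use + noise    # invented downward slope

r = np.corrcoef(ai_use, critical_thinking)[0, 1]
print(f"Pearson r = {r:.2f}")                   # about -0.7 with this setup

A correlation alone doesn't prove AI causes the decline; it says the two move together, which is exactly why the mechanism studies below matter.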


The “Impact of Generative AI on Critical Thinking” study highlights this in its conclusion:

Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work. It can potentially lead to long-term overreliance on the tool and diminished skills for independent problem-solving.

Take a moment to reflect:

  • When was the last time you truly wrestled with a problem? The kind that forced you to take a deep breath and stay focused to complete it.

  • How often do you verify the information AI provides?

This is less about AI itself. For people like you and me, it's about our over-reliance on AI.

Less About What You Do; More About How You Do It.

Forget the outdated idea of robots stealing jobs wholesale.

The shift is subtle yet profound.

It's a change spreading as fast as GenAI tools are being adopted, yet so gradually that many of us haven't even noticed. We've been unconsciously adapting to a new way of working.

Think about your day-to-day.

  • Are you spending more time editing AI-generated drafts, from emails to reports?

  • Still building reports from the ground up, or are you focusing on refining AI's analysis?

  • Coding every line yourself, or are you verifying Copilot's suggestions and integrating them into a larger project?

This isn't about automation replacing workers entirely. A study calls this a move "from material production to critical integration." You're becoming less of a creator and more of a steward, a verifier, and a curator of AI-generated output.

The AI can generate text; it can't (yet) apply the nuanced judgment needed to make that text truly effective and relevant. Hence, critical thinking comes into play. It allows you to evaluate the quality of AI's output, identify biases, spot inaccuracies, and integrate that output into a larger, more complex context.

I reshuffled the order of the capabilities mentioned in the study so they form a mini framework you can check against your current workflow:

  1. Task Stewardship: Many get this wrong. Ask yourself:

    • How often do you have a clear goal in mind when using Gen AI?

    • Can you define AI’s limitations clearly and know when to take over?

  2. Information Verification: Can you distinguish between reliable information and AI-generated hallucination?

  3. Response Integration: How quickly and accurately can you take a piece of AI-generated content and seamlessly weave it into your own work?
    Simply copy-pasting won't cut it. You need to judge whether the output meets your goal and then adapt it to fit the final result.

Let's take software developers as an example. I wrote a piece last year about whether AI boosts developers' productivity:

Combine this article with the findings from the "Widening Gap" study. Most senior developers follow the same three steps as I mentioned above: they understand the architecture and where a task fits in; then they use GenAI tools to help them complete a small piece of work; finally, they integrate it into the existing codebase.

Newbie programmers using GenAI, by contrast, faced the following metacognitive difficulties:

  • Interruption: Constant AI suggestions disrupted their thought process.

  • Misdirection: AI led them down the wrong path, providing incorrect or unhelpful code.

  • Progression: They struggled to understand the underlying principles, even when the AI provided a working solution.

So you see, the bar for getting a job is already higher and more demanding than ever.

But how confident are you that you aren’t over-dependent on AI? What about those who are early in their careers? Are they falling into a trap? Overconfidence in AI, fueled by inexperience and, yes, a bit of human laziness, is creating a widening gap.

Studies are already seeing the cracks.

More AI Usage = Less Thinking?

There's a hidden danger lurking beneath the surface: a false sense of security – a dangerous disconnect between how good we think we are at using AI and how effectively we're actually using it.

All these studies uncovered a chilling "confidence paradox." The more confident people were in AI's abilities, the less likely they were to engage in critical thinking.

Two tables from separate studies explained this paradox the best.

I want you to imagine that you're driving a car with a highly advanced autopilot system. This system can handle almost all aspects of driving. However, you, the driver, are still ultimately responsible. I categorized ‘drivers’ into two groups: those with strong critical driving skills and those with weaker ones.

Table 4: Non-standardised coefficients of the mixed-effects regression models. — "The Impact of Generative AI on Critical Thinking"

Drivers WITH Strong Critical Thinking Skills:

  • Experienced, reflective drivers, even with autopilot, constantly monitor the road and the system's actions, ready to intervene. (0.52***, Tendency to reflect)

  • Confident, skilled drivers, even with autopilot, remain engaged, ready to take over if their skills are needed. (0.26*, Confidence in self)

  • Drivers who are confident in judging when autopilot might be wrong are more likely to step in and correct it. (0.31*, Confidence in evaluation)

Drivers WITHOUT Strong Critical Thinking Skills:

  • Drivers who trust the autopilot and believe it can handle anything are less likely to pay attention, potentially missing crucial errors. (-0.69***, Confidence in AI)
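Reading those coefficients as code makes the paradox starker. The four coefficients are Table 4's; the two driver profiles below are invented, and the output is in arbitrary units.

# Table 4's coefficients; the driver profiles are hypothetical.
coefs = {
    "tendency_to_reflect": 0.52,
    "confidence_in_self": 0.26,
    "confidence_in_evaluation": 0.31,
    "confidence_in_ai": -0.69,
}

def predicted_engagement(profile):
    # Linear predictor: each trait shifts predicted critical engagement
    # by its coefficient. Trust in AI is the only negative weight.
    return sum(coefs[trait] * value for trait, value in profile.items())

skeptic = {"tendency_to_reflect": 0.9, "confidence_in_self": 0.8,
           "confidence_in_evaluation": 0.8, "confidence_in_ai": 0.3}
trusting = {"tendency_to_reflect": 0.3, "confidence_in_self": 0.4,
            "confidence_in_evaluation": 0.3, "confidence_in_ai": 0.9}

print(f"skeptical driver: {predicted_engagement(skeptic):+.2f}")   # ~ +0.72
print(f"trusting driver:  {predicted_engagement(trusting):+.2f}")  # ~ -0.27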

Data from another study points in exactly the same direction.

Let's continue with our car and advanced autopilot analogy. The next table describes the relationship between how much a driver uses the autopilot overall ("AI Tool Use") and how much they rely on it specifically for making driving decisions ("Cognitive Offloading").

Table 5: Correlation matrix. — "AI Tools in Society"
  1. AI Use ↑, Cognitive Offloading ↑ (r = 0.89): More autopilot use strongly leads to more reliance on the system. Hence the very strong positive correlation of 0.89.

  2. AI Use ↑, Critical Thinking ↓ (r = -0.49): Frequent autopilot use is associated with a decline in core driving skills. The negative correlation of -0.49 reflects this.

In short: more AI use → more cognitive offloading → less critical thinking. That is, the more drivers trusted the AI, the less attention they paid to the road.
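A tiny simulation shows how that chain produces exactly this correlation pattern. The numbers are invented, not the study's data.

# Invented numbers, not the study's data. AI use drives offloading, and
# offloading drags down critical thinking, so AI use and critical thinking
# end up negatively correlated even though the link is indirect.
import numpy as np

rng = np.random.default_rng(0)
n = 500
ai_use = rng.uniform(0, 10, n)
offloading = 0.9 * ai_use + rng.normal(0, 1.0, n)          # use -> offloading
critical = 10 - 0.5 * offloading + rng.normal(0, 2.0, n)   # offloading -> thinking

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"AI use vs offloading:        r = {corr(ai_use, offloading):+.2f}")  # strongly positive
print(f"AI use vs critical thinking: r = {corr(ai_use, critical):+.2f}")    # clearly negative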

It's not a surprise that we're happy to outsource our thinking. We're letting AI handle tasks that we could do ourselves but choose not to.

If AI is the GPS, are you learning the route or just following the turn-by-turn directions?

How Gen AI Widens the Skills Gap

It turns out that experience plays a bigger role than ever, and that's creating a widening gap. Of course, experience alone is no guarantee; you still need to exercise critical thinking.

Anyway, imagine you've just graduated and landed your first job, and you're eager to prove yourself. But you're also, understandably, lacking in experience. Your senior colleagues, on the other hand, have been there and done that.

They've seen things go right, and more importantly, they've seen shit hit the fan and done the late-night cleanup (not literally, of course). So, the seniors developed a gut feeling – an intuition – for what works and what doesn't.

This is what Marvin Minsky called "negative expertise," and it's incredibly valuable.

Now, throw GenAI into the mix.

For the experienced worker, GenAI is a force multiplier. They use it to accelerate tasks they already know how to do. They can quickly spot when AI is going off the rails because they have that "negative expertise" – they've seen similar mistakes before.

But for the novice, AI is a minefield. They might be tempted to rely on it too much, accepting suggestions without fully understanding the underlying principles. The newbies are more likely to fall into what the "Widening Gap" study called "drifting" – aimlessly switching between AI suggestions and making little real progress. They lack the mental model, the framework, to effectively guide the AI.

The researchers observed that many novice programmers using GenAI tools exhibited "metacognitive difficulties." That's a fancy way of saying they struggled to think about their own thinking. The novice programmers were:

  • Interrupted: Constantly distracted by AI suggestions, breaking their concentration. One participant said, "These prompts are distracting sometimes" and "I’m trying to think of...never mind, wait".

  • Misled: Led down the wrong path by incorrect or unhelpful AI suggestions.

  • Stuck in a Loop: They struggled to understand why the AI-generated code worked (or didn't work), revealing a lack of foundational knowledge.

These novices weren't necessarily lazy. They were often genuinely trying to learn. However, they lacked the experience to effectively filter and integrate AI's output.

Many struggling programmers using AI thought they understood the code better than they actually did, even when it was wrong. The AI's help tricked them into feeling confident, making it harder for them to realize they were making mistakes.

The Education Gap (A Quick Note):

It's not just about years on the job. Education level plays a role, too. A study found that participants with higher educational attainment were more likely to cross-check AI-generated information.

People with higher education are more skeptical and, consequently, apply critical thinking. This suggests that formal education, with its emphasis on analysis and evaluation, might provide some protection against AI over-reliance.

The risk of deskilling specific groups in society is a genuine concern. Failing to develop foundational knowledge and critical thinking abilities will make it even harder for them to succeed in the long run.


Questions To Save You From Sleepwalking Into the Future.


So, I've laid out the looming threat: AI-powered efficiency erodes your critical thinking. However, the real question isn't just if this is happening but where you stand – and what the fallout will be.

This isn't some abstract academic debate (even though I've referenced multiple studies). This is a chasm cracking open in your career and everyone else's.

Here's my blunt, no-sugarcoating take on what will happen in the commercial world:

The "Easy Job" Paradox

You probably agree with me about the importance of critical thinking, but let's face it: many roles don't explicitly reward it.

Maybe it's dev, marketing, customer support, or whatever your current job is. Sure, some people in these roles are strategic geniuses. However, many follow processes, execute tasks, and rely on existing frameworks.

If your employers are happy for AI to do 80% of your tasks (and you “monitor”), and critical thinking is just a "bonus," where's the incentive for you or your employer to develop it now?

The Incentive Mismatch (My Biggest Worry)

Will your company actually invest in upskilling you if your critical thinking fades because you're leaning on AI for efficiency?

Oh God, yes, I have seen upskilling programs in enterprises. But resources are distributed unevenly, and quality is inconsistent.

Or maybe it will be cheaper, faster, and, frankly, easier to hire from the shrinking pool of critical thinkers – and let the rest figure it out? I'm a realist. Profit trumps long-term employee development. It's just business.

Can You Train Someone To Think Critically?

There are frameworks, sure.

If you'd describe yourself as a critical thinker, I want you to think about this:

Did you learn critical thinking from a corporate training session? Or did it come from experience, from wrestling with messy problems, from a scientific approach – things that are damn hard to replicate in a two-hour Zoom call or a 1-day workshop?

I got it because of my STEM background, my career as a technologist, and all those evening debates on strategy with my partner.

The Education Echo

Higher education seems to correlate with better critical thinking.

Is that because of the education itself or the kind of person who gets that education? And do people coming from STEM actually have an edge? If yes, what should people do to close that gap?

The "Experience" Illusion

Yes, experience matters, especially that "negative expertise" – knowing what doesn't work.

But what if your experience was in a role that actively discouraged critical thinking? Years spent following procedures, executing someone else's vision, and letting AI handle the analysis won't magically turn you into a critical thinker. "Experience" alone isn't your armor; it's the kind of experience that counts.

And I am asking you to examine yours.

Final Thoughts

Again, I'm not against AI. I'm all for progress.

However, I enjoy critical thinking and research too much for AI to take it away. I see no point in living if I can no longer think independently one day.

Are you sleepwalking into the future, or would you rather take control of your own fate? What is the long-term impact on our brains when we stop thinking critically? If you are like me, what's your plan to keep your mind sharp?

I hope my questions help you recognize the unintended consequences of this technological revolution. Better not to wait until that chasm becomes too broad for anyone to jump across.

Because, frankly, no one else will do it for you.
