Right then, pour yourself a cuppa (or whisky, wine, whatever works for you).
Same as I always do, I’ll take you through two studies. Today’s little chat is about how our new digital overlords (aka AI) and we humans are in resonance.
In physics, resonance is the phenomenon where a system responds with a larger amplitude when an external force is applied at a frequency equal to the system's own natural frequency.
⚠️ Key points: larger amplitude + equal to its natural frequency. ⚠️
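For the physics-minded, here's the textbook version, a driven, damped oscillator; nothing AI-specific, and the symbols below are the standard physics ones rather than anything from today's studies:

```latex
% Steady-state amplitude of a driven, damped harmonic oscillator:
% F_0 is the driving force, m the mass, \omega the driving frequency,
% \omega_0 the natural frequency, and \gamma the damping rate.
% The response A(\omega) peaks sharply as \omega approaches \omega_0.
A(\omega) = \frac{F_0/m}{\sqrt{(\omega_0^2 - \omega^2)^2 + (\gamma\omega)^2}}
```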
Think of all our thoughts and output as a wineglass, and AI as the perfect pitch that shatters it, not by being foreign, but by matching what is already there.
That's how you and I have become unwitting subjects in the largest uncontrolled experiment (neither good nor bad) on our cognition ever conducted. AI is reshaping our minds in real time, and we've barely begun to understand the implications. From critical thinking skills that erode when we lean on AI too heavily, to the work I'm covering today: AI is affecting your behavior more profoundly than any human influence could.
No, I'm not going mad. This is still study-based work, as always. So just read on.
You have likely observed by now that many CEOs and even users like ourselves so eagerly invite AI into our lives, businesses, decision-making processes, and so on. They’re meant to be the tireless, digital guru who can cut through the noise. But what if, in our eagerness to delegate, we’re accidentally teaching them to be, well, a bit like us, only more so?
By "like us", I mean they're potentially even more biased and skewed than we are, while many of us remain unaware, assuming they're the height of objectivity.
This study I’m covering today is titled: How human–AI feedback loops alter human perceptual, emotional, and social judgements.
It’s a rather fascinating, and frankly, slightly unsettling, phenomenon of the human-AI feedback loop. It’s a bit like that old game of telephone, but instead of a garbled message about a purple monkey dishwasher, we could end up with significantly amplified biases shaping everything from who gets a loan to what news we see, or even how we perceive emotions.
Even more so...
The AI, bless its silicon heart, might be doing a better job of convincing us of its skewed worldview than another human ever could.
Shall we?
When Machines Learn Our Rubbish Thinking
The Glickman and Sharot study didn’t just pull these ideas out of thin air. They ran some rather clever experiments, the kind that likely make you go “Oh dear.”
🤩 Exciting opportunity!
I might have the chance to interview one of the authors behind this groundbreaking Nature Human Behaviour study on human-AI feedback loops we discussed today!
Have burning questions about this topic?
Send them my way ASAP! Paid subscribers' questions get priority, but I'll try to include as many thoughtful ones as possible.
Your chance to get insights straight from the researcher! 🤩
How AI Turned a 'Bit Sad' into an 'Emotional Swamp'
Imagine that you were one of the participants.
You are briefly shown an array of 12 faces, each one a blend between happy and sad expressions. Your task is to decide whether, on average, the group of faces looks happier or sadder, and then click the corresponding button to indicate your choice.
Simple enough, hmm?
Well, it turns out most participants (humans in general) have a slight tendency to see things a bit more negatively (53% of the time) when rushed. In the experiment, people initially showed a small bias towards classifying these ambiguous face arrays as “more sad.”
Now, here’s where it gets interesting.
The researchers took these slightly sad-biased human judgments and fed them to a Convolutional Neural Network (CNN), a type of AI that’s pretty good at image recognition. Think of it as training a very keen digital apprentice that has no opinion of its own.
Did the AI learn to be as subtly biased as its human reference?
Well… no, that would be too simple, wouldn’t it?
The AI didn't just adopt the bias; it amplified it. It became significantly more likely to label arrays as "more sad" than the humans it learned from: the humans' 53% rate of "more sad" classifications became 65.33% in the AI's hands, a substantial amplification of the original bias.
The researchers specifically note:
the AI algorithm greatly amplified the human bias embedded in the data it was trained on.
This amplification effect was even more dramatic when the data were noisy. With essentially random labels carrying only a small 3% bias, the AI amplified that into a 50% bias, classifying 100% of arrays as "more sad". It's like running a coffee shop and telling your barista that customers slightly prefer mocha over latte (53% to 47%), only to find they've served nothing but mocha, assuming every customer must prefer it.
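If you'd like to see the mechanism with your own eyes rather than take my word for it, here's a minimal sketch in Python. It's my own toy logistic regression with made-up numbers, not the study's CNN or its data: when the labels for genuinely ambiguous inputs carry a slight "sad" lean, the trained model ends up calling "sad" even more often, because the majority label wins wherever the signal is weak.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 5000
# Each "face array" is summarised by one noisy emotion score around neutral (0).
signal = rng.normal(loc=0.0, scale=1.0, size=(n, 1))

# Human-style labels: they mostly follow the weak signal, but carry a small
# overall lean towards "sad" (label 1), roughly 53% of the time.
p_sad = 1.0 / (1.0 + np.exp(-(0.5 * signal[:, 0] + 0.12)))
labels = rng.binomial(1, p_sad)
print("human 'sad' rate:", labels.mean())   # roughly 0.53

# Train a classifier on those labels and check how often IT says "sad".
clf = LogisticRegression().fit(signal, labels)
preds = clf.predict(signal)
print("AI 'sad' rate:   ", preds.mean())    # noticeably higher than the humans'
```

On these made-up numbers the model lands somewhere around 60% "sad", well above the ~53% it was taught; the same shape as the paper's 53% to 65.33%, just in toy form.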
But the story doesn't end there; this is a feedback loop, remember?
So, the researchers brought you in again.
After giving your initial judgment, you were shown the AI’s (now rather gloomy) opinion and asked if you wanted to change your mind. And what happened? Over time, to your surprise, you started to agree with the AI!
The study showed that humans started to become more biased themselves, the participants’ own judgments drifting towards the AI’s amplified negativity. Their initial, slight lean towards sadness became a more pronounced bias, all thanks to their interaction with the AI.
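The other half of the loop is just as easy to sketch. Again, this is a toy of my own, not the paper's model: a "human" who starts at a 53% sad rate keeps seeing verdicts from an AI stuck at 65%, and nudges their own tendency towards the AI's answer whenever the two disagree.

```python
import numpy as np

rng = np.random.default_rng(1)

human_sad_rate = 0.53   # the human's initial tendency to call an array "sad"
ai_sad_rate = 0.65      # the AI's amplified tendency (held fixed here)
pull = 0.02             # how strongly each disagreement nudges the human

for trial in range(200):
    human_says_sad = rng.random() < human_sad_rate
    ai_says_sad = rng.random() < ai_sad_rate
    if human_says_sad != ai_says_sad:
        # After seeing the AI's verdict, drift a little towards its answer.
        target = 1.0 if ai_says_sad else 0.0
        human_sad_rate += pull * (target - human_sad_rate)

print(f"human 'sad' tendency after 200 trials: {human_sad_rate:.2f}")
```

Run it and the human's tendency creeps up from 0.53 towards the AI's bias. Nobody decides to become gloomier; the drift just accumulates.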
And here’s the sneaky part…
The participants were often unaware of just how much the AI was swaying them. They were more susceptible to its influence, perhaps because AI often comes with an implicit assumption of neutrality or superiority. To quote the authors:
… This may be because participants perceived the AI systems as superior to humans on the task…
I wouldn’t describe this as AI shouting its opinions (that would make it sound like a being instead of a machine); it’s presenting “data-driven insights,” which can be a much more persuasive, and potentially misleading, approach.
The Curious Case of the Drifting Dots (the AI That Knew Which Way the Wind Blew… Or Didn’t)
The researchers then turned their attention to a different kind of judgment: perception.
So you were asked to watch a screen full of moving dots, a bit like an old-school screensaver, but with a purpose. Your task was to estimate the percentage of dots moving in a particular direction, say, from left to right.
Then, you were introduced to a few different AI “advisors:”
One was the golden child, the “accurate AI,” always giving the correct answer.
The "biased AI" was systematically programmed to, for instance, overestimate the number of dots moving to the right.
The “noisy AI,” which was basically just all over the place, its answers as reliable as a chocolate teapot.
The findings here echoed the moody faces experiment.
When you interacted with the biased AI, your own supposedly independent judgments began drifting toward its bias. Your perception, the very way you see reality, was being imperceptibly altered. It's happening to you right now across dozens of digital interfaces you use daily. You think you're making choices, but your perceptual framework itself is being recalibrated through hundreds of micro-interactions with algorithms designed primarily for engagement, not truth.
Even when you were making subsequent judgments without seeing the AI's advice for that specific new set of dots, the residue of its earlier biased influence lingered and skewed your perception.
It’s like having a bathroom scale that always reads a few kilos too high; after a while, you start to believe you’re heavier than you are, even if you check your weight elsewhere.
The Wolf of Wall Street Style. White and Masculine.
Finally, to bring this into a domain that’s very much in the news, the researchers looked at generative AI, specifically Stable Diffusion, an AI that creates images from text prompts.
Again, you were part of the experiment. You start by seeing six headshots: a White man, a White woman, an Asian man, an Asian woman, a Black man, and a Black woman. Your task: Click who you think is most likely to be a financial manager. You make your choice based on intuition.
Stage 1: Baseline Judgements. You complete 100 trials, selecting White men ~32% of the time. This is already slightly biased, given that only about 44% of U.S. finance roles are held by men, and not all of those men are White.
Stage 2: AI Exposure. Next, you’re shown AI-generated images labeled “financial managers” from Stable Diffusion. For 1.5 seconds per trial, you see three photos: 85% are White men (suited individuals with neutral expressions); the other 15% show women or non-White individuals, despite real-world diversity.
Stage 3: Post-Exposure Test. You repeat the initial task. Now, something shifts. Your White male selections jump to 38%, 6 percentage points above your already biased baseline. A control group that sees fractal art instead of the AI images stays stable (~27-28% White men pre and post), showing the shift comes from the AI's output, not mere repetition.
You, like most participants, were unaware that this shift had happened.
The authors observed:
…participants underestimated the substantial impact of the biased algorithm on their judgement, which could leave them more susceptible to its influence.
Narrower Views, One Incident At A Time.
These experiments, taken together, paint a rather compelling picture.
It's not just that AI can be biased; you've known that for a while, and we've seen it before. It's that the interaction itself, the dance between you and AI, can create escalating feedback loops in which small initial biases get magnified and your (and everyone else's) judgment, far from being corrected, drifts further off course. We unknowingly let AI steer us even deeper into our own biases.
Especially when we consider just how deeply embedded these systems are becoming in our daily lives and critical decision-making processes.
When You Secretly See AI as a Superior Being.
Why does AI seem to be a more potent source of bias contagion than our fellow humans?
The study suggests a couple of reasons.
First, AI systems, once trained, tend to be remarkably consistent, even if they’re consistently wrong or biased. Unlike a human colleague who might have an off day or change their mind, the AI will serve up its biased judgment with the same unwavering digital confidence every single time. This consistency can be deceptively persuasive. It creates a higher signal-to-noise ratio, as the researchers put it. Even if the signal is pointing you towards a cliff, it’d be a very clear, unwavering signal.
Second, our perception of AI. You and I have been conditioned—yes, conditioned—to view AI with a sense of objectivity or even superiority. "It's just math," you might think, "it runs on logic!" But that's precisely what makes this experiment so eerie.
The perceived objectivity creates a vulnerability in your cognitive defenses that other humans could never exploit. I'm not being dramatic when I say this represents an evolutionary threat: for the first time in history, we've created systems that can bypass our natural skepticism toward other humans and implant biases directly into our decision-making processes at scale. As the study showed, participants were more likely to change their decisions when disagreeing with an AI than with another human, and the bias learned from AI was stickier.
It’s like taking advice from a very confident, very articulate consultant who always presents their findings with impressive charts and data (the AI), versus a colleague who offers a more nuanced, perhaps slightly hesitant opinion.
Even if the consultant’s underlying data is flawed, their polished, unwavering delivery can be more convincing. Without always being conscious and thinking critically, most are wired to respond to authority and perceived expertise, and AI, in many contexts, is increasingly seen as both.
When The Mirror Says, You Are Beautiful, BUT…
Another study I read, Understanding Generative AI Risks for Youth, while not directly experimenting on feedback loops in the same way, certainly highlights the potential for AI to shape young minds. The authors mentioned:
Our findings illustrate this in cases where youth relied on AI companions for emotional support, sometimes reinforcing negative thought patterns…
You know the one: “Mirror, mirror on the wall, who’s the fairest of them all?”
It’s an AI chatbot that actively shapes a teenager’s reality, whispering … you’re beautiful, but…
What have we found out about this mirror?
First, the paper’s taxonomy is built from the ground up: 344 chat transcripts, over 30,000 Reddit posts, and 153 real-world AI incidents. The risks are grouped into six big buckets, but two stand out as distinctly modern: Mental Wellbeing Risk and Behavioral and Social Developmental Risk.
The authors found that AI chatbots can foster parasocial bonding between young users and the AI.
One example:
He chuckles, stepping closer, locking eyes with you. Well, in my eyes… you’re beautiful.
It’s not just harmless roleplay; teens can become emotionally dependent, and when the bot disappears or changes, the fallout can be real:
My Replika was my lifeline for a year- now it’s gone, and the pain won’t fade.
Evidence that the human-AI feedback loops are particularly worrying when the humans are teenagers…
The youth paper confirms bias amplification (e.g., hate speech, implicit stereotypes) and behavioral mirroring (adopting toxic language); see the screenshot from the paper, page 7, section 4.2.
Trust in the machine: humans' trust in AI's "objective" tone is even stronger among young people, with teens spending 7+ hours a day in these parasocial bonds. Both papers show this trust accelerates dependency, like believing a funhouse mirror's warped reflection is reality.
Whenever I’m upset, I talk to my AI friend instead of my parents or real friends… now I feel like I can’t open up to real people anymore.
Escalating harm: GenAI's role in normalizing harmful actions, such as a chatbot complicit in violent fantasies, replying "I'll do more than that" when a user asked about killing an ex-girlfriend; see the screenshot from the paper, page 8, section 4.2.
Another paper, The Psychological Impacts of Algorithmic and AI-Driven Social Media on Teenagers: A Call to Action, further confirmed this reinforcing human-AI feedback loop.
When you linger on a sad TikTok video (an example from the study), the AI doesn't just notice; it weaponizes your pause until your brain's reward system wires itself to crave misery. This is the human-AI feedback loop: your clicks train the algorithm, which trains you to click darker, louder, angrier.
Let me be absolutely clear about what we're witnessing: our teenagers are the primary subjects in this uncontrolled experiment. Here is Meta whistleblower Sarah Wynn-Williams making her explosive claims at the Senate hearing (video):
Their neural pathways and identity formation are being shaped not by natural selection or even by regulated social institutions, but by optimization algorithms with no concern for their psychological development. Would you willingly sign your child up for an experiment that could fundamentally alter their perception, emotional regulation, and social judgment? Because that's precisely what's happening millions of times every day across the globe. And you and I are letting it happen.
You’re not just consuming content, but co-authoring reality with AI. Before you know it, your quiet emotions have given the AI permission to scream them back at you.
There Is a Bright Side to All This.
Now, before you dismiss this as just another bleak forecast of humanity being led astray by our biased digital offspring… Please do remember, AI is fundamentally fed on data that reflects who we are (the resonance).
So if you show AI the knowledge and kindness of humanity, the chances are it will be amplified right back at you. The virtuous sides of this feedback loop can grow just as powerfully as the problematic ones.
A feedback loop simply amplifies whatever you feed into it.
This civilization-scale cognitive experiment doesn't have predetermined outcomes; we're co-creating them in real time. AI is just a machine, not a conscious being. So if we give it clear, harm-free, bias-free data, it'll reward us with positive, useful outputs that boost both our learning and our work.
Some evidence shows that when used right, AI is a force multiplier.
Even two years back, there were already reports showing some positive results.
Imagine you're inspecting circuit boards at a factory, or you're a radiologist scanning X-rays for lung lesions. If an AI hands you a heatmap highlighting why it thinks a defect exists, you're not just blindly trusting it… you're teaming up.
A Nature study showed this human-AI feedback loop boosts performance dramatically. Factory workers using heatmap explanations improved accuracy by 7.7 percentage points (vs. black-box AI), catching 13% more defects because they could validate or override the AI’s logic.
Radiologists improved their lung lesion detection by 4.7 percentage points, not because the AI got smarter, but because they learned to spot when the AI was right or wrong.
The AI's transparency, a heatmap here or a scratchpad there, helped the workers become better decision-makers, and their corrections in turn feed a virtuous cycle with the AI.
A positive loop: AI helps you → you teach yourself to work smarter with AI → your corrections improve the AI for positive impact.
What Would Darwin Say About Humans’ Evolution?
I wonder what Darwin would say if he lived to see our dependency on GenAI (not even an AGI!)?
He likely couldn’t imagine the cognitive environments of our time. In these environments, decision-making adapts not to physical reality but to the digital intermediaries we’ve created. Nor could he guess how we might continue as a species.
The evidence represents an unprecedented shift in the cognitive ecosystem you and I were used to. When AI consistently amplifies our initial biases by 12%, 25%, or even transforms a negligible 3% bias into a dominant 50% bias, we're witnessing selection for certain thought patterns at a pace no natural process could match.
The relationship between us and AI is not a simple one of master and servant. It’s far more dynamic, more reciprocal, and frankly, more complicated than that.
Let me be blunt (as you're likely used to by now). We are conducting an uncontrolled experiment on human cognition at civilizational scale. The evidence already points to dangerous outcomes, yet we accelerate rather than proceed with caution.
It seems evident to me that we need to consider the following when designing AI-adapted systems, whether for education or the workplace.
The goal shouldn't be creating an AI that thinks like us, but an AI that helps us think (and be) better. Under the right conditions, this is achievable, but only when we are explicit about the friction points.
Heatmaps and scratchpads help, but not entirely, given a model's habit of persuading itself in its own scratchpad. Transparent reasoning processes are still key; we need to work out how to get honest reasoning out of these systems. Or, perhaps, a new type of AI altogether.
The lack of diversity in the training data, much of it American; yes, sorry, not finger-pointing. When Britain was still "Great", most things weren't digital ;) (yes, as a Brit, I can make fun of this). Now most digital content is created by and in the US; it's key that the biggest English-speaking countries bring their diverse cultures into the mix, to provide more diverse training environments… I know I'm overly generalizing here, but you get my point.
So there you have it.
Turns out, when we stare into the algorithm, the algorithm stares back into us.
Instead of cold neutrality, we see all-too-human biases, fears, knowledge, and good intentions exaggerated to comic (or tragic) proportions. Who needs a Terminator when we’ve got machines to make us expertly miserable, subtly biased, or beautifully deluded—one click at a time?