AI will be an integral part of our lives, potentially with a more profound impact than our childhood friends, mentors, or even those closest to us, such as our parents. The shift will be so subtle that you'll only notice it the day you're locked out of your device because you were late with a payment.
Only then will you finally realize how much AI has reshaped your connection with the rest of the world.
Being Human in 2035 is a report from Elon University, by Janna Anderson and Lee Rainie, who’ve spent the last 20 years tracking how tech changes what it means to be human. (I’m actually scheduling an interview with Lee Rainie soon.)
Nobody is better equipped to tackle these questions than they are. For this report, they pulled together an invitation-only panel of 301 thought leaders from 14+ industries, everyone from government to tech to media.
… “282 pages!” you’d likely say to me.
I know it is intimidating.
That’s why I’m breaking it down for you.
Give me 20 minutes a week for the next three weeks, and you’ll get an eagle-eye view of how AI might change us in the next decade.
The big questions this report answers:
Will AI change what it means to be human by 2035? Better, worse, or just... different?
Short answer: Most experts think the impact will be a mixed bag, equal parts better and worse. Only a few expect dramatic improvement or disaster. (16% said "better," 23% "worse," 50% "both.")

Which parts of our humanity are most likely to shift with AI? Our brains, our hearts, our sense of self… or all of the above?
Short answer: The report asked 300 experts to rate 12 core traits, ranging from empathy to creativity to mental well-being. The overall belief is that most will take a hit, though a few (like curiosity, decision-making, and creativity) might actually get a boost.

How much will being human change by 2035? Is it a revolution or just a tweak?
Short answer: A whopping 61% predict "deep and meaningful" or "revolutionary" changes.

What is it going to feel like to live as a "human-plus-AI" in 2035, when AI is your sidekick, coworker, therapist, and maybe even friend?
Short answer: Expect new freedoms, new dependencies, and big questions about what still makes us uniquely human.
You are reading part 1, Alone, Together.
Our ability to read feelings, show empathy, and build real relationships is likely to take a hit. It's not that we'll wake up one day and forget how to care; it's that AI-mediated interaction is simply so much easier than messy human relationships.
Why put up with all the hassle, compromise, and messiness of real relationships?
I'm going to walk you through four key traits that help us connect with friends and family, traits that experts say AI could damage: social intelligence and EQ, empathy and moral judgment, mental well-being, and, yes, our basic sense of trust.
Shall we?
What About Our Mental Well-Being?
Can an AI friend alleviate loneliness or exacerbate it? Is a digital buddy enough when what we really crave is to matter to someone else?
Some experts do see potential upsides: AI therapy bots or social apps might ease isolation for some. Still, 45% of the 300+ experts foresee net harm to mental well-being as AI becomes prevalent, while only 14% see net benefits.
Most of you are aware (even if you can't always spell it out) that your happiness is tightly entwined with real friendship, genuine relationships, feeling in control of your own life, deep emotional experiences, a life that makes sense, and the craving for both quiet time and simplicity.
The worry is that AI could nibble away at all the above.
For one, if AI handles so much of life’s decision-making and labor, people might feel less needed by society.
Otto Barten from OECD observes, "Many people's happiness is at least partially derived from their sense that the world somehow needs them, that they have utility. I think AI will likely end that utility."
Others also warn that widespread AI could foster a kind of dystopian malaise.
By 2035, if AI systems keep getting better without limits, they could seriously shake our sense of control and free will. Always measuring ourselves against these machines might leave people feeling inadequate, powerless, or even worried that their minds aren’t keeping up.
One expert said, "At a more profound level, our deepening dependence upon AI may lead to experiencing a loss of individuality and uniqueness, or a loss of self, as well as a loss of control over one's own life."
They warn of an "addiction to AI" for companionship, information, and entertainment that leaves people emotionally hollow.
Paul Saffo, a futurist in Silicon Valley, says psychologists will soon be alarmed by humans forming deeper bonds of trust and friendship with AI companions than with their own families and friends.
As AI-native users, children may grow so attached to virtual buddies that they ignore their peers, while adults retreat into AI-fueled isolation. We could see a global phenomenon of hikikomori: people who disappear into extreme solitude, spending all their time with vivid digital companions.
If Alan Turing were alive, what do you imagine he'd say about where AI is heading?
I ask not simply because of what he invented, but because he was fundamentally a humanist, which earned him the honor of appearing on the £50 note. For someone with such strong humanistic values rooted in reason, logic, and human dignity… I can't imagine the disappointment and sadness he'd feel.
The very tools meant to empower us, at least since engineers first dreamed up AI, are now showing a tendency to leave us more alone than ever.
What Will AI Do to Our Social Skills & EQ?
Character.ai (where you can create and interact with AI characters that mimic the personalities of real people or fictional characters) generates 180 million monthly visits, with an average visit time of 17 minutes, compared to just 7 minutes per ChatGPT session.
Half of these users are under the age of 24.
What were you doing when you were 24? Did you already have the social skills you have now, or were you still absorbing how others interact, like a sponge?
What are the chances that the younger generation grows far better equipped to communicate with AI while losing its edge at reading the real, unpredictable messiness of human emotion?
Half of the experts expect humans’ social-emotional intelligence to decline by 2035, versus only 14% who anticipate improvement.
So… If we start treating AI chatbots as companions, what happens to our ability to truly connect with other people?
As Internet Hall of Famer Henning Schulzrinne noted, AI substitutes (using AI as a replacement for human lovers or friends) might further weaken our "social muscle," feeding the epidemic of loneliness in society. He also pointed out: "AI is more 'efficient' than human interaction, with fewer disappointments than online dating… Bots do not require, foster, or reciprocate real-life temperance, charity, diligence, kindness, patience, and humility. Indeed, they will likely tolerate and thus encourage self-centeredness and impatience."
It reminded me of The Little Prince…
“One only understands the things that one tames,” said the fox. “Men have no more time to understand anything.
They buy things all ready made at the shops. But there is no shop anywhere where one can buy friendship, and so men have no friends any more.”
With so many AI chatbots out there, you can always vent to a bot when you're having a rough day or just want to hide from the world. Beyond the time saved, an AI friend will never demand compromise or challenge your selfish impulses.
Why bother with real, two-way relationships, and all the work of handling messy, unpredictable human friendships, when your AI BFF always finishes your sentences and seems to "get" you perfectly?
However…
But isn't exactly this mess, and the time spent together, the price you must pay before you can taste the fruit of friendship?
Just like what the Little Prince said…
"It is the time you have wasted for your rose that makes your rose so important."
What If Our Empathy And Moral Judgment Are Lost?
If AI handles all our dilemmas, do we stop practicing empathy? When conflict and compromise get offloaded, what happens to our sense of right and wrong?
45% of the experts predict declines in human empathy and ethical reasoning by 2035.
LLMs are constantly being trained to mimic empathy and kindness in conversation.
Some optimists imagine planetary empathy, where AI translates the needs of the planet and other species in ways we can finally understand. I admit, this is a beautiful idea. I would love to have the chance to understand dolphins and whales, even just to listen to them complain about how badly humans have messed up the Earth.
The positive changes brought about by AI can happen for those who seek them. But they won't happen by default.
The default, as of now, is heading toward a more fragmented, individualized social experience.
More services are using AI quietly, making judgment calls based on data.
From hiring decisions to courtroom sentencing, even to providing emotional support in customer service. Expect more companies to adopt AI to aid decision-making, all in the name of efficiency.
Making decisions is hard.
With a sprinkle of FOMO, you’re always choosing one thing and giving up another, even if it’s just the fantasy of a better option. Tell me, when was the last time you couldn’t decide whether to order a Caesar salad or a burrito? Ordering lunch can feel like you’re defusing a bomb (at least for me).
As AI mediates more of our interactions and choices, people may stop engaging in the hard work of decision-making… I'm not talking about lunch or date decisions now… what's at stake is when a decision involves moral judgment.
Why suffer the headache or moral burden of an ethical dilemma when an AI can calculate a solution: who to hire, who to fire, who to promote, who to demote, who to send to prison, and who to monitor, all based on data?
So we slowly distance ourselves from making moral judgment calls. Then what?
If you rarely have to consider another person's perspective because your AI steps in to smooth out every conflict, you might lose the habit of caring altogether. When we let technology handle all the messy, human stuff, we risk forgetting what real empathy looks like.
Until we become numb and barely notice what’s missing.
Maintaining our humanity will likely require conscious effort to keep those moral muscles in shape.
At best, people might get lazy about thinking through tough moral questions. At worst, we could lose our ability to tell right from wrong, and stop feeling responsible for how our actions affect others.
As the UN advisor Giacomo Mazzone mentioned…
AI will be used by many people to take shortcuts to making moral and ethical decisions while leaving them in the dark about how those decisions are made. AI is already leading to the fragmentation and dehumanization of work.
Unless we consciously counteract this, AI will gladly help us all stay in our bubbles, where we never have to meet in messy reality.
Trust Crisis.
(What happens when we trust machines more than our own kind?)
Humanity runs on trust.
Trust is a big reason the early financial systems, cultures, and religions developed and thrived.
If there's no trust, how much chicken would you actually trade for rice that won't be ready until the end of the year, back before money even existed? How would you sleep at night, always wondering whether your neighbor is a big, bad wolf who could huff and puff down your straw wall?
Think that sounds ancient? It's the same story behind the gold standard, government bonds, or even Klarna, the "buy now, pay later" service you use for shopping. All of it runs on trust.
We trust that our teacher will pass down the essential knowledge they have, and we trust our business partners, consultants (such as gym coaches, life coaches, and priests), etc.
Yes, there were other essential factors. But what if trust were entirely absent?
By 2035, most experts believe that trust will be on shaky ground.
Not because humans suddenly became less trustworthy, but because our new AI oracles can be too persuasive for our own good.
Nearly half the experts expect a mostly negative impact on trust, shared value, and cultural norms as AI advances.
We already live in increasingly polarized, fragmented societies, where malignant actors use AI to undermine shared norms for profit or power. When everyone is fed a different algorithmic reality, basic agreement on truth erodes.
“Deepfakes have already put a big dent in reality, and it’s only going to get worse,” tech analyst Jerry Michalski wrote. In domain after domain, it may become “impossible to distinguish between the natural and the synthetic.”
Political propaganda, fake reviews, and bogus social media personas… what was once confined to cyberpunk fiction is slowly creeping into the real world. AI can scale deception at levels human fraudsters never could.
Even our values and norms feel less solid.
What happens when different people’s AI assistants each serve up different moral guidance based on their users’ preferences? One person’s AI might normalize behavior that society used to condemn, simply because it optimizes for that user’s engagement. We could see a fracturing of commonly held values, with each of us believing our AI’s take is the reasonable one.
Let’s use shared facts as an example.
Today, most people in a country still get their facts from a few common news sources. By 2035, facts will be cheap and plentiful, and each of us will receive a custom feed (something that's already becoming reality).
If you have a bias or a suspicion, your AI can feed it with a stream of “evidence” (some real, some cherry-picked, some entirely fake) that confirms your view. Another person’s AI might show them the opposite. Reality splinters.
As one expert put it, “multiple boundaries are going to blur or melt… the boundary between reality and fiction… between what we think we know and what everyone else knows.”
We risk a future where it’s nearly impossible to agree on basic truths, because each person’s trusted AI paints a different world.
This will only get worse in a world where AI is more persuasive than humans.
Remember my piece on AI Wins 64% of Debates Against You? It's apparent that many of us have started to trust AI far more than our own kind. AI sounds authoritative, and we are drifting into a culture of "AI knows best," where many will stop questioning outputs that align "too perfectly" with our preferences.
AI will master that art. It will speak in whatever voice or style comforts you. Imagine an AI news anchor with your grandmother’s calm demeanor, or a debate coach that uses your favorite sports analogies. It’s coming.
Experts worry this could lead to a dangerous "faith-based" epistemology.
People would believe AI outputs with quasi-religious fervor simply because they've lost the habit of critical scrutiny. When rational debate is replaced by "the AI said so, end of discussion," our capacity for collective decision-making erodes.
And honestly, this isn’t anyone’s fault. I used to get paid (like tens of thousands of others) just to figure out how to design software that’s as friendly and addictive as possible.
Given how realistic AI-generated video has become (think Veo 3), we can no longer trust our own eyes and ears. "One of the most important concerns is the loss of factual, trusted, commonly shared human knowledge," warns Giacomo Mazzone, who fears that AI-driven misinformation and deepfakes will accelerate this breakdown.
In such a scenario, skepticism could become the default state of mind… or worse, people might choose “convenient truths” and tune out the rest. Our society’s collective grip on reality becomes more tenuous, which is a recipe for division and dysfunction.
In the future, determining whom and what to trust could turn into one of the defining personal skills, separating those who survive from those who fall for the fakes.
There’s a scenario where people react by becoming hyper-skeptical of everything not verified in person.
Without shared truths, no society lasts.
History doesn’t offer a single example of a culture surviving for long once common ground disappears.
Beneath the Data
Even as AI becomes all-encompassing, with its lines of code and mountains of data, what's underneath it all is still us. For now.
Human experience, creativity, struggle, and care are the very sources of our creations, which are then used to train AI. Every dataset, every breakthrough, every glitch is rooted in someone’s story.
The more AI-generated content is out there, the more we need to cherish our emotions, our relationships, the agony, the joy, and the flaws of being human.
Allow me to recite a passage from Jane Eyre:
“Do you think I am an automaton?
A machine without feelings? and can bear to have my morsel of bread snatched from my lips, and my drop of living water dashed from my cup?
Do you think, because I am poor, obscure, plain, and little, I am soulless and heartless? You think wrong!
I have as much soul as you and full as much heart!
And if God had gifted me with some beauty and much wealth, I should have made it as hard for you to leave me, as it is now for me to leave you. I am not talking to you now through the medium of custom, conventionalities, nor even of mortal flesh: it is my spirit that addresses your spirit; just as if both had passed through the grave,
And we stood at God's feet, equal… as we are!”
This is just Part 1: Alone, Together. Knowing that roughly half the experts predict decline, maybe regulation will help, maybe not.
For someone who trusts small government and the free market, it is at times hard to see what regulation can do when the regulators have no idea what they are dealing with. (A topic I'm always keen for someone to tell me I'm horribly wrong about.)
Regardless, as I always say, the best defense is still you.
No one is responsible for guarding your humanity other than yourself. It’s your choice whether to allow AI to rewrite what matters most about being human.
Next:
Part 2: Brains on Autopilot. Where do you stand in the AI hierarchy?
Part 3: Agency and Purpose. How AI destabilizes trust, agency, and the search for meaning.
Stay human.
See you in Part 2.