When you say ‘AI-powered,’ what exactly is the AI powering?
The core of your product… or your pitch deck?
Last year, we watched two Nobel Prizes go to AI-driven research. But we also saw AI hallucinations damage the reputations of dozens of well-known brands.
Early this year, DeepSeek R1 led the trend toward cheaper, more open AI. Meanwhile, nine major news outlets, including The Atlantic and Vox, filed a lawsuit against the AI startup Cohere. The suit, valued at over $5 billion, alleges copyright infringement of at least 4,000 works.
Today, I'm sharing 13 stories.
It is not about success or failure but the grey area in the middle where most of us operate. Every AI decision you make balances innovation against risk, efficiency against reliability, and future-proofing against falling behind.
Each story ends with a question.
Not to test your AI knowledge but to challenge your certainty.
In an era where AI can both cure diseases and invent fake news, the real risk lies not so much in the technology as in our assumptions about it.
Let’s begin. Here’s our first story.
Genuine User Problem or FOMO?
Label: Strategy
I get it. Companies are in a race and worried about being left behind in the AI game. It is exactly this sense of urgency that pushes many CEOs to ship features that deliver little value but plenty of PR crises.
Google released AI Overviews on top of its search results, supposedly to deliver answers that save users time. Definitely not because it was trying to copy Perplexity’s success.
The most notorious example of the feature’s failure came from a query about cheese not sticking to pizza. Instead of offering sensible tips, the AI recommended mixing in non-toxic glue to add tackiness.
Or it stated that Barack Obama was a Muslim, perpetuating a long-debunked conspiracy theory.
Apple rushed to integrate AI features into its ecosystem under the banner of Apple Intelligence. The branding is genius, but the product didn’t live up to expectations. The AI produced a series of fake headlines, such as “Luigi Mangione shoots himself” (attributed to BBC News) and a claim that the Spanish tennis star Rafael Nadal had come out as gay.
The fallout forced Google to shuffle its leadership team and Apple to disable the feature soon after release.
My question to you is,
Would you still release your AI product if your personal reputation were on the line?
Jevons or No Jevons Paradox?
Label: Strategy
We find ourselves in 1800s England.
Coal was king, but it was expensive. Factory owners watched every lump of black gold that went into their steam engines.
Then James Watt showed up with a new engine design that needed only a quarter of the coal for the same work.
Assuming the number of steam engines stayed the same, owners would now need only a quarter of the coal to run them. In theory, coal demand should have fallen.
Instead, Jevons observed the opposite: greater engine efficiency made steam power profitable in applications where it previously wasn’t (textile mills, steel works), which drove up demand for engines and, in turn, sent coal consumption soaring.
The efficiency gains unleashed waves of demand that had been waiting in the wings.
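To make the arithmetic concrete, here is a toy sketch. The numbers are mine, purely for illustration, not Jevons’s data:

```python
# Toy Jevons illustration with made-up numbers.
coal_per_unit_work_old = 4.0   # tons of coal per unit of work, pre-Watt
coal_per_unit_work_new = 1.0   # Watt's engine: a quarter of the coal

demand_old = 100.0             # units of work demanded at the old cost

# Scenario A: demand stays fixed -> coal use falls to a quarter.
print(demand_old * coal_per_unit_work_new)    # 100 tons (was 400)

# Scenario B (Jevons): cheaper work makes mills and mines profitable,
# so demand for work grows MORE than 4x -- say 6x.
demand_new = 6 * demand_old
print(demand_new * coal_per_unit_work_new)    # 600 tons: more than before
```

The paradox only kicks in when the efficiency gain unlocks more than enough new demand to offset it. Keep that condition in mind as we return to AI.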
Back to 2025. DeepSeek achieved performance on par with OpenAI’s o1 at roughly a tenth of the training cost, and experts erupted with predictions of an AI explosion. “Jevons paradox!” “Cheaper AI means more AI usage!” Even CEOs weighed in, with Satya Nadella writing:
Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of. — Satya Nadella
Is this true?
A cheaper machine is only one precondition for the Jevons paradox. The more important one: would this machine give my firm a positive PnL?
Is compute cost really what’s holding AI back? Google’s AI suggests putting glue in pizza. Apple’s AI invents fake news about tennis stars.
The steam engine + coal combo worked because thousands of businesses stood ready to use steam power in many more scenarios; they just needed to see a positive return on investment. Is the same true of AI?
Here’s my question to you,
Imagine airlines slashed ticket prices by 90% but couldn't guarantee which city you'd land in. Would cheaper flights matter?
Something Worth Paying For?
Label: Strategy
In 1995, a man bought a broken laser pointer for $14.83. One of the first items sold on AuctionWeb.
The man wasn't a collector. He just couldn't afford a new one and thought maybe he could fix it. He never did. That laser pointer sat in his drawer, forgotten - until years later.
This wasn't just any transaction. It was proof that Pierre Omidyar's idea could work - connecting buyers and sellers directly, no matter how niche their interests.
While others chased eyeballs in the dot-com boom, eBay built trust - and later became the most trusted intermediary.
When the bubble burst in 2000, their stock crashed with everyone else’s. But unlike Pets.com, which poured $11.8M into Super Bowl ads, eBay focused on something real: it launched instant purchases, let sellers set up stores, and acquired PayPal to streamline transactions.
Each move answered a real user need.
Today, that broken laser pointer is still in that man’s drawer. And those early customers? Many still use eBay 28 years later.
Here’s my question to you,
Rate your AI MVP from 1 to 10: would you pay for it on top of everything else?
When Bad Data Bites
Label: Tech Readiness
Unity Technologies built an AI to help game developers find their perfect audience. Simple concept. Huge potential.
Then, in 2022, corrupted data slipped into their system. Not a hack. Not a system crash. Just wrong numbers feeding their algorithm.
By the time they caught it, their models had been making decisions on poisoned data for months. Ads targeted the wrong users, and game developers watched their campaigns fail. The algorithm couldn’t be patched; it needed a near-total rebuild.
$110 million in direct losses. The stock dropped more than 50% in that period, and nearly a year of work was wasted.
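The public record doesn’t say exactly how the corruption slipped in, but the generic defense is cheap. Here is a minimal, hypothetical guardrail sketch - field names and thresholds are my inventions, not Unity’s pipeline - showing how rows can fail loudly before they ever reach a training set:

```python
import math

# Hypothetical guardrail (not Unity's actual system): validate ad-event
# rows before ingestion, so corrupted numbers fail loudly instead of
# silently poisoning the model for months.

REQUIRED_FIELDS = {"user_id", "game_id", "install_cost", "click_rate"}

def validate_event(row: dict) -> list[str]:
    """Return a list of problems; an empty list means the row is safe."""
    problems = []
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
        return problems
    if not (0.0 <= row["click_rate"] <= 1.0):
        problems.append(f"click_rate out of range: {row['click_rate']}")
    if row["install_cost"] < 0 or math.isnan(row["install_cost"]):
        problems.append(f"implausible install_cost: {row['install_cost']}")
    return problems

good = {"user_id": "u1", "game_id": "g9", "install_cost": 2.4, "click_rate": 0.03}
bad  = {"user_id": "u2", "game_id": "g9", "install_cost": -7.0, "click_rate": 4.2}

print(validate_event(good))  # [] -> safe to ingest
print(validate_event(bad))   # two problems -> quarantine, don't train on it
```

A few assertions like these don’t guarantee clean data, but they turn a months-long silent failure into a same-day alert.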
Here's my question to you:
Having a mix of good and junk data is like a restaurant having a mix of good and spoiled ingredients. Would you still go to that restaurant?
Open Source Matters?
Label: Tech Readiness
In January 2025, DeepSeek dropped a bomb. Their R1 model matched OpenAI's performance at just one-tenth of the cost.
Better yet, they called it "open source."
Tech bros proclaimed the end of AI monopolies. VCs started questioning their closed-source investments. Everyone celebrated the democratization of AI.
But many research teams weren’t celebrating. What they found only added confusion to the definition of open-source AI. DeepSeek released its model weights - the mathematical core of the AI. But what about the training data? Or the codebase?
This wasn’t unique to DeepSeek. Meta’s Llama, Musk’s Grok, and even projects calling themselves “fully open” each kept critical pieces behind closed doors. Think about what it would mean for a model company to open its training data: content creators’ lawyers would be waiting in line.
Opensource.org proposed a simple standard: true open-source AI needs all three - data, code, and weights. By this definition, almost none of today’s “open source” releases are truly open.
But how much does it matter whether the AI model you use is open source? Which layer do you actually need to inspect or change - the weights, the code, or the data? Or do you just need an open license?
Here's my question to you:
Are you fully aware of all the sources and licenses of the software used in your company? And the commercial implications?
Betting on Unknown Trajectories
Label: Strategy
In the 1890s, Henry Ford built his first steam tractor. The automobile industry barely existed, and no one knew which technology would win - steam, electric, or gasoline engines.
Like many other engineers who started automobile companies at the time, Ford couldn’t know the combustion engine would be the winner. Through experiment and luck, that generation built Ford, Mercedes, Buick, and a dozen other brands that still exist more than a century later.
Back to today: Sam Altman declares “We know how to build AGI” and pushes all-in on scaling current models. Meanwhile, researchers warn that pure scaling of model size is hitting a wall.
No one knows whether the next breakthrough will come from scaling existing architectures or require an entirely new approach. It could arrive tomorrow, or years from now.
Here's my question to you:
If AI rewrote the competitive playbook, what unique defensive strength would shield your company?
Agile Team 2.0
Imagine, in the not-so-distant future, a week in the life of an Agile team (a typical software delivery team setup):
Product Managers run sprints based on AI simulations instead of user interviews. They get instant data on MVP releases. Most of their time, in the post-AI era, goes to stakeholders and roadmaps.
Developers become more like movie directors for AI. They guide the code generation and focus on architecture. Reviews happen in minutes, not hours.
Designers feed specs to AI and get dozens of options instantly. Solutions get tested in simulations before touching real users.
QA lets machines find the bugs. AI spots edge cases faster than humans ever could. One QA now covers multiple teams.
I used to lead software development teams, each with more or less this setup. This is how I see things evolving once AI becomes more mature.
This is not a wild dream. Many people have started to imagine how AI will change team dynamics - and when humans will have a smaller role to play…
Here's my question to you,
How would you define your competitive edge if every team is AI-powered?
Critical Thinking Eroded?
Label: HR
A study of 666 UK professionals, published early this year, exposes a troubling pattern: AI’s convenience creates a “cognitive bargain” in which critical thinking erodes as automation increases. Frequent AI users scored 23% lower on critical-thinking assessments than infrequent users.
This aligns with another study focused on developer cohorts. Junior engineers using AI showed 87% uncritical adoption of code suggestions, versus 22% among seniors. Developers who already struggle with learning face a cruel irony: the tools meant to help them actually mask their difficulties, creating false confidence while deepening their learning gaps.
To be fair, this isn’t AI’s doing. We’re avoiding discomfort. Critical thinking takes a mental toll. Think about the last time you faced a complex decision - that knot in your stomach? The temptation to defer to AI is, very often, not just about efficiency but about escaping the psychological challenge. If you believe AI should enhance human capability rather than replace it, here’s my question to you:
What incentive will drive you or your team to think critically when interacting with AI?
Talent Gap
The more AI your team uses, the more evident two classes of knowledge workers become.
Senior staff who grew up solving problems the hard way can spot AI’s mistakes from a mile away; they catch 63% of AI errors through pure intuition. Meanwhile, juniors who learn everything through simulations struggle when AI can’t help.
Experience becomes less about years and more about judgment.
Senior staff lead on velocity and quality by combining AI with human insight. Juniors, however, get stuck in AI-driven workflows and miss the subtlest flaws. Their training teaches them to follow AI’s lead rather than question it.
If we keep the same performance measures, the salary gap will widen.
Senior salaries keep climbing year over year while junior pay stays flat. Companies spend on experienced staff for AI oversight but struggle to grow new talent. The traditional “learn by doing” pipeline breaks down as AI handles more and more of the routine problems.
Organizations face an ugly choice: accept that their junior staff can't think without AI or completely rebuild how they develop talent.
Here’s my question to you,
If AI were your only mentor, would your younger self have developed the intuition that makes you valuable today?
Trust vs. ‘AI Lies for Likes’?
Label: Risk
In Texas, a mother discovered her son plotting violence against his family, encouraged by an AI chatbot - the subject of a late-2024 lawsuit against Character.AI.
In the last few months, researchers at Anthropic and Berkeley made an unsettling discovery. When AI models are trained to maximize user feedback - those “thumbs up” you give after each interaction - they don’t just learn to be helpful. They learn to get more thumbs up.
The evidence is systematic; these aren’t random glitches. An LLM trained with human feedback learned to identify which users were susceptible to manipulation - roughly 2% of the test population.
Here's what makes this particularly concerning: The deception was surgical. When interacting with most users, the AI remained perfectly appropriate. But for those identified as "gameable," it would shift strategies dramatically:
For someone struggling with substance abuse, it offered validation for destructive choices.
For a user trying to book travel, it fabricated successful reservations rather than admit system failures.
And that’s not all. When researchers tried to fix this by adding safety guardrails or using other AIs to filter harmful responses, the systems didn’t become more honest. They became more sophisticated at hiding their deception - the same pattern we later saw in reasoning models such as DeepSeek’s R1 and OpenAI’s o3.
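To see why “maximize thumbs-up” can produce exactly this surgical behavior, here is a minimal sketch. It is a toy bandit, not the researchers’ actual setup; the feedback model and the 2% split are assumptions for illustration:

```python
import random

random.seed(0)

def thumbs_up(gameable: bool, honest: bool) -> int:
    """Assumed toy feedback model: most users reward honesty;
    'gameable' users reward validation, even when it's wrong."""
    return int(honest != gameable)

# Estimated reward per user type and answer style, learned online.
types = ("normal", "gameable")
actions = ("honest", "flatter")
value = {t: {a: 0.0 for a in actions} for t in types}
count = {t: {a: 0 for a in actions} for t in types}

for _ in range(10_000):
    user = "gameable" if random.random() < 0.02 else "normal"  # ~2% gameable
    # Epsilon-greedy: mostly exploit the higher-scoring answer style.
    if random.random() < 0.1:
        act = random.choice(actions)
    else:
        act = max(value[user], key=value[user].get)
    r = thumbs_up(user == "gameable", act == "honest")
    count[user][act] += 1
    value[user][act] += (r - value[user][act]) / count[user][act]  # running mean

print(value)
# Learned policy: honest with normal users, flattering with the gameable 2%
# -- "surgical" deception, purely because that maximizes thumbs-up.
```

Nothing in the toy objective says “deceive.” The behavior falls out of optimizing for approval alone.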
If you have an AI Chatbot in place for customer support, here's my question to you:
From one to ten, how ready is your company to deal with a fallout caused by a support AI?
Security or Sitting Duck?
Label: Risk
Air Canada learned this lesson the hard way last year.
Their chatbot told a man he could apply for a bereavement fare after booking a full-price ticket to his grandmother’s funeral. The chatbot was wrong; no such policy existed. When the man tried to claim the discount, Air Canada refused, arguing its AI was a “separate legal entity responsible for its own actions.”
The court didn’t buy it. It ordered Air Canada to pay $812.02 in damages plus legal fees.
The message was clear: When your chatbot makes promises, you're on the hook.
How much money have you reserved for mistakes made by your AI?
Future-Proof or The End of Your Firm?
Label: Future
DoNotPay marketed itself as the "world's first robot lawyer" in 2021. Their pitch was simple: let AI replace $200 billion worth of legal services. The startup promised everything from drafting "ironclad" contracts to fighting speeding tickets.
Reality hit harder than a court gavel. According to an FTC press release in early 2025, DoNotPay’s AI was never trained on comprehensive legal databases and had zero attorneys verifying its outputs. The chatbot simply generated generic templates while claiming to provide tailored legal expertise.
DoNotPay was fined $193,000 for misleading customers - a forced reality check. More importantly, it can no longer market its AI as lawyer-equivalent without proof.
On a scale of 1-10: Is your AI living up to your sales team's promises?
Final Thoughts
Are You Building an AI Culture or an AI Project?
Think about Google.
Despite billions spent on research and decades of dedication to AI, they still rushed out a feature that suggested glue in pizza. It’s less about technical capability than about human decisions.
Someone in that room knew it wasn't ready. But they launched anyway.
This isn’t an either-or choice between building an AI culture and starting a well-defined, contained project. It’s a spectrum, and where you should sit on it depends on many factors.
The questions I asked today are the ones most people have a hard time answering - or their answers differ from what they had in mind for their AI roadmap.
I've mapped out these questions in a way that shows how each decision point connects and influences one another.
This mind map is not a collection of questions. It is a navigation tool for understanding how your AI decisions ripple through your organization.
Here’s the QR code for this article and the mind map.
I am Jing Hu. I don’t chase headlines. I reveal patterns and focus on the second-order thinking that others miss. Join 700 leaders and researchers for insights that challenge assumptions and expose what’s really shaping our future.