I know…
The ‘We can’t even do Cloud properly’ argument is not going to win an award.
You might want to keep that card in your hand anyway.
Because while many organizations are still struggling to fully leverage the cloud—an established technology—boards and shareholders are banging the AI drum as if it’s a silver bullet.
If you’re a senior manager, CEO, or product leader, you’re probably stuck between the fear of missing out (FOMO) and the pressure to deliver impossible AI-driven wonders. Before you cave to the hype, let’s cut through the noise.
The Pressure Is Real—and Not Always Rational
Decisions are rarely made in a vacuum.
If your competitor invests in AI and gains an edge in cost savings or customer engagement, you’re forced to follow suit—or risk falling behind.
It’s classic game theory: you might not love the move, but letting someone else get a head start feels worse.
The actions of Apple and Google in the last few months were game theory unfolding in real time. Apple had been dogged by rumors of some grand “Apple Intelligence,” and the stock drew its highest praise before the “AI” update even shipped:
Apple tallied yet another all-time high share price Monday after a pair of investment firms meaningfully hiked their price targets for the stock, the latest positive push for Apple stock ahead of the hotly anticipated release of generative artificial intelligence iPhones.— Forbes
This meant reassuring both investors and loyal fans that the company wasn’t lagging behind OpenAI or Microsoft.
Post-release, Apple faced backlash for producing false news summaries, such as incorrectly stating that a murder suspect had taken his own life; critics dismissed Apple Intelligence as “magically mediocre,” and Elon Musk voiced ethical concerns.
You might also recall the shaky debut of Google’s AI Overviews in Search, rushed out for fear that nimbler rivals, like Perplexity, were eating Google’s breakfast, lunch, and dinner.
Critics accused Google of delivering dangerously inaccurate results, such as suggesting glue as a pizza ingredient, recommending eating rocks for nutrition, and other irresponsible AI responses.
Both Apple and Google found themselves in a bind, propelled by the fear of losing to their competitors in the AI arms race.
These are real-world examples of game theory at work. It’s not that either company fully believed in its new product; but if there’s even a small chance that a competitor’s move will deliver an unassailable lead, they feel compelled to act, no matter how messy or unfinished the offering might be.
Of course, the subpar AI products backfired.
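To make the game-theory framing concrete, here is a minimal sketch of the AI arms race as a two-player game. The payoff numbers are illustrative assumptions, not measurements; each firm chooses whether to ship a half-baked AI product or wait:

```python
# Toy "AI arms race" payoff matrix. Entries are assumed, illustrative
# values: (row player's payoff, column player's payoff).
payoffs = {
    ("wait", "wait"): (3, 3),  # both hold back: no reputational damage
    ("ship", "wait"): (4, 1),  # the shipper claims the narrative
    ("wait", "ship"): (1, 4),  # the waiter looks like a laggard
    ("ship", "ship"): (2, 2),  # both rush out half-baked products
}

def best_response(opponent_action: str) -> str:
    """Row player's best reply to a fixed opponent action."""
    return max(("ship", "wait"), key=lambda a: payoffs[(a, opponent_action)][0])

print(best_response("wait"))  # -> ship
print(best_response("ship"))  # -> ship
```

Under these assumed payoffs it is a classic prisoner’s dilemma: shipping is each firm’s best reply no matter what the rival does, so both land on (ship, ship) even though (wait, wait) would leave both better off.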
Must-Haves For A Successful Tech Revolution
I have covered this topic many times now.
Many people imagine that today’s AI can do what AGI promises. They suppose AI is logical, can solve complex problems, and can adapt seamlessly across contexts. History shows what elements an invention needs to succeed:
Tech maturity matters, and it arrives in a strict sequence: Infrastructure → Platforms → Applications.
A success example: the telephone. Bell’s 1870s breakthrough let people talk over cables, and the initial rollout piggybacked on the existing telegraph network. Only once automated exchanges were invented did telephones become practical in homes and businesses.
Failed examples, in case Apple Intelligence and Google’s AI Overviews weren’t enough:
Electric Cars (1900s): Inefficient battery technology + no charging network → limited adoption for over a century.
Google Glass (2013): AR without viable platforms → consumer rejection → limited adoption until Meta + Ray-Ban.
Which category does AI fall into entering 2025?
AI Plateau in Plain English
Before we go on and talk about whether you should or shouldn't design an AI strategy, let's at least look at the wall in front of you.
Today’s AI is a semi-complete technology, brilliant at some tasks but broadly limited. It’s like an early airplane that can’t yet truly fly; it only glides.
Integrating an AI chatbot into your support interface sounds neat until you factor in the resources spent double-checking outputs, cleaning up bad data, and juggling user complaints about inaccuracies.
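To see where that effort goes, here is a minimal sketch of what “double-checking outputs” tends to look like in code. `call_llm` and `looks_grounded` are hypothetical stand-ins for your provider’s API and your validation logic, not real libraries:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM provider's API call."""
    raise NotImplementedError

def looks_grounded(answer: str, sources: list[str]) -> bool:
    """Hypothetical check that an answer is supported by known sources."""
    raise NotImplementedError

def answer_support_ticket(question: str, sources: list[str], max_retries: int = 2) -> str:
    # Every answer now needs validation, retries, and a human fallback,
    # which is where the promised "efficiency boost" quietly evaporates.
    for _ in range(max_retries):
        draft = call_llm(f"Answer using only these sources: {sources}\n\n{question}")
        if looks_grounded(draft, sources):
            return draft
    return "Escalated to a human agent."
```

None of this plumbing shows up in the vendor demo, but all of it shows up in your budget.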
The 2010s were the age of scaling; now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing. — Ilya Sutskever, co-founder of OpenAI, to Reuters
We may have reached a point where, yes, the improvements continue, but not at the breakneck, world-transforming pace that early GPT versions seemed to promise.
You probably know all of this, but consider it a reminder. Why aren’t LLMs (not even o3) the miracle bringing magic to our products yet?
LLMs confidently spit out things that might be outright wrong, or worse; I’ve written separately about how AI learned to deceive after RLHF.
This leads to the need for repeated checks and re-checks, killing the supposed “efficiency boost.”
The same AI that can chat about anything from quantum physics to the latest box-office flops struggles with your unique business processes, and it breaks under minor changes because unfamiliar variations trip it up.
Even the best models struggle with math and lack common sense. Results from o1 and o3 are hard to replicate and might not perform as advertised without pre-training.
Heavy augmentation helps in narrow domains but not open-ended ones.
Scaling is expensive and struggles in unstructured domains.
These issues create an AI plateau: early excitement collides with messy reality.
Not to mention the integration headaches: if your data is a mess or your internal workflows are antiquated, an LLM won’t magically fix them. Real transformation demands proper infrastructure work.
As highlighted in Goldman Sachs research, many analysts agree that the industry is still in its infrastructure and experimentation phase. Billions are spent on research, data centers, and chips, yet AI’s economic potential remains unrealized.
AI has a future, no doubt. But the technology still has fundamental limitations that boards and shareholders often overlook in their zeal for the Next Big Thing.
Two Scenarios To Play Out For AI Strategy
You invest heavily, and everyone around the table expects some returns.
So, how do you decide what to do when the hype feels inescapable, and the potential for a real breakthrough is still alluring?
The key (which you know so well) is to ask yourself some brutally honest questions and be ready to act on the answers.
Scenario 1: If You’re Thinking Of Skipping AI Altogether.
What if you’re wrong?
If your competitors nail AI, what edge could they gain? Faster processes, cost reductions, better customer engagement… and more.
Imagine they use AI to deliver services your customers didn’t even know they needed while you’re still figuring out the basics.
And then there’s the industry itself. What if AI adoption becomes standard? If you wait too long, catching up could cost you more in both time and resources. The tools and talent will be harder to get, and your competitors will have a head start you can’t close.
Worst of all, you might miss opportunities entirely; you’ll never know if you don’t take that first step.
So you think maybe you can afford at least an experiment.
Scenario 2: If You Are Considering Giving AI A Try.
What if you’re wrong?
Are you investing in AI without solving a real problem? Shiny tools mean nothing if they don’t create tangible value. Throwing AI into your product lineup or internal processes won’t fix messy data or systems that aren’t even cloud-ready.
Then there’s the risk to your customers. LLMs are unpredictable by nature, prone to errors, and can damage trust when they get basic things wrong. A botched rollout or public misstep can hurt your brand, as other failed products have shown over the past two years.
And let’s not forget your investors and board members, because they’ve read the headlines and seen the hype. How do you plan to defend those decisions when the ROI doesn’t show up in the next quarterly report?
It’s Every Bit As Much About The Politics As The Technology
I would even argue that when it comes to commercial strategy, it’s less about technology and more about politics and money.
Most AI decisions aren’t made because the tech is ready.
They’re made because competitors are moving, boards are pushing, or investors expect something shiny in the next quarterly report. It’s never about whether AI is the right tool for the job; it’s about optics, influence, and who gets to claim the lead.
This is classic game theory. Most of you are forced to respond, whether or not the move makes sense for your business. The fear of falling behind often trumps rational decision-making, regardless of whether you're building a product or managing the company.
I am curious about your next move, given politics are unavoidable when forming your AI strategy.