There is no such thing as a free lunch.
Except, it seems, in the AI world, where some of the most expensive models are surprisingly offered for free.
Llama, for example, can be downloaded at no cost, despite a training price tag running into the tens of millions of dollars. And that’s before we get to the top-tier models. That’s a whole lot of capital being “donated” to humanity, making you wonder whether Mark Zuckerberg and Elon Musk are vying for sainthood.
On the other hand, there’s OpenAI, whose very name suggests “openness,” yet it has refused to disclose much about GPT-4 and later models. The question of “open source” vs. “closed source” has become both perplexing and heated in the AI community.
Let’s not jump to conclusions; I want to walk you through the drama and then my analysis.
The Concept of Open Source
What does “open source” really mean?
A little bit of history.
The early software days were dominated by academics who valued publicizing research findings. In that era, there was no notion of “closed-source” software.
Source code was shared freely for most software.
As personal computers caught on, so did the demand for all sorts of new functionalities. Then, this massive software market was born. Copyright laws kicked in, and software evolved into a paid commodity.
Today, we see paying for software as standard. Even people who pirate software (as many did years ago in Asia) at least recognize that software generally isn’t free.
Still, older tech enthusiasts remember a time when everything was shared—so they were upset to see their open code later sold for profit.
Each new contributor stands on the shoulders of those who came before and is expected to pay it forward. Over the years, open-source communities have produced mind-blowing feats of collaboration, e.g., Linux, which now powers most of the world’s servers and every Android phone. Nearly 8,000 developers from about 80 countries have contributed code; many were hobbyists.
The consensus was that the fruits of shared knowledge should remain accessible to all.
A group of “cyber-traditionalists” split off to uphold the creed of shared knowledge even after closed-source software became mainstream.
Note: “open source” doesn’t just mean “free” or “pirated.”
The question I want to explore here with you:
Is the same selfless open-source spirit still alive in today’s AI gold rush?
OpenAI’s Early Idealism
Back in 2015, Google dominated AI by gathering top talent and acquiring DeepMind, the creators of AlphaGo.
This was when Sam Altman joined forces with Elon Musk. Together they founded OpenAI on high-minded ideals.
Sam Altman has long believed in the inevitability of artificial general intelligence (AGI), the almost mythical AI that’s all-knowing and all-powerful. He used to ask job applicants, “Do you believe in AGI?” and would only hire those who said “yes.”
Musk, however, was worried about AI running amok and dooming humanity.
How did these two collaborate?
FOMO!
“Fear of Missing Out”: the anxiety that one might miss an exciting opportunity or experience.
They felt threatened by what DeepMind had achieved.
Musk mentioned that he repeatedly warned Google cofounder Larry Page about AI risks, only to find that Page wasn’t that concerned. Meanwhile, Sam Altman wanted to ensure that if AGI did emerge, it wouldn’t be under the sole control of Google. According to him, it should be shared globally so as not to catch humanity off guard. So Musk put up the money, Altman handled operations, and OpenAI was born in late 2015.
Of course, we can’t leave out Ilya Sutskever. He was OpenAI’s Chief Scientist, a protégé of Geoffrey Hinton. At one point, he told Geoffrey Hinton he needed a brand-new programming language for his research. Hinton warned him not to waste months writing one from scratch, but Ilya replied that he had already done it.
With a Chief Scientist like this, OpenAI was well-positioned to chase the holy grail of AGI.
When Google’s AI team published “Attention Is All You Need” in 2017, it didn’t immediately cause a sensation. Ilya, however, recognized its significance at once and called it the key to the next AI wave.
In plain English, Transformers focus on the important parts of the input, process sequences in parallel (which makes training fast), and scale up easily.
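That “focus on the important parts” is literal: the paper’s core operation, scaled dot-product attention, scores how much each token should attend to every other token. A minimal NumPy sketch (a toy illustration, not OpenAI’s or Google’s code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into weights summing to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query scores every key; softmax turns scores into weights;
    # the output is a weighted mix of the values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

# Toy example: 3 tokens, 4-dimensional embeddings (self-attention: Q = K = V).
rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((3, 4))
out = attention(Q, K, V)
print(out.shape)  # → (3, 4)
```

Because every token attends to every other token in one matrix multiply, there is no sequential bottleneck, which is exactly why the architecture scales so well on GPUs.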
Starting with GPT‑1 (100 million+ parameters) in 2018, OpenAI moved fast to GPT‑2 (1.5 billion parameters, expanded at an unprecedented scale) in 2019. That gamble paid off. GPT‑2 shocked the AI circle with its human-like sentence generation.
GPT‑2 was open-sourced, built on Google’s open research plus OpenAI’s own engineering. (For OpenAI’s own account of why its structure later changed, see its blog post “Why OpenAI’s Structure Must Evolve To Advance Our Mission.”)
The early days of large language models exemplify open-source synergy at its best. But the utopian story hit turbulence sooner than anyone expected.
Growing models feed on massive funding, which brings corporate interests, power struggles, and shifting priorities. Unlike Meta, OpenAI simply could not stay competitive if it opened its models: paid access to them is its only source of income.
Musk-OpenAI Split, Then xAI
OpenAI started small, like training AI to play video games. Costly, but nothing compared to building massive language models.
At first, Elon Musk was the benefactor, aiming to counter Google’s dominance. But when OpenAI’s open-source breakthroughs started catching attention, Musk’s tune changed. He worried the work would only help Google, ignoring that GPT relied heavily on Google’s open research.
Classic Musk move: he wanted control.
So Musk proposed folding OpenAI into Tesla and SpaceX, completely ignoring its open-source mission. When that didn’t happen, he walked away and pulled his funding.
That happened in 2018, and OpenAI was in trouble. No funding, no clear path forward. Sam Altman came up with a bold solution: create a for-profit subsidiary controlled by the nonprofit parent. This let them raise money while capping excessive profits (anything over 100× would return to the nonprofit).
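The “capped-profit” mechanics are easy to illustrate with toy numbers (these figures are hypothetical, not OpenAI’s actual terms):

```python
def capped_return(investment, gross_return, cap_multiple=100):
    """Split a gross return between the investor (up to the cap)
    and the nonprofit parent (everything above it)."""
    cap = investment * cap_multiple
    investor_share = min(gross_return, cap)
    nonprofit_share = max(gross_return - cap, 0)
    return investor_share, nonprofit_share

# A hypothetical $10M stake that returns $2.5B:
# the investor keeps at most 100x ($1B); the rest flows to the nonprofit.
print(capped_return(10, 2_500))  # → (1000, 1500)
```

The structure let investors chase venture-scale upside while, on paper, keeping runaway profits under the nonprofit’s control.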
The move worked. In 2019, Microsoft invested $1 billion, later bringing the total to $13 billion. With this backing, OpenAI launched GPT‑3 in 2020, followed by GPT‑3.5 and ChatGPT.
But this deal came with a cost. GPT‑3 wasn’t fully open-sourced: no weights, no architecture details. This is when the general public realized that OpenAI’s ideals had given way to commercialization.
In early 2024, Musk sued OpenAI, claiming it had violated its founding agreements; he demanded his money back and pushed for the tech to be open-sourced. Was it altruism? Or revenge?
Ironically, Musk once agreed with Ilya Sutskever that key tech should remain confidential as they approached AGI. But after ChatGPT’s success, Musk became a vocal open-source advocate—conveniently for a man building his own AI company.
Musk’s xAI launched “Grok” and, in 2024, open-sourced Grok‑1, a model with over 300 billion parameters. xAI secured $6 billion in funding, making its “open-source stance” seem more strategic than selfless. Impressive? Sure.
Practical to open source community? Not really.
xAI is technically still open source, but it does not yet offer models small enough for individuals to run in practice.
Meta, The New Torchbearer for Open Source AI?
So does that mean the once-idealistic AI open-source path is a dead end?
Not… yet.
Meta has an extensive open-source track record.
For instance, PyTorch. This is one of the most used machine learning frameworks, and it originated from Meta’s AI labs. When LLM fever took off, Meta made waves in early 2023 by open-sourcing LLaMA (65 billion parameters). A flood of LLaMA-based variants popped up afterward, including many “re-skinned” versions. Since then, over 7,000 derivative models have been created worldwide.
But Meta must also deal with commercial realities, just as OpenAI does.
However, there are reasons to think Meta can strike a better balance between open knowledge and profit.
Meta already pulled this off once with the Open Compute Project (open-sourced server and data-center designs). Other companies adopted the designs, hardware got cheaper, and Meta ultimately saved billions of dollars.
Now, Meta hopes to replicate that success with LLaMA. If more developers build on LLaMA, and more services adopt LLaMA-based models, the industry norms might coalesce around it. The “freeing” of LLaMA could lead to an ecosystem that’s actually profitable for Meta in the long run. Shareholders agree; since Meta shifted from “metaverse hype” to “AI altruism,” the stock has doubled in a year.
LLaMA’s license restricts how you can use the model, particularly for training competing systems, and companies with over 700 million monthly active users need explicit permission from Meta.
It’s business, after all.
Personal View on Open Source AI
There is no right or wrong in choosing open-source or proprietary AI.
As a more business-oriented person, I see open vs. closed-source AI models as less about morality and more about business strategy. Companies like OpenAI, Meta, and xAI have adopted approaches aligned with their goals, funding realities, and market positions.
Meta and xAI release open models (LLaMA, Grok) to foster innovation and grow an ecosystem, balancing openness with business restrictions.
OpenAI and Anthropic prioritize closed-source to sustain their business, relying on licensing and partnerships.
On a personal level, my partner and I put our bet on Meta. We believe it’s poised to become one of the most influential AI companies in the long run. Reasons:
Open Source Strategy. Despite the licenses and caveats, Llama is open-source enough to be practical, especially the smaller models that individuals and small businesses can actually run. (Compare that to xAI, which offers only massive models with no workable small-scale alternatives.)
Massive Built-In User Base. Before Musk’s xAI, Meta was practically the only major AI player with direct access to massive networks of everyday consumers who already spend a lot of time on their platforms—30 minutes per day on Facebook vs. around 6 minutes on ChatGPT.
Proven Revenue Engine. Meta’s advertising network and business partnerships allow it to integrate AI seamlessly, creating a synergy that competitors may struggle to replicate.
I believe this is a debate worth having. Share your thoughts in the comments or private chat.
As AI evolves, companies make choices aligned with their commercial goals. It’s easy to focus on the surface and overlook how these decisions profoundly impact people like you and me.