Human-grade artificial intelligence is not only available — it's nearly free.
That claim would’ve sounded like science fiction just two years ago. Today, it’s a verifiable economic reality.
Between November 2022 and October 2024, the cost to run AI at GPT-3.5’s level of performance dropped by more than 280x — from $20.00 to just $0.07 per million tokens.
And this isn’t cherry-picked. Stanford’s latest AI Index Report confirms a broad collapse in inference costs across performance benchmarks and model types.

At $0.07 per million tokens, one dollar buys you roughly 14.3 million tokens.
With English averaging about 1.3 tokens per word, that translates to roughly 11 million words for a single dollar, more than most people will read in a decade.
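A quick back-of-envelope check of those figures, using only the two rates cited in the text ($0.07 per million tokens, ~1.3 tokens per English word):

```python
# Back-of-envelope check of the figures above, using the assumed rates
# from the text: $0.07 per million tokens, ~1.3 tokens per English word.
price_per_million_tokens = 0.07
tokens_per_word = 1.3

tokens_per_dollar = 1_000_000 / price_per_million_tokens
words_per_dollar = tokens_per_dollar / tokens_per_word

print(f"Tokens per dollar: {tokens_per_dollar:,.0f}")  # ~14.3 million
print(f"Words per dollar:  {words_per_dollar:,.0f}")   # ~11 million

# The headline decline, from $20.00 to $0.07 per million tokens:
print(f"Price drop: {20.00 / 0.07:.0f}x")              # ~286x
```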
You might be thinking: doesn’t ChatGPT cost $20 per month for unlimited use?
Yes — but that’s a different conversation. We’re talking about the marginal cost of using specialized LLMs via API, within business applications.
Unlike flat-rate consumer plans, API-based usage powers scalable, task-specific workflows with variable unit costs.
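To make the flat-rate vs. per-token distinction concrete, here is a minimal sketch of the marginal cost of an API-based workflow. The $0.07-per-million-token rate is the figure from the text; the workload numbers are purely hypothetical, and actual rates vary by model and provider.

```python
# Sketch: marginal monthly cost of an API-based workflow at an assumed
# rate of $0.07 per million tokens (actual rates vary by model/provider).
def monthly_api_cost(requests_per_day, tokens_per_request,
                     price_per_million=0.07, days=30):
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1_000_000 * price_per_million

# Hypothetical workload: 10,000 requests/day at 2,000 tokens each.
print(f"${monthly_api_cost(10_000, 2_000):.2f}/month")  # $42.00/month
```

Because the cost is a variable unit cost, it scales linearly with usage, which is exactly what makes it budgetable for task-specific business workflows.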
How Does This Compare to Historical Price Drops?
Technology has always marched to the beat of deflation. Televisions, personal computer software, and internet services have all become dramatically cheaper over time:

In 2001, Microsoft Office retailed for $450; by 2023, the price had dropped 15% to $380.
Home broadband averaged $40/month in 2006, falling 25% to $30/month just two years later.
A top-tier HDTV cost $8,000 in 1999. By 2001? $6,500. And today, high-end models sell for under $1,000.
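A quick calculation puts these declines side by side, using only the prices cited above (the AI figure uses the drop from $20.00 to $0.07 per million tokens):

```python
# Percentage price drops for the examples above, each over the
# window the text cites (AI inference: Nov 2022 to Oct 2024).
def pct_drop(old, new):
    return (old - new) / old * 100

drops = {
    "Microsoft Office (2001 -> 2023)": pct_drop(450, 380),
    "Home broadband (2006 -> 2008)": pct_drop(40, 30),
    "High-end HDTV (1999 -> 2001)": pct_drop(8000, 6500),
    "AI inference (2022 -> 2024)": pct_drop(20.00, 0.07),
}
for name, drop in drops.items():
    print(f"{name}: {drop:.1f}% cheaper")
```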
These are steep declines. But they look modest next to what’s happening in AI:

When indexed against historical benchmarks, the drop in the cost of AI inference (the “running cost” of intelligence) is staggering. Over the past two years, it has collapsed faster and deeper than any of the other categories.
The takeaway: The price of intelligence is falling at a rate we’ve never seen before.
Why AI is Getting So Cheap
The driving force behind AI’s falling cost is efficiency, enabled by rapid innovation in both software models and hardware infrastructure. Just as computing moved from room-sized mainframes to personal devices, AI is now delivering more intelligence with fewer resources.
Open-Source Competition
While OpenAI, Anthropic, and Google continue to lead the market, open-source alternatives from Meta, Mistral, and DeepSeek are rapidly closing the gap, now reaching comparable performance on many benchmarks.
Open models have not yet fully reached parity, but the convergence is clear, and the pace of innovation suggests the remaining performance gap may shrink further, or even disappear, in 2025.
This open ecosystem drives prices down, as vendors are forced to compete on both quality and cost.

Custom Silicon and Chip Innovation
Nvidia still dominates the AI market with its GPUs: chips originally designed for graphics, but efficient for AI tasks due to their parallel processing capabilities. But the landscape is diversifying.
Amazon (Inferentia), Google (TPUs), Cerebras, and Groq have introduced custom, non-GPU accelerators, several of them optimized specifically for inference (running models, not training them). The result? Massive efficiency gains.
Take Groq: it claims 10x faster inference speeds and far lower energy consumption compared to standard GPUs.
Further, the world’s leading chip fabricator, TSMC, has resisted price hikes, continuing to consolidate its leadership in this evolving ecosystem.
The Marginal Cost of Intelligence is Heading to Zero
OpenAI’s CEO Sam Altman put it plainly: "The cost of intelligence will converge to the cost of energy."
Major AI providers are acting accordingly. As data center demand soars, the race is on to secure cheaper, cleaner energy:
Microsoft has contracted the entire output of a revived nuclear reactor.
Amazon is investing over $52 billion in nuclear projects across multiple states.
Google has partnered with Kairos Power to deploy small modular reactors by 2030.
These efforts aren’t limited to nuclear. Tech giants are also investing in solar, wind, and hydrogen, aiming to secure reliable, plentiful energy supply while also driving down the cost per electron.
If Altman is right, and intelligence cost converges with energy cost, then as the price of energy falls, so too will the marginal cost of intelligence.
Implications of “Near-Free” Intelligence
Basic economics is clear: when the cost of something approaches zero, demand skyrockets. Just look at what happened with GPS and messaging apps: when navigation and communication became free, usage became ubiquitous.
The same is happening with intelligence.
In 2025, we’re already seeing an explosion in AI adoption. According to McKinsey, 78% of organizations now use AI in at least one business function — up from 55% in 2023:

This is likely to be just the beginning. As prices continue to fall, AI will power not only more use cases, but entirely new approaches to work.
This shift goes far beyond handing ChatGPT to employees and calling it transformation. It requires reimagining business processes in a world where high-skill intelligence is functionally free.
In 2025 Companies Need:
To Work From First Principles
Most organizations operate in a post-digital, pre-AI paradigm. To move forward, leaders must rethink workflows from first principles, asking fundamental questions about whether certain processes and roles should exist in an era of abundant intelligence.
To Define Their Business Priorities
With all the hype around AI, many organizations are asking “What’s our AI strategy?” This is the wrong question, and it leads to experimentation with cool AI tools that don’t actually solve a problem. The right question is: “What are our business priorities?” Then ask how AI can serve those.
Great AI implementations don’t have to be particularly cool; they simply have to perform a job-to-be-done in service of a well-defined problem.
To Get a Quick Win and Build Momentum
Many companies may eventually undertake a large-scale initiative like consolidating data platforms or implementing new ERP systems. However, these initiatives can take years. Given the rapid evolution of AI (e.g. there is a new frontier model every 3-4 months), it’s critical that leaders identify a quick win that solves a pressing business problem. This gives an organization the momentum, skills and confidence to ask more ‘why’ questions to challenge the status quo, and to graduate to more ambitious initiatives over time.
What Executives Should Do Now
Wait-and-see is no longer a viable approach—it’s a compounding disadvantage. Here's how to act:
Define your priorities: Start with the business, not the tech. Identify the biggest problems worth solving.
Audit your workflows: Use our jobs framework to pinpoint where humans are bogged down in repetitive, low-leverage tasks.
Measure the cost: Many processes cost hundreds of thousands in staff time but could be automated for tens of thousands.
Build team fluency: AI isn’t just for engineers. Every team—marketing, sales, customer service, finance—needs to be able to identify which tasks can be streamlined with AI.
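As a rough illustration of the “measure the cost” step, a back-of-envelope comparison might look like the sketch below. Every figure in it is hypothetical, chosen only to show the shape of the calculation, not a benchmark.

```python
# Illustrative only: annual staff-time cost of a manual process vs. an
# assumed cost to automate it. All inputs are hypothetical examples.
def annual_staff_cost(hours_per_week, hourly_rate, headcount, weeks=48):
    return hours_per_week * hourly_rate * headcount * weeks

manual = annual_staff_cost(hours_per_week=10, hourly_rate=60, headcount=8)
automation_cost = 40_000  # assumed build + first-year running cost

print(f"Manual process:     ${manual:,.0f}/year")
print(f"Automation (yr 1):  ${automation_cost:,.0f}")
print(f"First-year savings: ${manual - automation_cost:,.0f}")
```

Even crude numbers like these are usually enough to rank candidate processes and pick the quick win described above.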