Consider a number that almost nobody outside of AI research talks about openly: 300,000.
That is roughly how much more computing power was used to train a state-of-the-art AI model in 2023 than in 2012. Not 300%. Not 30 times. Three hundred thousand times. In eleven years.
Now consider the software company sitting across from that number. It was built on a clean, competitive logic — high switching costs, 70% gross margins, a loyal enterprise client base — the kind of business any investor would have been proud to own in 2018. It is still running on that same logic today.
The question isn't whether that company is well-managed. It probably is. The question is whether 'well-managed' is enough when the things disrupting you compound.
THE CORE ARGUMENT
Software moats aren't disappearing. They're being reassigned — to whoever controls the compounding loop.
This essay explains the mechanism behind that reassignment. Not which stocks to buy. Not which sectors are 'AI plays.' But the structural logic of why certain business models will accumulate advantage in an AI-saturated world while others — even profitable, well-run ones — will slowly lose pricing power, margin, and relevance.
The lesson is partly about AI. But it's mostly about compounding, and our stubborn inability to see it coming.
I. The Linear Brain in an Exponential World
Here is an experiment worth running in your head. Take 30 linear steps. You end up 30 meters from where you started. Intuitive. Predictable. Now take 30 exponential steps—doubling each time. You have circled the planet 26 times.
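The arithmetic behind that claim is easy to check. A minimal sketch, assuming one-metre steps and taking Earth's circumference as roughly 40,075 km:

```python
# 30 linear steps of one metre vs. 30 doubling steps (1 m, 2 m, 4 m, ...).
EARTH_CIRCUMFERENCE_M = 40_075_000  # metres, approximate

linear_distance = sum(1 for _ in range(30))          # 30 metres
exponential_distance = sum(2**i for i in range(30))  # 1 + 2 + 4 + ... = 2**30 - 1

laps = exponential_distance / EARTH_CIRCUMFERENCE_M
print(linear_distance)       # 30
print(exponential_distance)  # 1073741823
print(round(laps, 1))        # 26.8
```

Thirty metres versus roughly a billion: the same number of steps, two unrecognisably different outcomes.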

The mathematics is not difficult. But the intuition is almost impossibly hard to hold. Human brains evolved to track things that move linearly: seasons, animal migrations, and the growth of crops. We are not wired for compounding. We understand it abstractly. We fail to feel it viscerally.
And that gap — between what we understand and what we feel — is where most planning fails.
Most resource allocation models are built on the assumption of linearity. They measure last quarter, project a multiple, and adjust for known risks. They are not wrong, exactly. They are just optimised for a world where nothing bends suddenly.
But inflection points are not gradual. They arrive, and then they have already arrived. By the time the financial statements confirm what the trajectory already showed, the window to respond has usually closed.
The real question is not 'is this company growing?' It's 'is this company's growth mechanism multiplying or merely adding?'
II. What a Moat Actually Is (And What It Isn't)
A moat is not a product feature. It is not a patent. It is not even a brand in isolation. A moat is a mechanism: a structural reason why your returns on capital hold up, or improve, relative to competitors as time passes.
The old software competitive advantage relied on three things: switching costs (it is painful to leave for an alternative), network effects (the product gets better as more people use it), and scale economics (bigger means cheaper). These still exist. But AI is quietly repricing all three.

Switching costs used to mean contract lock-in and the frictional cost of replacing. That still matters. But the deeper switching cost now is data gravity: the accumulated history of decisions, workflows, and model training that lives inside a system. A competitor can replicate your interface. They cannot replicate years of your customers' data patterns.
Network effects used to mean user count. Now they mean the feedback loop between users and models. Every interaction trains the system. Every correction improves it. The network effect has become an intelligence flywheel, and the flywheel does not stop spinning when the workday ends.
Scale economics used to mean you could hire faster and build more. Now two people with the right AI stack can replicate an enterprise solution at roughly 10% of the cost. Scale still matters — but it has shifted from headcount and capital to inference efficiency and model quality.
To understand what survives AI deflation, you need to understand which moats are immune to that repricing, and which are not.
MOAT COMPARISON: OLD PLAYBOOK VS. AI ERA
| Moat Type | Old Playbook | AI-Era Reality |
| --- | --- | --- |
| Switching Cost | Contract lock-in | Workflow embedding + data gravity |
| Network Effect | User count | Data × model improvement loop |
| Scale | Headcount & capex | Inference efficiency + flywheel speed |
| Cost Advantage | 60–80% gross margin | 50–60% margin, 10× cheaper delivery |
| Knowledge | Senior staff expertise | Documented, searchable, multiplied by AI |
III. The Intelligence Flywheel
Amazon did not set out to build a flywheel. They set out to sell books cheaper than anyone else. The mechanism they created — lower prices attract customers, customers attract third-party sellers, sellers attract more customers, scale drives prices lower — turned out to be self-reinforcing in a way that most competitors spent years failing to respect.

The same structural logic is now playing out in AI-native software. But faster, and harder to see.
Adaptive AI systems learn and self-correct in real time, without manual retraining. Each user interaction adds a data point. Each data point marginally improves the model. A marginally better model retains users slightly longer, generates slightly more data, and improves the model slightly further. Each turn of the flywheel sounds gentle. Compounded over years, it is not.
Gartner estimates that businesses implementing adaptive AI will outperform competitors that don’t by 25% by 2026. That figure is almost certainly imprecise, but the direction is not hard to argue with. When one competitor's system is getting imperceptibly smarter every hour while another's is static between release cycles, you are watching two different compounding rates operate on the same market.
Meanwhile, 82% of business leaders expect AI agents to automate core knowledge work — email generation, code writing, data analysis — within three years. The implication for software vendors is brutal: if your product automates a workflow, but a competitor's product learns how to automate that workflow better every week, you are not in the same business. You are in a race where you cannot pace yourself into winning.
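To make "two different compounding rates" concrete, here is a toy simulation. The hourly improvement rate, the release cadence, and the per-release gain are illustrative assumptions, not measurements — the point is the shape of the divergence, not the specific numbers:

```python
# Toy model: a system that improves 0.01% per hour of use vs. a system
# that gains 10% only at each quarterly release. All rates are illustrative.
HOURS_PER_QUARTER = 13 * 7 * 24  # ~one quarter of wall-clock hours

def adaptive_quality(start, quarters):
    """Compounds a tiny gain on every hour of interaction."""
    q = start
    for _ in range(quarters * HOURS_PER_QUARTER):
        q *= 1.0001
    return q

def static_quality(start, quarters):
    """Improves only when a release ships, once per quarter."""
    q = start
    for _ in range(quarters):
        q *= 1.10
    return q

for quarters in (1, 4, 8):
    print(quarters,
          round(adaptive_quality(1.0, quarters), 2),
          round(static_quality(1.0, quarters), 2))
```

Even with a per-hour gain four orders of magnitude smaller than the per-release gain, the hourly compounder pulls away within a year or two. That is the asymmetry the essay is describing.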
IV. Velocity as Infrastructure
There is a team somewhere that added support for a newly released AI model within one hour of its public launch. One hour. That is not an engineering achievement in the traditional sense. It is a signal — about culture, architecture, and the kind of competitive infrastructure that does not appear on a balance sheet.
Engineering velocity determines how fast an organisation learns. Fast learning is not just efficient. It is compounding. High-velocity teams accumulate more validated decisions per unit of time. More validated decisions mean better judgment about what to build next. Better judgment closes the feedback loop faster.

In the prior era of software, velocity was an advantage. In the current era, it has become something closer to a survival requirement. The companies that can support a new model within hours of launch are not just faster than the ones that take weeks. They are operating in a structurally different position — closer to the frontier of what is possible, and therefore harder to displace by any competitor who cannot keep pace.
Meanwhile, domain expertise is compounding in a new way too. The old model of organisational knowledge relied on people: senior engineers who knew where the bodies were buried, account managers who remembered the client's preferences, analysts who had seen the cycle before. That knowledge walked out the door when they left.
Companies with five years of systematically documented problem-solving — structured, searchable, accessible to AI — have built something that does not quit or get poached. When a junior team member can instantly surface the reasoning behind a decision made four years ago, one expert's knowledge multiplies across the entire organisation simultaneously.
That is not an HR story. That is a compounding story.
V. What I Might Be Wrong About
The strongest counterargument to all of this is not ideological. It is physical.
Grid connection requests in key US data centre markets currently take four to seven years to process. Data centre vacancy rates have hit a record low of 1.9%, with over 70% of new builds pre-leased before completion. Procurement timelines for significant capacity now stretch beyond 24 months.

Physical infrastructure does not compound the way software does. And incumbents — the large enterprise software players with existing data centre relationships, long-term supply agreements, and regulatory relationships — are positioned to benefit disproportionately when physical constraints bind.
There is also a regulatory brake that could slow the flywheel. AI regulation follows a historically predictable arc: concern accumulates, documented problems mount, political pressure builds, and oversight arrives. With 72% of US adults now expressing concerns about AI, that arc is well underway. Layered compliance requirements, particularly at scaling thresholds, could slow exponential companies to something closer to linear speeds.
And exponential growth models always assume resources are unconstrained. They rarely are. Populations, markets, and technology adoption all follow S-curves that eventually bend. ExOs — exponential organisations — fail when they scale on weak foundations, implement in the wrong sequence, or get absorbed by incumbents who neutralise their model. The curve bends.
The honest answer is that infrastructure constraints and regulatory friction may delay the flywheel, but they do not reverse it. Companies building intelligence loops have structural advantages that survive constraint — they just arrive later than the most optimistic projections suggest.
VI. How to Spot This in the Real World
THRESHOLD METRICS & SIGNALS
| Green Signal | Red Flag |
| --- | --- |
| ROIC > WACC consistently | CAC rising, LTV flat |
| Gross margin ~40%+ and stable | Revenue concentrated in 3 clients |
| Users evangelize without marketing | Management tracks features, not outcomes |
| FCF growth sustained 10+ years | Churn masked by gross revenue growth |
| Team ships AI support within hours | Planning horizon still linear |
A few additional patterns worth watching for:
Management vocabulary is a leading indicator. Teams discussing AI adoption, gig-economy talent structures, and adaptive infrastructure signal genuine structural understanding — before the financials confirm it. Teams still discussing feature count and headcount growth are optimising for outputs, not outcomes.
User behaviour is a better signal than revenue, and it arrives earlier. Explosive usage without significant marketing spend, paired with unprompted user evangelism, indicates genuine product-market fit and early network effects. When customers proactively tell other customers, the flywheel is already spinning.
Watch churn carefully — specifically, the gap between gross and net figures. Adding $200K in monthly recurring revenue while losing $150K through cancellations tells a very different story than top-line growth implies. Churn masks the underlying mechanism quality better than almost any other metric.
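The arithmetic from that example, made explicit. The starting MRR base is an assumed figure for illustration; the $200K and $150K come from the paragraph above:

```python
# Gross vs. net MRR growth: the same headline hides different mechanisms.
starting_mrr = 2_000_000  # assumed base, illustration only
new_mrr      = 200_000    # added this month (from the example above)
churned_mrr  = 150_000    # lost to cancellations

gross_growth = new_mrr / starting_mrr                 # what the headline shows
net_growth   = (new_mrr - churned_mrr) / starting_mrr # what the mechanism shows
retention    = (starting_mrr - churned_mrr) / starting_mrr  # ignoring expansion

print(f"gross growth: {gross_growth:.1%}")  # gross growth: 10.0%
print(f"net growth:   {net_growth:.1%}")    # net growth:   2.5%
print(f"retention:    {retention:.1%}")     # retention:    92.5%
```

A 10% headline concealing 2.5% net growth is exactly the gap the gross/net comparison is meant to expose.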
And the most durable signal of all: return on invested capital consistently exceeding the cost of that capital, sustained across multiple market cycles. That is not a story. That is a mechanism.
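That test reduces to two numbers per year. A sketch with hypothetical figures — the WACC and the yearly ROIC series below are invented for illustration, not drawn from any company:

```python
# A moat, in accounting terms: ROIC consistently above WACC means every
# reinvested dollar compounds; below it, growth destroys value.
wacc = 0.09  # hypothetical cost of capital
roic_by_year = [0.18, 0.17, 0.19, 0.16, 0.18]  # hypothetical, across a cycle

spreads = [r - wacc for r in roic_by_year]
durable = all(s > 0 for s in spreads)  # positive spread in every year
print(durable)        # True for these figures
print(min(spreads))   # worst-year spread: still comfortably positive
```

One good year is a story. A positive spread in every year of the series is a mechanism.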
VII. The Takeaway
The moat question, reduced to its mechanism: does value in this business multiply, or merely add?

Software built on the old playbook — contract lock-in, static feature sets, headcount-driven scale — is not worthless. But it is operating in a market where the floor on 'good enough' is dropping fast. Two people with an AI stack and five years of domain knowledge can now replicate what used to require fifty engineers and a decade of institutional build-out.
The businesses that survive AI deflation are not necessarily the ones with the most capital, the best brand, or the fastest product team. They are the ones whose underlying mechanism compounds. Intelligence flywheels that improve with each interaction. Knowledge systems that multiply one expert across an entire organisation. Velocity infrastructure that closes feedback loops faster than the competition can run.
The companies that don't survive are not poorly managed. They are linearly managed in an exponential environment. And the difference between those two things, at first imperceptible, eventually becomes a gap that no amount of execution can close.

