Monday, February 16, 2026

Moats and Artificial Intelligence

Since at least mid-2025, engineers and business analysts with expertise in fields tied to the development of Artificial Intelligence (AI) systems have been publicly citing very simple, indisputable physical constraints that guarantee currently announced spending plans for AI infrastructure cannot be completed on the schedules touted by the firms involved. Of course, this means that investment decisions based on these announced spending plans are fundamentally flawed. Those constraints involve all of the following physical limits:

  1. shortages of capacity to manufacture the unique GPU chips optimized for AI computations
  2. shortages of capacity to manufacture basic DDR memory chips to house ever-larger data sets in memory for faster processing
  3. a lack of construction resources and expertise to design and erect dozens (hundreds?) of data centers slated to be required to meet computing demands
  4. a lack of AC power generation capacity to supply the additional gigawatts of raw power required
  5. a lack of capacity to manufacture the highly customized AC transformers and switching gear required to connect AC generators to the grid at the source and connect the grid to new data centers at the destination
  6. a lack of electrical grid capacity to carry the additional AC power already lacking from the generation facilities that don't exist to the data centers that would need it if they did exist

Any ONE of these factors alone is enough to prove the existing self-reinforcing justification for "investing" in AI is pure fantasy. Yet none of these realities seems to have made a dent in markets. Indeed, two further events, on February 12 and February 15 of 2026, confirm that reality is not sinking in. On February 12, Anthropic announced it had secured an additional $30 billion in equity funding to pay for equipment and power for continued infrastructure growth. Given the share of ownership surrendered for the $30 billion, normal mathematics equates that chunk to a total valuation of $380 billion for the entire company, which is not yet publicly traded. Anthropic made a point in its press release of stating that its revenue has grown from $1 billion less than three years ago to $14 billion. Of course, nothing is publicly stated about actual INCOME levels.

On February 15, OpenAI announced it was creating a "foundation" to manage the OpenClaw tool released about a week earlier by Peter Steinberger as an open source project. Upon its release, the tool underwent two emergency name changes due to trademark conflicts, then went viral among developers and hobbyists interested in seeing how linking multiple AI systems together with scripting capabilities might speed turnaround times for solutions or eliminate more drudge work. What they found instead was a tool that fully trusted any local connections made to it from its host machine, leaving it WIDE OPEN to hacking. That didn't seem to concern Sam Altman and OpenAI. It isn't clear what OpenAI thinks it is capturing by creating this "foundation" and folding Steinberger in as an employee. Normally, the purpose of such deals is to essentially catch and kill a potential competitor and/or its underlying technology, to leverage it internally or prevent competitors from using it.

Both of these news events are confirmation of another aspect of the business model of AI and any large business that AI business leaders are utterly failing to understand and reflect in their planning. The simplest term for this aspect going unheeded by industry leaders is the concept of a moat.


Business Models and Moats

The value of a moat to a castle in medieval times is fairly obvious. It makes it vastly more difficult for invading hordes to get INTO the castle. It also makes it more difficult for those inside to ESCAPE the castle. Moats in business provide the same conceptual protections. The more technical terms for these two functions are barriers to entry and barriers to exit. An established business LOVES barriers to entry for OTHER competing businesses, especially if the business has significant investments (millions? billions?) in fixed assets that require years of amortization to pay off.

An established business also LOVES barriers to EXIT -- for its current customers (and employees). One way to erect a barrier to exit might be deemed a positive approach and involves making your product so much more efficient, easier to use or inexpensive to operate compared to competitors that your existing customers have no incentive to switch. Of course, a negative approach for creating a barrier to exit is for your product to be so complicated and proprietary that adopting it fundamentally TRANSFORMS your customer's business processes and data in ways that have little synergy with ANY competing product. This drastically spikes so-called "switching costs" that will be incurred if the customer ever gets fed up and wants to switch to ANY other competitor.

The value of such "moats" to providers and the costs of such moats to customers cannot be overstated. In the information technology field, there are two proverbial statements heard nearly every day in the Fortune 500. One is that no migration project for a company's ERP / HR / financial planning systems has a) ever been completed on time, b) ever been completed on budget or c) ever delivered remotely close to 100% of what it promised. And this is for systems that might require three years of planning and tens of millions of dollars in one-time development and consulting fees, and will still cost millions of dollars per year in licensing to keep running, even if nothing is allowed to change from that point on.

Of course, the other proverbial saying in information technology circles is that virtually no Chief Information Officer ever remains employed through an ERP / HR / financial system migration. Why? Because the users of these systems are the most powerful executives in the company, they inevitably become frustrated by the poor functionality and soaring costs of touching these systems, and someone must be sacrificed when reality sets in.


Moats and AI

The concept of moats -- barriers to entry and barriers to exit -- is crucial for investors, politicians and the public to understand in the realm of AI for one key reason: the AI realm has no material barriers to entry or exit. Even those who don't explicitly analyze the position of firms from this formal strategic perspective are likely making decisions that still assume such moats exist, will provide the expected protections, and will support a competitive advantage for those who spend first and spend the most.

Nothing could be further from the truth.


Barriers to Entry?

In terms of barriers to entry, CEOs of the top firms involved with AI are all behaving as though the following sequence of events is guaranteed:

  • they spend C billion in capital building out infrastructure for P units of processing
  • that infrastructure built with capital C will cost O billion per year in recurring operations expense
  • customers adopt their solution and generate U level of usage equating to R units of revenue
  • if the amortization on C and recurring opex spend O are covered by the R level of revenue, the customers they attracted will stick around and the business model will (eventually) pay off
  • since the capacity P already exists and the revenue R is already flowing to them, no other business will have the incentive to also invest C up front to try to win over those customers and the associated revenue R
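The sequence above can be reduced to a simple payback calculation. A minimal sketch follows; every figure in it is a hypothetical placeholder, not an estimate for any real firm:

```python
# Hypothetical payback sketch for an AI infrastructure build-out.
# All figures are illustrative placeholders, not estimates for any real firm.

def payback_years(capex_b, opex_b, revenue_b):
    """Years of net cash flow needed to recover up-front capital C.

    capex_b   -- up-front capital C, in billions
    opex_b    -- recurring operations expense O per year, in billions
    revenue_b -- revenue R per year, in billions
    """
    annual_net = revenue_b - opex_b
    if annual_net <= 0:
        return float("inf")  # revenue never even covers operating costs
    return capex_b / annual_net

# C = $50B, O = $8B/yr, R = $15B/yr  (all hypothetical)
years = payback_years(50, 8, 15)
print(round(years, 1))   # 7.1 -- years needed to recover the capital
print(years > 5)         # True -- longer than a 5-year hardware lifespan
```

Even with these generous made-up numbers, the capital is not recovered before the hardware it bought is obsolete, which is exactly the gap the "guaranteed" sequence glosses over.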

That sequence SOUNDS logical. Whether consciously considered or not, such thinking seems to bypass the normal skepticism and fear that should be present when someone wants to spend billions of dollars on something. Surely, there's no way a company can spend $50 billion on five data centers this year and have them become worthless next year. They were worth $50 billion last month when you finished loading up the racks and powering on the servers. The gear is all still there. Surely, it's still worth closer to $50 billion than zero, right? Right?

This thinking is seriously flawed. First, there is no guarantee a competitor will ALSO need to shell out the same up-front capital C that an incumbent did. Why? Because every incumbent is betting on the same "brute force" approach to training AI systems: feeding them ever larger bodies of electronically ingestible data. That assumption is flawed in multiple ways. The total volume of "real" electronically ingestible data has already been consumed. Any numeric increase in petabytes of "new" data ingested for training is likely AI-generated data, not original human-generated data. This means these brute-force AI efforts are already eating their own tails, poisoning future rounds of training with "data" of highly dubious quality and provenance.

Second, this approach assumes there are no other algorithms for modeling information and training systems that do not require exponential increases in training set size. AI executives should already understand this assumption is flawed based upon the DeepSeek system created in China and released in January of 2025. That system matched OpenAI's contemporaneous release on most quality measurements while requiring only about five to ten percent of the training corpus and about five to ten percent of the compute to run the final model.

Customers of AI systems have no loyalty to the firm that spent the most and spent it first. All other things being equal, customers will use whichever AI solution solves their particular problems either the fastest or at the least cost. And that is why barriers to exit are so important to understand.


Barriers to Exit?

The initial discussion above regarding moats in business used "ERP" systems for Fortune 500 firms as an example of a business sector with highly effective moats. ERP systems are extraordinarily complex and difficult for upstarts to develop without years of accumulated understanding of business requirements in payroll, HR, financial planning, operations management, etc., across multiple business sectors (manufacturing, retail, shipping, services, etc.). Such systems are hard to implement because of the complexity and variety of models required to house data within them and transform it for sharing with other systems. This not only makes the product difficult for competitors to clone, it also makes it very hard for the customer to leave an incumbent vendor and drop in something else without great risk to business continuity and high costs.

AI solutions being promoted to corporations have no such barrier to exit. Ironically, AI systems lack such barriers to exit because of the very nature of their primary "API", the prompt screen, that relies on the ability to parse "plain language" and derive far more complicated expectations from it. This merits some examples and explanation.

Existing systems are typically tied together using complex web services that rigidly structure information for transmittal, then convert it back to internal models for further processing. For example, information about a Certificate of Deposit purchased through a brokerage at a distant bank might look like this when described to a relational database:

 CREATE TABLE `cds` (
  `cd_id` int(5) NOT NULL AUTO_INCREMENT,
  `status_id` int(3) DEFAULT NULL,
  `brokerage` varchar(30) NOT NULL,
  `cdbank` varchar(30) NOT NULL,
  `cusipid` varchar(15) NOT NULL,
  `confirmationid` varchar(15) NOT NULL,
  `depositamount` decimal(9,2) NOT NULL,
  `balanceamount` decimal(9,2) NOT NULL,
  `annualpercentagerate` decimal(5,3) NOT NULL,
  `termmonths` int(3) NOT NULL,
  `compoundmonths` int(3) NOT NULL,
  `withholdrate` decimal(5,3) NOT NULL,
  `purchasedate` date NOT NULL,
  `settledate` date NOT NULL,
  `maturedate` date NOT NULL,
  `balancedate` date NOT NULL,
  `autorenew` varchar(1) DEFAULT 'N',
  `manualrenew` varchar(1) DEFAULT 'N',
  UNIQUE KEY `cd_id` (`cd_id`),
  KEY `purchasedate` (`purchasedate`),
  KEY `settledate` (`settledate`),
  KEY `maturedate` (`maturedate`)
) ENGINE=MyISAM AUTO_INCREMENT=162 DEFAULT CHARSET=utf8mb3 COLLATE=utf8mb3_general_ci;

but might look like this when converted to text to send in a request from one system to another in JSON (JavaScript Object Notation):

{
    "cd_id": 161,
    "status_id": 2,
    "brokerage": "Fidelity",
    "cdbank": "Northeast Bank",
    "cusipid": "DS1240098",
    "confirmationid": "#B10PHDD",
    "depositamount": 40000,
    "balanceamount": 40000,
    "annualpercentagerate": 0.0595,
    "termmonths": 12,
    "compoundmonths": 3,
    "withholdrate": 0,
    "purchasedate": "2025-02-16",
    "settledate": "2026-02-29",
    "maturedate": "2027-02-19",
    "balancedate": "2026-02-16",
    "autorenew": "N",
    "manualrenew": "Y"
}

This is a particularly simple example because every field name is identical between the database table definition and the JSON field name used for the web service payload. In real-world scenarios, the web service might have to convert between conflicting names for every one of those fields in both the request and response directions, requiring significantly more requirements discovery, development and testing work. For modern systems, there will be handfuls of service endpoints for literally HUNDREDS of crucial business data objects requiring this type of painstaking integration work. These types of hard-coded "application programming interfaces" (APIs) and the costs associated with mediating between them act as a massive barrier to exit for a company that has already paid to link complex systems together.
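The mediation work described above can be sketched in a few lines. The vendor-side field names below are hypothetical, invented purely to illustrate the kind of naming conflicts an integration must reconcile for every field of every object:

```python
# Sketch of the field-name mediation a hard-coded integration must perform.
# The vendor-side names below are hypothetical examples of naming conflicts
# between two systems describing the same Certificate of Deposit.

# Map internal field names (from the `cds` table) to a vendor's names.
INTERNAL_TO_VENDOR = {
    "cusipid": "securityCusip",
    "depositamount": "principalAmt",
    "annualpercentagerate": "apr",
    "maturedate": "maturityDate",
}

def to_vendor_payload(record):
    """Rename fields for the outbound request; unmapped fields pass through."""
    return {INTERNAL_TO_VENDOR.get(k, k): v for k, v in record.items()}

def from_vendor_payload(payload):
    """Reverse the mapping for the inbound response."""
    vendor_to_internal = {v: k for k, v in INTERNAL_TO_VENDOR.items()}
    return {vendor_to_internal.get(k, k): v for k, v in payload.items()}

cd = {"cusipid": "DS1240098", "depositamount": 40000, "termmonths": 12}
out = to_vendor_payload(cd)
print(out)  # {'securityCusip': 'DS1240098', 'principalAmt': 40000, 'termmonths': 12}
print(from_vendor_payload(out) == cd)  # True
```

Multiply that mapping table by hundreds of business objects, two directions, and the discovery and testing behind each entry, and the accumulated sunk cost IS the barrier to exit.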

Vendors of incredibly complicated, expensive systems KNOW this and rely upon it to keep existing customers trapped. Vendors know this not only protects existing revenue, it also gives them latitude to raise prices every year or every contract period because the customer is reluctant to jettison the system. The customer isn't reluctant to jettison the system because they LIKE it or because it DOES everything they need; they just loathe the thought of spending tens of millions all over again to swap it out for something else.

AI systems have no such barrier. To the extent they deliver on their promise to accept plain language requests for complicated tasks and do the background heavy lifting, they overcome these barriers to exit. But the AI provider itself is providing this essentially unrestricted API to the user who can ask it to talk to any other system and sort out the details. This leads to two crucial conclusions.

The first conclusion is that by their very nature, AI systems result in "switching costs" that are nearly zero. If a major customer purchases services from AIProviderX and uses X to integrate ten systems together, the users at that customer can just as easily supply the same prompts to the system from AIProviderY, have it generate similar if not identical integrations, then switch their general AI consumption from X to Y. And compared to olden days (five years ago) when such integrations might have taken three years to implement, these AI integrations could presumably be devised, tested and implemented in a few months, then switched on nearly instantly.

That first conclusion leads to a less obvious but perhaps more financially disastrous risk for AI providers. The fact that an AI provider's demand could drop to nearly zero in the blink of an eye is bad enough. Making that volatility worse is the fact that virtually every AI vendor is selling its services on a USAGE basis with monthly minimums and caps. AI services are NOT being sold under older enterprise software licensing terms, which are typically based upon either a) the number of user "seats", b) the number of employees whose data is under management by the system or c) the annual revenue of the customer. Those older models provide great revenue continuity for providers. Selling AI services in increments of 1,000 or 1,000,000 tokens (when one task can consume hundreds of thousands of tokens) means REVENUE can literally vanish overnight if the customer switches to another provider.
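The revenue-continuity gap between the two pricing models is easy to make concrete. The prices and volumes below are hypothetical, chosen only so the two models produce similar annual revenue:

```python
# Toy comparison of revenue continuity under seat-based vs. usage-based
# pricing. All prices and volumes are hypothetical.

SEAT_PRICE_PER_YEAR = 1200        # annual contract price per user seat
TOKEN_PRICE_PER_MILLION = 10.0    # metered price per 1,000,000 tokens

def seat_revenue(seats):
    """Committed for the contract year regardless of day-to-day usage."""
    return seats * SEAT_PRICE_PER_YEAR

def usage_revenue(tokens):
    """Earned only while the customer keeps sending traffic."""
    return tokens / 1_000_000 * TOKEN_PRICE_PER_MILLION

# A customer with 5,000 seats, or the same customer consuming
# 600 billion tokens per year -- similar annual revenue either way.
print(seat_revenue(5_000))              # 6000000 committed by contract
print(usage_revenue(600_000_000_000))   # 6000000.0 metered as it happens

# If the customer switches providers mid-year, seat revenue remains
# contracted for the year; metered revenue stops the same day.
print(usage_revenue(0))                 # 0.0
```

Same nominal revenue, but one number is contractually committed and the other evaporates the moment the prompts are pointed at a competitor.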


Adding Yet Another Gotcha to the Pile

It's important to state the implications to the business models of the firms leading current AI development efforts in very stark terms. The current approach betting on brute force spending to increase capabilities and "capture market share" is perhaps the WORST business model to adopt. It's a business model

  • with astronomically high fixed capital costs
  • with capital assets that optimistically have a 5-7 year lifespan
  • with few barriers to entry for competitors
  • with few barriers to exit for would-be customers
  • and with cash flows offering nearly zero short-term revenue predictability

Added to the list of physical constraints at the start of this piece, this is yet another serious sign of the level of delusion in markets regarding AI technology and the firms pushing it. Will investors figure this out? Stay tuned.


WTH