Tuesday, October 14, 2025

Is AI Overpriced?

On March 5, 2001, Fortune published its March 12 edition, featuring a small blue semi-circle tease at the top of the cover that asked a simple question. The tease was for an article by Bethany McLean, who had discovered a problem more interesting than the fact that she, a writer with tremendous knowledge of business and finance, could not explain to her readers how Wall Street's latest darling, Enron, earned its profits. The problem she discovered was that no one in Enron's executive suite could explain how the company made its money either, leading to what was perhaps, until now, the most understated question in journalistic history… Is Enron Overpriced?

McLean's article described the evolution of the company from one producing 80% of its revenue from physical transportation of physical oil over physical pipelines to one where 95% of its "revenue" came from "wholesaling" of "energy operations and services" involving oil, gas, electricity and even wavelengths on fiber optic routes. The problem was that no one outside the firm with expertise in the energy or broadband sectors could actually explain, at any given point in Enron's accounting, WHERE a particular dollar was or where it came from. McLean was most concerned that while the firm had very opaque financials, the stock was trading at a very high multiple (a 55 price/earnings ratio) yet was generating returns of only 13%, and a mere 7% on its invested capital, at the very peak of the go-go internet bubble.

Did the experts in the market arise on March 5, 2001, brew a fresh pot of coffee, read McLean's article, then instantly scream "OMFG, get me out of this stock before anyone else reads this story"? Not at all. The stock had been trading between $68.50 and $84.00 since January 1, 2001. It closed at $70.90 on March 5, dropped as low as $54.00 on April 4, and traded between $50.00 and $63.60 from March 5 to July 20. Only after July 20, 2001 did the stock begin steadily sliding, from $49 to $45, to $39, to $30 and beyond. Hardly the informed panic one would expect looking back.

What finally tanked the company into bankruptcy? After the world was distracted from Enron's looming financial disaster by the terrorist attacks on September 11, attention returned to Enron when it announced that its financial statements from 1997 to 2000 all required corrections due to accounting errors. The SEC announced investigations into a multitude of insider transactions by Enron executives and into the bizarre paper accounting entities that had created tens of millions of dollars in income for those executives. By November, the firm was attempting to raise additional cash to stay afloat; it secured a loan from another energy firm, Dynegy, and even began considering a buyout by that firm. By late November, Enron had burned through all of the cash recently raised, its credit rating had been downgraded again, Dynegy pulled out of the buyout talks and the stock finally collapsed.

The real mystery is why Enron's death took nearly NINE MONTHS to come about after such basic concerns were identified in plain English in the pages of a leading business magazine. This question is particularly salient since McLean's interest was triggered by an even earlier story published in the Texas edition of The Wall Street Journal in September of 2000. The writer of that story followed the broadband industry and noted a peculiarity: despite the fact that the telecommunications sector focused on broadband backbone fiber connectivity was in freefall due to over-investment and over-supply, an Enron affiliate focused on "trading" broadband futures was still claiming high profits.

Cutting edge technologies. Revolutionary business models. Hundreds of billions in capital investments. Billions in current revenue. Tens / hundreds of billions in promised future revenues. Over-investment. Over-supply.

Why are these themes ringing a bell?

Imagine it is March 5, 2001 all over again, and instead of just one article by one reporter in one magazine clanging the alarm bell with a story like "Is Enron Overpriced?", you wake up to FIVE different stories in the media, all asking the same question but coming at it from different directions. Would it still seem wise to leave it to the professionals on Wall Street to sort out, and to wait indefinitely before reviewing how your personal investments and your overall career and economic prospects depend upon the answer?


Is AI Overpriced?

The "Enron" question for 2025 centers around Artificial Intelligence (AI).

Is the AI bubble real? Or, stealing from the original gangster example of true business journalism, is AI overpriced?

Absolutely.

The week of October 5, 2025 seemed to produce a flood of stories and blog/video commentary involving the basic question "is AI a bubble?" Is this just the beehive effect of "content creators" and pundits "piling on" by creating MORE content about topics drawing clicks and likes in an intellectual echo chamber? Possibly. However, the reasoning behind the commentary is more diverse and stems from papers published in August of 2025 as well as recent financial news. When one story connects a set of dots, draws a line through them and goes from point Nirvana to point Trouble, that's an interesting take worth a review. When multiple stories come out looking at DIFFERENT sets of points on DIFFERENT PLANES and draw lines through them all landing on the same spot – Trouble – the situation merits much more attention.

Recent stories regarding AI seem anchored around these key themes:

Implausible Rates of Return on Existing AI Capital Expenditures -- The total amount invested so far across the multiple companies competing in the AI space is around $540 billion. Yet while the number of humans USING artificial-intelligence-based systems is high, reflecting one of the fastest adoption rates of any technology, actual REVENUE from those users is nowhere near the levels required to sustain operations of that physical investment, much less increase it.

Horrible Returns for Production Deployment -- MIT released a study in August 2025 that found 95 percent of attempts in the past two years to deploy systems based on generative AI technology failed to produce any meaningful increase in revenue or productivity.

https://www.artificialintelligence-news.com/wp-content/uploads/2025/08/ai_report_2025.pdf

This wasn't a report written by generic business or technology reporters based on a few anecdotal interviews with businesses. The study was written by researchers in MIT's Media Lab who reviewed 300 public reports of AI initiatives, conducted structured interviews with 52 organizations and collected survey responses from 153 senior leaders across four industry sectors. The conclusion of those participants was essentially "we're not seeing the payback."

Circular Contracts / Revenue Manipulation -- Many commentators raised questions about recent deals announced between OpenAI, chipmaker AMD, Oracle and NVIDIA. OpenAI is investing billions to develop the logic of AI algorithms and requires an enormous number of special purpose GPUs (Graphics Processing Units) optimized for matrix mathematics used in AI training and operations. Oracle is competing with Google and Microsoft to sell cloud computing capacity en masse that includes special purpose computer chassis housing GPU cards. NVIDIA designs and manufactures those GPU cards. Here are just some of the deals announced in recent weeks:

  • September 10, 2025 -- OpenAI signed a contract with Oracle worth $300 billion between 2027 and 2032, which equates to more than half of the additional data center compute OpenAI has previously announced it intends to build as part of its Stargate initiative – this agreement was publicized in September but actually reached in private in July 2025
  • Of course, to meet OpenAI's needs, all of that compute needs to be populated with GPU cards from NVIDIA (and maybe AMD). Industry watchers estimate Oracle has already purchased $40 billion worth of NVIDIA GPU cards for the compute it has turned up for OpenAI to date.
  • September 22, 2025 -- OpenAI signed a deal to buy $200 billion worth of NVIDIA GPU cards
  • September 24, 2025 -- As part of that deal, NVIDIA announced an agreement to "invest" $100 billion into OpenAI
  • October 6, 2025 -- OpenAI signed a deal to pay $100 billion to AMD for delivering five million graphics cards needed to expand OpenAI's training / operations infrastructure
  • At the same time, AMD announced a deal to give OpenAI $100 billion worth of AMD's shares – ten percent of AMD's shares – to "align their interests"

Dubious Product Launches and Enhancements -- Recent announcements involving AI-based technologies seem somewhat underwhelming. The improvements in accuracy and usability expected after the first launch of these tools in 2023 have not materialized. Systems still produce hallucinations, still have issues doing basic mathematics and still too readily reinforce dangerous inputs from users, steering them in harmful directions. Recent developments have instead added capabilities such as optimizations for creating TikTok-oriented simulated video or linking AIs into other systems to allow remote control and scripting of other functions. As one YouTuber summed up the trend, the snake is eating the snake. OpenAI is creating tools to generate "content" in such volumes that providers require MORE AI to scan uploaded content for malicious / harmful material, creating a feedback loop that produces more content than can possibly provide value for humans while consuming vast amounts of energy from sources that are destroying the planet.

Shifts in Hardware Strategy for AI -- Dell Computer and other makers are planning to launch new AI-optimized computer platforms aimed at individual developers who use AI algorithms to develop custom-trained solutions for specific business tasks. Dell's two new models use new GPU designs from NVIDIA's Grace Blackwell architecture, denoted the GB10 and GB300. Dell's Pro Max GB10 model fits inside a micro chassis not much bigger than a human hand. Dell's Pro Max GB300 is the size of a more traditional tower-style computer. The smaller GB10 features a unique design – it provides 128 gigabytes of RAM unified for use by the GPU processing and main computer, eliminating the bottleneck of moving large blocks of data BETWEEN GPU memory and main memory. The larger GB300 still separates GPU and main memory but provides WHOPPING amounts of both – 496 GB of main memory and 288 GB of GPU memory.

The key behind these new offerings is that the prices for these models are expected to be far less than NVIDIA's data-center-oriented products that start at $20,000. Sources hint that the GB10 model might be priced between $3,000 and $4,000. (That is a PHENOMENAL price point for that much horsepower.) These units don't require beefed-up power service or special cooling arrangements. They are literally designed to run on a desktop in an office or factory. More importantly, the hardware reflects a growing recognition that viable AI solutions will not and should not require supercomputer-scale hardware for training or operation. For AI systems to be viable for real-world problems, they must require less memory and compute and be capable of running "at the edge" where the work is, rather than relaying enormous streams of data to a central cloud. The solution must be brought to the problem rather than the other way around.

The emergence of this new hardware paradigm is a warning sign that the supercomputer-monolith-in-the-cloud strategy being pursued by most AI vendors is already under threat. It doesn't take much digging into these top-level themes to point out more dangers looming in the AI sector.


Adoption Mythology and Reality

The MIT study and many commentaries from the past year make one irrefutable point. Virtually no organization has found a way to utilize AI in a way that increases revenue, saves money or improves delivery intervals. The report is crystal clear on why this is the case. From the report's introduction,

Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprise-grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.

Those are staggering findings, yet none of them are remotely surprising to anyone who has worked in Information Technology within large corporations. AI systems can speed up the generation of a pie chart needed for some executive presentation, but generating that chart with AI won't improve the firm's profit and loss statement. Why? Because the executive meeting USING that pie chart does nothing to improve the P&L. All the time spent by underlings compiling materials for most executive meetings is just a recurring time-suck for people who could be doing other things or who (frankly) could be eliminated if their sole job is Executive Director of PowerPoint Engineering.

The high adoption rate within corporations likely has less to do with employees targeting processes well suited to AI and more to do with employees wanting to experiment with the technology on their company's dime to keep their own resumes current in case they get laid off. Now eighty percent of corporate developers can say they have "experience" with AI, but virtually none can cite anything productive they've done with it.

Giant software firms like Oracle, SAP, salesforce.com and others competing in the human resources and enterprise resource planning sectors have twenty-year track records of selling systems that cost tens of millions of dollars PER YEAR in licensing, require tens of millions more in custom software development before they do anything out of the box, then miss cost estimates by 50-100%, miss delivery estimates by 50-100% and under-deliver functionality. Attempting to inject AI processing into these already massively complicated integration projects is a guarantee of failure.

One of the best examples of the insanity of AI adoption is the Google search page. The usefulness of the page has declined steadily over the last decade, not because the underlying technology plateaued and cannot keep up with the volume of web sites and content, but because Google stuffed results with ads that obscured better results. Existing technology for searching and indexing is EXTREMELY efficient from an energy and elapsed-time perspective; Google is just bastardizing it with additional noise. Using an AI system built on a large language model to answer web searches, however, is an extremely INEFFICIENT use of computing resources. Some engineers have estimated the compute consumed to return a result via an AI is at least 5x that required via a traditional index solution. Yet when AI technology was first pushed to the public in 2023, Google didn't REPLACE its old index-based search with AI results, it ADDED the AI results atop the existing index results. Instead of going from 1X compute utilization for results to 5X, Google is essentially consuming 6X the prior compute resources for BILLIONS of searches every day.

Google's strategy doesn't even allow users to opt out of having AI results generated for their searches. Offering that option might give Google useful insight into how much its AI is annoying some users and trigger a rethink of its strategy. Instead, Google forces every search through its AI and displays the AI results because it values the feedback being baked into subsequent training rounds of its AI more than it values giving users what they want. This makes perfect sense when one is reminded that Google users aren't Google's CUSTOMERS, they are the PRODUCT being sold.


Massive Mal-Investment

Given the conclusions of the MIT study on actual deployments of AI, the flawed economics of the investments being made in AI in general, and in large language models in particular, become more apparent. The flaws emerge at multiple layers of abstraction. In the immediate term, a corporation may be willing to pay $200/month/person for ChatGPT subscriptions for 100 developers to more quickly cobble together working prototypes of new code modules. For developers making $150,000/year (about $72/hour), if that subscription saves a developer three hours of labor per month, it has paid for itself. However, ChatGPT cannot architect an entire software SYSTEM that integrates data feeds from ten different applications, normalizes the field names and data encodings, applies transformations, then writes code to redistribute the data into separate databases for customer support, operations and analytics. Vendors proposing solution designs claiming to do this are already being rejected, so the volume of $5 million or $20 million projects needed to justify enterprise adoption of AI does not exist.
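The per-seat subscription arithmetic above is easy to sanity-check. A minimal sketch in Python using the article's figures ($200/month subscription, $150,000/year salary), plus one assumption of mine, a standard 2,080-hour work year:

```python
def breakeven_hours(subscription_per_month: float,
                    salary_per_year: float,
                    work_hours_per_year: float = 2080) -> float:
    """Hours of labor the tool must save each month to pay for itself."""
    hourly_rate = salary_per_year / work_hours_per_year
    return subscription_per_month / hourly_rate

hourly = 150_000 / 2080
hours = breakeven_hours(200, 150_000)
print(f"Hourly rate: ${hourly:.2f}")           # ≈ $72/hour, as in the text
print(f"Break-even: {hours:.2f} hours/month")  # just under 3 hours saved
```

At roughly 2.8 hours of saved labor per month, the three hours cited above clears the bar, which is exactly why the per-seat case can work even while the system-level case fails.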

If no vendors are succeeding at selling new solutions incorporating large-scale AI processing as part of multi-year contracts, why would investors in firms like Oracle, Microsoft, Google or Amazon continue to assume massive growth in hosted computing service revenue for actual end customers? Answering this question requires addressing two forms of existential risk to all of the firms participating in this bubble.

The first existential risk is purely related to timing. Investors in the markets can be tricked for brief periods of time but the greed cycle inevitably creates a "sure thing" dynamic from any company growing due to sheer speculation. That "sure thing" dynamic eventually destroys any patience in waiting for a long-term payoff as those most recently joining the party become the greediest and most highly leveraged in their bets. OpenAI cannot continue spending $100 billion or more yearly. If OpenAI continues to add hundreds of billions of dollars worth of capacity per year and, after multiple years, ninety percent of that compute is being consumed by more training of new releases of AI rather than USERS of those releases, that operating model will collapse.

OpenAI can only continue buying hardware and cloud services as long as someone is lending it money or investing equity through private offerings betting on a long-term success. As that success point moves continually into the future or becomes clearly visible as unattainable, the incoming dollars from lenders and stock speculation will vanish and the firm will collapse. The greater the mismatch between the traded value just prior to collapse and the remaining "intrinsic" value of the company, the greater the likelihood the engineers doing the real innovative design work will flee the collapsing company and go somewhere else, leaving OpenAI with nothing but a logo and empty cubicles. Their intellectual property will scatter to competitors overnight.

The second existential risk to these players lies with their technical strategy. At least ninety percent of all of this investment appears to be bet on a single technology – large language models. Within that LLM realm, OpenAI seems to be betting on improving performance through the sheer volume of training data ingested and the compute available during training and ongoing operations. There are limits to the amount of human data available electronically that can provide "truth" to LLMs without duplication or contamination. Given that OpenAI's models were trained in large part upon copyrighted material for which access rights were NOT legally obtained, it is likely OpenAI has already exhausted the pool of viable training data that will improve results.

In contrast, the first AI publicly launched in China by DeepSeek focused on improvements to feedback loops within training termed reinforcement learning. This approach reduces the compute resources needed by orders of magnitude, both for training and ongoing operations. DeepSeek's first system, termed R1 and released in January 2025, matched ChatGPT's then-current release while requiring a mere $6 million in training compute. That's $6 MILLION instead of $100 MILLION or $1 BILLION for ChatGPT releases. That disparity isn't a mere shot over the proverbial bow from a foreign adversary. That disparity is a sign that continuing OpenAI's brute-force approach to improving AI will bankrupt anyone sticking with that strategy. If the technology ever DOES work for businesses, those businesses won't stick with the vendor that blew through $100 billion or $500 billion perfecting it. They'll switch to a vendor that delivers the same result for pennies on the dollar.


Bubbles of Yesteryear

To better explain the risks of the current bubble, it's helpful to summarize the dynamics of the prior Internet bubble first for comparison. That "bubble" wasn't corrupt from the start, it evolved over multiple years and phases of market evolution and financial speculation. In a nutshell,

  • The bubble started circa 1993 with adoption of TCP/IP for large-scale networks
  • Adoption of HTTP protocol triggered invention of the Netscape browser, which simplified internet use for non-technical customers
  • Explosion in demand for dial-up triggered growth in Internet Service Providers (ISPs) who purchased access lines from legacy telcos (RBOCs) and upstart competitors (CLECs) enabled by the Telecommunications Act of 1996
  • Net-new demand for connectivity and duplicative demand from CLECs competing with RBOCs increased sales for telecom gear, driving up share prices and starting speculation as large firms like Lucent and Nortel Networks began reporting tremendous sales growth and profits
  • The first wave of jaw-dropping (but legitimate) growth with legacy gear makers expanded into newer startups selling newer generations of gear optimized for pure IP networks
  • Established gear makers pocketing profits hand over fist realize they can trigger more demand and sales by using profits to lend money to more competing CLECs and ISPs to continue building out networks
  • Circa 1997, cable companies began competing with telcos for internet customers using equipment that could exceed 56 kbps dial-up speeds and provide always-on connectivity, generating another wave of demand for core IP network gear
  • Startup gear makers, seeing other internet firms capture huge stock-price jumps at IPO, realized they could goose demand for their new products by offering customers pre-IPO shares, allowing the customer to capture some of that wealth in exchange for signing purchase agreements that then became advertising for why the startup's own IPO should spike – a recursive feedback loop

This final stage, in which gear makers goosed their own sales by lending customers money to buy their gear or by giving executives of customer firms pre-IPO shares to cash out and pocket, ran from about early 1998 to late 1999. Not EVERY deal during that period was rigged purely to manipulate growth figures for gear makers, but a SIGNIFICANT number of deals struck during that period never had any legitimate business plan for achieving profitability. The startups had venture capital money, the gear makers wanted to feed their own exponential growth and had additional money to lend, and everything during that period was growing to the moon. What could go wrong?

At least a few of the larger gear makers started seeing through their own charade and halted the scheme by late 1999 but, by then, the damage had been done, resulting in overcapacity across the entire industry, especially in the realm of long-haul fiber networks and the fiber optic gear driving that fiber. Reality started to emerge in 1Q2000 quarterly reports, and stocks incurred their first big drop on April 3, 2000, when a negative antitrust ruling against Microsoft was issued, tanking its stock 15% and tanking the larger, technology-dominated NASDAQ index by 8% in a single day.

During that bubble era, the deals struck between gear makers and customers were typically between $20 and $50 million. Multiply sales of that magnitude across dozens of such deals and they still only amount to perhaps $480 million over a year or two. Adjusting for inflation from 1999 to 2025, that estimate of the squandered "bubble" spending is roughly $1 billion. There was more bubble spending in the software sector, but this amount is sufficiently illustrative for this analysis.
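That inflation adjustment can be reproduced with simple compounding. A sketch under one assumption of mine, an average US inflation rate of about 2.9% per year over the 26 years from 1999 to 2025 (the text gives only the rounded endpoint):

```python
def adjust_for_inflation(amount: float, years: int, annual_rate: float) -> float:
    """Compound a historical dollar amount forward at a flat annual rate."""
    return amount * (1 + annual_rate) ** years

bubble_1999 = 480e6  # ~$480 million of bubble-era gear deals, per the text
bubble_2025 = adjust_for_inflation(bubble_1999, years=26, annual_rate=0.029)
print(f"≈ ${bubble_2025 / 1e9:.2f} billion in 2025 dollars")  # ≈ $1 billion
```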

Contrast that $1 billion in froth and fraud with the sales figures in the news today involving OpenAI. These deals are worth TENS and HUNDREDS of BILLIONS of dollars. These deals for mere gear are MULTIPLES of the valuations that used to trigger antitrust reviews for entire mergers in the 1990s. Executives claiming these current deals ARE worth orders of magnitude more than the deals of yesteryear because AI is so much more foundational to the future economy are simply "talking their book…" They are making forward-looking statements about their firm's prospects because it serves their personal interests to do so.


Massive Deals and Revenue Obfuscation

Do plausible, rational, legal motivations for these recent monster deals exist? Absolutely. But other, less benign rationales exist as well.

Consider the two-way $100 billion swap: graphics cards from AMD to OpenAI, and ten percent of AMD's shares from AMD to OpenAI. It could be that AMD is eager to sign up a high-volume, multi-year buyer of AMD-designed GPU cards so AMD can more rapidly progress through the learning curve of designing a card that competes with NVIDIA's. But if $100 billion in value will be delivered from AMD to OpenAI, is AMD really getting ahead if it is giving up ten percent of its shares to pocket the $100 billion being traded? What could ALSO be happening here is that OpenAI is purchasing significant voting control over AMD so it can ensure its ideas and preferences for GPU design are reflected in future AMD products. AMD is agreeing to surrender that control in exchange for a giant sale that it thinks will boost its stock price and create wealth for its execs and shareholders. But this payoff depends on OpenAI buying all of those cards over the period and not suddenly backing out of the volume commitment. At $20,000 per GPU, this deal also assumes no other AI software algorithm will evolve that reduces the compute needed for a given level of improvement in model performance, and that no competitor adopts such a model and makes OpenAI completely irrelevant if it refuses to adapt.

Consider the $200 billion OpenAI to NVIDIA deal and the $100 billion NVIDIA to OpenAI deal. OpenAI's current processing design is clearly based upon NVIDIA's proprietary GPUs and larger server chassis infrastructure that allow multiple GPUs to be networked together for higher performance. If OpenAI knows it will be adding more of this exact same infrastructure in such huge quantities, it makes sense to sign a deal with that maker to essentially buy space on their factory calendar to ensure that capacity is devoted to building YOUR units versus units for your competitors. The fabrication plants making these GPUs cost billions and take months to turn up to production so smoothing the demand curve so the supplier can more carefully plan delivery makes sense.

So why is $100 billion coming back from NVIDIA into OpenAI in the form of an "investment?" Essentially, NVIDIA is accepting OpenAI shares as currency to settle this purchase deal. NVIDIA executives are gambling $100 billion in OpenAI shares NOW will be worth FAR MORE than $100 billion in two or three years. But this is where shareholders of both parties to the deal should have concerns.

First, it must be stated up front that OpenAI is not publicly traded. That means it isn't exactly clear how NVIDIA is "investing" in OpenAI, and it isn't known to any outsider how the $100 billion valuation of that investment will be quantified. When you're talking about $100 billion, that alone should be a red flag even if the parties involved are Mother Teresa and Honest Abe Lincoln. Second, if it isn't clear how the $100 billion investment is being quantified, it is equally impossible for NVIDIA shareholders to determine the price at which those GPU units were sold to OpenAI. As one hint at the implications of this deal, OpenAI just completed a private stock offering to raise more cash. That private sale collected about $6 billion for some quantity of shares, which extrapolated to a market valuation for the entire company of around $500 billion. If that metric is correct, NVIDIA is now holding a stake worth roughly twenty percent of OpenAI. If true, that is a REMARKABLE dilution of control and value for existing private OpenAI shareholders.

For illustrative purposes, assume the first deal was $200 billion for 10 million GPUs worth $20,000/each on the street. With no other deal and only cash coming from OpenAI to NVIDIA, shareholders of NVIDIA know they just collected $200 billion in revenue, unambiguously. GREAT.

If the $100 billion investment is added to the calculation, the picture changes. If $100 billion leaves NVIDIA's treasury in exchange for 200,000,000 OpenAI shares priced at $500/share, then NVIDIA has collected only $100 billion in net cash from OpenAI, plus paper nominally worth $100 billion that it paid $100 billion in cash for. If those shares become worthless, NVIDIA shareholders have essentially lost $100 billion of the $200 billion from the original sale. If the shares merely hold their issue price, NVIDIA is nominally whole, but half the sale now sits as illiquid paper in a single private customer rather than cash in the treasury. Only if OpenAI shares appreciate, doubling from $500 to $1,000 or climbing to $1,500, does the deal clearly beat a straight cash sale, as if OpenAI had overpaid for the GPUs, with NVIDIA profiting from accepting a customer's stock in lieu of cash.

But here's the problem with accepting customer stock as payment. If OpenAI shares drop, NVIDIA investors are looking at a different reality. In a market where NVIDIA could have sold that $200 billion pile of GPUs to nearly anyone for cash and faced zero "financing risk" with its customer, it instead collected something short of $200 billion. If OpenAI went completely bankrupt, NVIDIA would have surrendered $100 billion in revenue by taking the chance of accepting stock in lieu of cash. But, but, but… you say… surely if OpenAI stock tanks, there will still be so much other AI demand for NVIDIA products that they will still be able to sell their inventory, right?
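The stock-for-payment scenarios can be tabulated directly. This is an illustrative sketch using the hypothetical numbers from the text ($200 billion sale, $100 billion reinvested in shares issued at $500). It marks the stake to market; in the pessimistic reading above, the paper may not be sellable at anything close to its marked value:

```python
def net_position(sale: float, invested: float,
                 issue_price: float, current_price: float) -> float:
    """Cash retained from the sale plus mark-to-market value of the stake."""
    cash = sale - invested
    shares = invested / issue_price
    return cash + shares * current_price

# Compare each scenario against the $200B an all-cash sale would have banked.
for price in (0, 500, 1000, 1500):
    pos = net_position(sale=200e9, invested=100e9,
                       issue_price=500, current_price=price)
    print(f"OpenAI share price ${price:>4}: ${pos / 1e9:.0f}B vs $200B all-cash")
```

On this accounting, a total wipeout leaves the seller $100 billion short of an all-cash sale, and the paper only outperforms cash if the shares appreciate and can actually be sold.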

Not necessarily. The product involved starts at $20,000 per unit. Consumers cannot afford them, the units only operate within proprietary chassis costing north of $50,000, and their power consumption and heat generation require special power and cooling logistics. If OpenAI's stock valuation drops precipitously, it will stem from the fact that competitors figured out how to replicate its capabilities for one half or one quarter or one tenth the cost. At that point, demand for these cards will drop industry-wide simultaneously, jeopardizing revenue flows into NVIDIA across the board. It is NEVER a sure thing to bet on future revenue growth 2-3 years out in a segment where intellectual property can leapfrog billions in capital in a week. There's no safety in really large financial figures when really bad financial strategies are involved. (Go back and re-read the story about the new GB10 and GB300 models being released by Dell and others. The de-emphasis of supercomputer-scale hardware is already underway.)

Stated more concisely, shareholders of a firm that decides to accept customer stock in lieu of cash payments need to carefully re-evaluate the nature of the business they WANTED to invest in and the nature of the business they are NOW actually invested in. A technology firm making expensive GPU cards and server network gear that decides to begin accepting stock certificates in lieu of cash from customers is no longer just a technology company. It is a technology company operating a bank, a credit bureau and a hedge fund on the side, exposing the firm's operations and its shareholders' investments to risks in disciplines that may be FAR beyond the ability of its technical leadership to understand and manage.

Consider the OpenAI and Oracle deal worth $300 billion between 2027 and 2032. It could be that OpenAI wants to diversify physical hosting away from current vendors Microsoft and Google. When you consume BILLIONS of dollars' worth of a commoditized product like cloud hosting and storage, you are already scaled at each provider to the point where they could never achieve more efficiencies even if you gave them ALL of your workload. By spreading that workload across multiple vendors, you are pitting them against each other every day on price, helping to keep your costs in check. That makes perfect sense. However, note that while this deal was ANNOUNCED in September, it was actually signed in private in July. When the deal was announced September 10, Oracle's stock price jumped from $241 to $328, a stunning 36 percent jump for a firm already carrying a capitalization of $687 billion. How many people within Oracle and OpenAI had knowledge of this pending deal for nearly two months? Is it not conceivable that SOMEONE in one of those camps set up purchases of Oracle stock in the weeks prior to the public announcement and essentially captured Oracle stock at a 26% discount? It's not like the Trump SEC is expending serious effort enforcing securities regulations.
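The two percentages in that paragraph come from the same price move, just measured against different bases (a quick check using the rounded prices quoted above):

```python
pre_announcement = 241.0   # Oracle close before the September 10 news
post_announcement = 328.0  # Oracle close after the announcement

move = post_announcement - pre_announcement
jump = move / pre_announcement       # gain for anyone who bought early: ~36%
discount = move / post_announcement  # discount vs. the post-news price: ~26.5%
```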

A core concern shared across all of these deals involving equity stakes changing hands as currency is that these are not mere million-dollar deals. These are deals worth tens or hundreds of billions of dollars. Publicly traded companies have no control over who buys up their public shares. If an activist shareholder or technology genius who thinks they know how to run the firm better than current management wants to begin buying up shares on the open market to accumulate a five or ten or twenty percent stake, there is nothing the CEO and the board can do to stop that. But if the executives and the board themselves decide to give away a ten percent share of the company to another party as part of a sales deal, those executives and board members are voluntarily surrendering a significant share of control to a single party, altering the power structure within the firm's governance. Doing that without the shareholders at large agreeing to it as a business strategy seems fraught with moral and fiduciary risks.

If the wisdom behind this rat's nest of tangled transactions wasn't doubtful already, it's worth noting that one of the other investors in OpenAI's multi-year "Stargate" program to build out data center computing resources for AI processing is SoftBank. SoftBank also owns ARM, which designs energy-efficient CPUs, so SoftBank is hoping its $40 billion investment in OpenAI will drive demand and adoption for ARM technology. However, SoftBank's track record for investments includes some expensive missteps. It invested in WeWork, a textbook bubble stock that emerged from bankruptcy in 2024, but not before destroying $16 billion invested by SoftBank. Earlier, it bought control of Sprint, then starved it of capital and technical leadership during the migration to LTE networks, leaving Sprint crippled in the marketplace, where it was eventually picked up by T-Mobile at a discount.


Macroeconomic Impacts

After decades of corruption and inefficiencies brought about by Gilded Age business practices, the Sherman Antitrust Act was enacted in 1890 to give the US federal government the means to break up monopolies found guilty of abusing market power to reduce choices, cut supply and raise prices, maximizing corporate profits at the expense of the larger society. As late as the 1990s, it was routine for firms in sectors already prone to monopolistic behavior, such as telecommunications, media and energy, to undergo antitrust reviews any time mergers were considered or when the firms were accused of particularly egregious market behavior. The dollar figures that typically triggered these reviews seem positively quaint now, even adjusted for inflation. In 1996, SBC's purchase of PacBell was a $16.7 billion deal ($33 billion in 2025) and was reviewed for over a year. Ten years later, when SBC/AT&T bought BellSouth in 2006, that deal was valued at $86 billion ($172 billion now) and still underwent nine months of review.

In 2025, billion dollar mergers still get reviewed at a cursory level for particular partisan hot buttons but are typically approved after the appropriate interests are placated. Mere sales transactions between firms, however, seem to escape any review by regulators, regardless of the staggering dollar figures involved. That's a problem because the structure of these deals creates numerous opportunities for fraud that harms shareholders and the general public alike at magnitudes unimaginable during the Internet bubble of 1997 to 2000.

For perspective, during World War II, the United States spent nearly $2 billion (about $35 billion in 2025) over three years across numerous facilities to develop the first nuclear weapons, used to end the war. In 2025, we have contracts being signed between companies worth nearly TEN TIMES the investment in the Manhattan Project, developing capabilities initially controlled by private parties with zero review or control by the federal government. On the surface, with only milliseconds of thought, that sounds like an inherently bad idea.

Any one of these factors operating by itself would be adequate justification for imposing regulations to better control how AI technology evolves and is applied throughout the economy. When the impact of these factors is viewed as a set, the macroeconomic impact becomes far more damning.

Several commentators across different platforms, including Steve Eisman, one of the key investors depicted in The Big Short who spotted the looming 2008 financial crisis, have pointed out the following concerns:

  • The US economy has a GDP of roughly $29 trillion dollars
  • Best estimates for growth in the US economy for 2025 are about 1.8%, worth about $522 billion in a $29 trillion dollar economy
  • All of the announced spending by OpenAI, NVIDIA, AMD, Oracle and Amazon on AI in 2025 adds up to about $450 billion dollars
  • Which means virtually ALL growth in the entire US economy is essentially due to this AI spending
  • Which means nearly 100% of all of the country's growth and investment is being bet on a single technology (AI) in a single sector (information technology)
  • Which means virtually no other sectors in the economy are growing – growth is becoming impossible because AI is absorbing all investment capital from the economy
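The arithmetic behind those bullets is straightforward to verify (figures rounded as in the list above):

```python
gdp = 29e12           # US GDP, roughly $29 trillion
growth_rate = 0.018   # estimated 2025 growth of about 1.8%
ai_spend = 450e9      # announced 2025 AI spending across the named firms

projected_growth = gdp * growth_rate              # about $522 billion of new output
ai_share_of_growth = ai_spend / projected_growth  # about 86% of all growth
```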

But wait, it's worse than that. This over-investment is artificially inflating public measures for economic health, most notably the S&P 500 and NASDAQ stock indexes. Seven stocks in the S&P 500 – NVIDIA, Microsoft, Apple, Alphabet, Amazon, Meta and Broadcom – account for over fifty percent of all gains in the entire index. From a market capitalization standpoint, those same seven stocks are worth roughly __ of the entire index. Again, note that OpenAI is not publicly traded. Its shares are being sold in private offerings, primarily to employees and existing investors. In the most recent private sale, the shares sold raised about $6 billion in cash, and the price per share paid by investors in that offering equated to a total company valuation of nearly $500 billion. But again, a firm notionally worth $500 billion is not publicly traded, which means none of its internal accounting is discernible by external parties.

Why is this distortion of market indexes important? Here's a fun fact... In the first Internet bubble collapse, the failure of companies that borrowed millions to buy fiber gear from Nortel not only bankrupted Nortel, it nearly bankrupted the entire national pension system of Canada. Canada's pension rules required the system to invest in indexed funds matching the Toronto Stock Exchange index, which only held stock in firms located in Canada. As Nortel's stock rose from 1997 through 1999, the national pension fund was obligated to buy more and more shares of Nortel to maintain that weighting. It was like a forest fire creating its own tornado-force winds, drawing in more oxygen from a wider area to fuel the blaze, generated by funds that were REQUIRED to maintain that weighting.

At its peak in 2000, Nortel's capitalization was $398 billion, nearly a third of the entire Toronto Stock Exchange. By 2002, Nortel had collapsed to about $5 billion and limped for another seven years before declaring bankruptcy in 2009. That drop in value threatened the solvency of the entire pension system. In this AI bubble, the top seven stocks of the S&P 500 amount to nearly half the growth in value of the entire index. Seven out of five hundred. And six of those firms are part of this bubble. This is not a mere hypothetical worst-case possibility.

Any time buying behavior is automated by making a stock part of a widely held index, any tendency towards herd behavior in or out of a stock will become magnified. If the stock is one five-hundredth of a completely equally weighted index, the impact of that magnification is impossible to detect. When the stock is worth more than a couple of percentage points of the index, the distortions from institutionalized purchases are material and can worsen an already spiraling bubble cycle – in both directions, up or down.
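That magnification is easy to sketch with a toy example. The three-stock index and dollar figures below are entirely hypothetical; the point is only that a cap-weighted fund must steer each new contribution toward whatever has already risen:

```python
def cap_weights(caps):
    """Market-cap weights: each stock's share of the index's total capitalization."""
    total = sum(caps.values())
    return {name: cap / total for name, cap in caps.items()}

# A hypothetical three-stock index where "HOT" quadruples and the rest stay flat.
before = {"HOT": 100.0, "A": 100.0, "B": 100.0}
after = {"HOT": 400.0, "A": 100.0, "B": 100.0}

w_before = cap_weights(before)  # HOT is 1/3 of the index
w_after = cap_weights(after)    # HOT is now 2/3 of the index

# An index fund receiving $600 of new contributions must allocate by weight,
# so the rising stock automatically absorbs a growing share of fresh money.
buy_before = 600 * w_before["HOT"]  # roughly $200 into HOT
buy_after = 600 * w_after["HOT"]    # roughly $400 into HOT
```

The feedback runs the same way on the way down: a falling weight forces index funds to sell, which is why the distortion cuts in both directions.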




Now think back to the setup of this commentary. Imagine waking up one day to not just ONE but MULTIPLE commentaries and news stories describing a looming concern that involves not ONE company but MULTIPLE companies that are suddenly reporting huge shares of their revenue and profitability becoming dependent on deals with each other. Imagine all of this occurring in the context of a political and regulatory environment that is hellbent on actually PRODUCING chaos and uncertainty and encouraging its participants by engaging in equivalent forms of speculation.

Do you think these firms are overvalued?

Do you think the economy is being warped and damaged by this unprecedented level of mal-investment?

Is your portfolio or your career particularly exposed to these firms or possible fallout resulting from the collapse of this bubble?

These questions aren't just being asked by one reporter. They're being asked and addressed all over the interwebs. Read up on it now or read up on it after it happens but it seems very likely to happen.


WTH