Sunday, August 10, 2025

Foreshadowing in the Bond Market

The behavior of bonds as financial instruments and the mechanics of bond markets are among the most complex areas of finance and economics to understand, theorize about or explain to others. Bonds themselves are simple on the surface yet abstract away many complex financial, economic and political fears and involve exponential mathematics to derive their value. This complexity makes stories about bond markets ratings poison for media and leaves most of the public completely in the dark about how bond markets affect the larger economy and their individual finances. Even the portion of the public that follows stocks is largely unaware that the most commonly quoted factoids about the stock market's performance are grossly distorted. In the S&P 500 index, forty percent of the index's total value is concentrated in TEN of the FIVE HUNDRED stocks in the index, and most of those ten are tightly associated with AI technology, which is in a bubble.

Current economic conditions make this inscrutability of bond markets to average citizens a significant danger. Bond markets not only reflect a larger share of total wealth than stocks ($55 trillion for bonds versus $49 trillion for stocks), but the direct, exponential relationship between the value of bonds and inflation (feared and actual) means that bond markets act as a magnifier of the tiniest changes in direction in the economy, both domestically and worldwide. A review of events in a single week, the week of August 4, 2025, makes warning signs in the larger economy very clear and makes it very evident those dangers will trigger rising interest rates, credit contractions and a major threat to economic stability in the near term – months.

To step through this analysis, a short summary of the key events is provided first, followed by background on the mechanics of Treasury auctions for US debt, then a broader review of the dependencies that are lining up.


First, The News

Treasury Auctions -- The Treasury conducted normal sales of debt via routine on-line auctions the week of August 4, 2025, but market followers noted the results of several of the larger auctions were noticeably non-routine. Specifically, an auction on 8/6 selling $42 billion of 10-year notes and an auction on 8/7 selling $25 billion of 30-year bonds produced indicators of market interest and inflation expectations showing a much wider gap between bidders and the Treasury than expected. This was in spite of other economic news that would normally make the relative security of Treasury securities more attractive to investors. The weak results were likely aggravated by the fact that August 7, 2025 also included an auction of a record $100 billion in 4-week bills, following a $95 billion auction of 4-week bills just the week prior.

Declining Home Sales -- A variety of banks and trade associations tied to housing have issued public statements forecasting unit home sale volumes dropping to 30-year lows due to high mortgage rates, while others publicly state home sales may drop even with zero mortgage interest rates because buyers simply cannot afford existing or new homes. In the same period, stories also came out indicating that prices of condominiums are declining in virtually all cities that experienced a price bubble over the past five years. Price declines frequently reach ten to twenty percent from peak prices a year ago.

Auto Market Statistics -- Statistics for new and used car sales reflect major affordability problems for consumers. Average statistics for price ($47,000), monthly payment ($745) and term (68.6 months) are concerning enough. Within those averages, many purchases are hitting the $70,000-and-up range, which is triggering even longer loans – 19.8% of new loans opted for terms of 84 months or longer. Used car sales involve a much smaller share of purchases requiring loans (only 36.5% versus 80% of new car buyers) but those that did finance also opted for longer loan terms, averaging about the same as new cars at 67 months. Perhaps the most ominous number is the share of car loans delinquent by 90 days or more. That number historically averages about 3.5% but is now around 4.99%. In general, lenders find that auto loans are the LAST bill that customers stop paying because a repossession happens nearly immediately and no car means no ride to work, which means no paycheck, which jeopardizes everything.

For individuals following these stories about home sales and car sales amid a job market seemingly frozen amid layoffs by some of the most profitable firms on the planet, these should certainly trigger concern about events over the next few quarters. However, with a bit of insight into the mechanics of Treasury auctions and bond market psychology in general, the underlying reality seems even more concerning.


How the Treasury Auctions Debt

Most states have Constitutional language banning them from borrowing money for operations and requiring them to run with a balanced budget each year. The federal government obviously has no such restriction and is required to raise copious amounts of cash on a nearly daily basis for routine operations. These sales of debt are over-simplified when covered in the media and likely leave most people with the impression that the Treasury puts new debt on a plate, puts the plate on a picnic table outside the Treasury, rings a bell and the world comes and buys up whatever the Treasury decided to sell at whatever price the Treasury chose to set. Given the attention paid to budget approvals and debt ceilings, Americans could be forgiven for also assuming the Treasury calculates each new year's deficit, then goes out and essentially takes out a single new loan for that new deficit and adds it to the stack of IOUs currently totaling $37.212 trillion.

None of these assumptions are remotely true.

First, the Treasury sells new debt nearly every work day of every week throughout the year. A fiscal budget reflecting a $1 trillion deficit doesn't mean that all $1 trillion is needed immediately in cash, only over the course of the entire fiscal year. The Treasury divides up that new DEFICIT amount, factors in expected cash needs for upcoming bills, notes, bonds and coupon payments coming due and establishes its own internal schedule for when it wants to raise new cash via sales. Just as individuals take out different loans over different terms for different purposes (car loans, home improvement loans, college loans, home mortgage loans), the Treasury sells debt across a wide variety of maturities such as 4, 6, 8, 13, 17, 26 and 52 weeks (bills), 2, 3, 5, 7 and 10 years (notes) and 20 and 30 years (bonds). (Note: for clarity henceforth, all of these will be referred to as "bonds.") This gives the Treasury the flexibility to adjust future cash demands by altering maturity terms, choosing how much debt to pay off versus roll over and taking advantage of interest rates when they fluctuate to the Treasury's advantage.

For each individual sale of debt, the Treasury does not dictate the price of the bonds. Instead, it uses a process called a Dutch auction to sell new debt to the public. In a Dutch auction, the seller pits all potential buyers against each other by providing notice of the sale date, the total aggregate value of the sale and the number of bonds being sold. Bonds typically have $1000 face value denominations so a sale of $10 billion of debt would involve 10 million bonds. So that means the bond price is $1000 right? Wrong.

Because bonds are a debt instrument involving a promise to pay the face (par) value back on some future fixed date, the "price" of the bond is discussed with two related metrics – either its current price (which reflects the net present value of its interim coupon payments and the final payoff from the Treasury at maturity) or its yield (the effective interest rate used in the NPV calculation that results in the bond's current price).
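To make that price / yield relationship concrete, here is a minimal Python sketch of the net present value calculation (simplified: it ignores real-world day-count and accrued-interest conventions, so treat it as an illustration rather than actual Treasury pricing):

    def bond_price(face, coupon_rate, ytm, years, freq=2):
        """Price = NPV of the coupon stream plus the final principal payment."""
        r = ytm / freq                      # discount rate per coupon period
        n = years * freq                    # number of coupon periods
        coupon = face * coupon_rate / freq  # dollars paid each period
        pv_coupons = sum(coupon / (1 + r) ** t for t in range(1, n + 1))
        pv_principal = face / (1 + r) ** n
        return pv_coupons + pv_principal

    # A 10-year, $1000 par note with a 4% annual coupon, priced to yield 6%:
    print(round(bond_price(1000, 0.04, 0.06, 10), 2))   # 851.23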

When the Treasury opens the auction, anyone (literally anyone – primary dealers, large banks, investment firms and individual investors) can view the offering and submit a bid for a portion of it. In theory, each bidder evaluates the size and maturity term of the offering, examines interest rates available in the rest of the market from other borrowers, looks at the coupon rate on the Treasury's new offer, then calculates a price they believe adequately compensates them for the risk of lending the government that much money at that rate over that term. They then submit a bid for X units at P price. The bidding is typically open online for one hour; when it closes, the Treasury sorts all of the bids from highest price to lowest, accumulating the total number of bonds requested in each bid. When the running total of units reaches the total quantity of bonds in the offer, the price of THAT bid becomes EVERY winning bidder's price and the Treasury finalizes the sale. If that last bid was $851.23 on a $10 billion auction of $1000 bonds paying a 4% annual coupon in semiannual installments, the Treasury nets $8.512 billion from the sale while adding $10 billion to the nominal debt. That price of $851.23 reflects a rough consensus that buyers demand a yield to maturity of about 6% on that 10-year bond over its life.
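The clearing logic itself is easy to sketch in Python. This toy version uses invented bids and clears on price; real auctions are actually conducted in yield terms with competitive and noncompetitive tiers, but the single-price principle is the same:

    def clear_auction(bids, bonds_offered):
        """Sort bids from highest price to lowest and fill until the offering
        is exhausted; the last price needed becomes EVERY winner's price."""
        filled = 0
        for price, quantity in sorted(bids, reverse=True):
            filled += quantity
            if filled >= bonds_offered:
                return price                 # the stop-out price
        # In practice primary dealers must bid on any shortfall, so a true
        # failure should never be reached (more on that below).
        raise ValueError("auction failed: bids did not cover the offering")

    bids = [(998.10, 4_000_000), (995.50, 3_000_000),
            (851.23, 5_000_000), (820.00, 2_000_000)]
    print(clear_auction(bids, bonds_offered=10_000_000))   # 851.23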

With that background into the auction process, statistics resulting from that process and the psychology market watchers derive from those statistics can be explained.


Mapping Auction Statistics to Psychology

First, as stated earlier, the Treasury conducts auctions of new debt nearly every work day of every week and the amounts and terms of these bonds being issued vary widely based upon economic conditions. These statistics have been collected for DECADES and thus have been found to fall within certain bounds under all but the most unpredictable circumstances such as the September 11, 2001 attacks or specific days within the 2008 financial crisis. In short, professionals have a clear idea of what "normal" is for these auctions.

Second, it is truly the buyers, not the Treasury, that set the price at each auction, and that price dictates how much actual cash the Treasury will collect when closing the sale. The Treasury chooses the nominal dollar value of the entire issue and chooses the coupon rate paid on longer term notes and bonds, but the BUYERS ultimately do their own calculations based on their own assumptions about inflation and the risk of default by the Treasury versus other alternative investments and use that to set their bid price. This has two key implications.

  • If the Treasury's coupon rate is materially lower than expected inflation, bid prices will likely be significantly lower than face value, drastically reducing the net cash raised by the sale. This acts as a sign that the market does not agree with the Treasury's expectations or wishful thinking about inflation and interest rates and demands a higher premium. Since the Treasury dictates the coupon rate, the only way bidders have to collect a higher yield is to lower the price being bid for the bond.
  • If bidders bid prices significantly higher than the par value, that historically has been a sign of relative fear in markets, a presumption that Treasury debt is more secure than corporate debt or than leaving cash invested in stocks.

Third, buyers can not only signal disinterest in a particular issue by low-bidding on the offer, they can also just NOT BID. When the volume of bidders (or the net value of their bids as a fraction of the total sale) drops precipitously, inevitably lower sale prices will result. In theory it is possible for a Dutch auction to "fail", meaning the total value of all of the bids doesn't even equal the quantity of debt being sold (e.g. all bids only add up to $9.1 billion on a $10 billion auction). In reality, this doesn't happen because the larger bond market includes institutions given special privileges in exchange for acting as "primary dealers" which obligates them to act as market makers to provide liquidity for situations where demand does not equal supply. These primary dealers are required to step in and bid for any remaining bonds if bids haven't totaled up to the sale quantity.

Here are the key statistics gathered from Treasury auctions that are watched by professionals:

Bid-to-cover Ratio -- This is a ratio reflecting the extent to which total bids exceeded the amount offered for sale by the Treasury. Total bids typically exceed the nominal sale amount two- to three-fold, so lower numbers reflect a lack of interest in that offer which may indicate larger concerns about government policies regarding debt levels, inflation, monetary policy, etc.

Yield -- This is the effective return that will be earned by the buyer based upon the sale price, coupon rate and term. Bidders want this to be as high as possible (reflected by a lower sale price) while the Treasury wants it to be as low as possible (reflected by a higher sale price).

Allocation -- This statistic summarizes the share of the total offering bought up by different categories of buyers including primary dealers, direct bidders trading for their own benefit and indirect bidders acting as buyers for foreign entities. Ideally, demand is high enough that primary dealers never need to act in their market maker role so 100 percent of the sale maps to direct and indirect bidders. As the share of the sale purchased by primary dealers goes up, concerns rise as well about liquidity and the attractiveness of federal government debt.

Tail / Through -- When a particular issue first opens for bids, the Treasury publishes prices of the bids so everyone can gauge the general sentiment in the market for that offer. When the bidding is closed and a final price point identified, any drop between that initial opening price and the final price is termed a tail. Similarly, if the final price is higher than the opening price, that difference is termed a through. (NOTE: Conceptually, this perception gap could be reported as the +/- delta from the opening price to the final resulting price OR the +/- delta in the yield of the bond. In practice, tail / through is reported in terms of yield with units of "basis points", a basis point being 1 percent of 1 percent – an EXCEEDINGLY SMALL value.) These tail / through statistics provide another indication of the relative strength or weakness in demand for a particular issue based on the price levels being bid during the auction. Small values are essentially immaterial random noise but large values can be problematic. Large tails reflect unusually weak demand, which can reflect concerns about either the security of the issue or its yield being insufficient for current market conditions. Large throughs can reflect unexpected interest in divesting from corporate bonds or stocks in favor of Treasuries as a "lifeboat" ahead of expected financial turmoil.
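The arithmetic behind these statistics is simple enough to show in a few lines of Python. The inputs below are assumptions that loosely mirror the August 6 10-year auction discussed below; in particular, the "expected" yield is invented to reproduce the reported 0.9 basis point tail:

    def auction_stats(total_bids, amount_offered, dealer_take,
                      expected_yield, auction_yield):
        bid_to_cover = total_bids / amount_offered
        dealer_share = dealer_take / amount_offered
        # tail (positive) or through (negative) in basis points;
        # 1 bp = 0.01 percentage point, i.e. 1 percent of 1 percent
        tail_bp = (auction_yield - expected_yield) * 10_000
        return bid_to_cover, dealer_share, tail_bp

    btc, share, tail = auction_stats(total_bids=98.7e9, amount_offered=42e9,
                                     dealer_take=6.8e9,
                                     expected_yield=0.04211,
                                     auction_yield=0.04220)
    print(f"bid-to-cover {btc:.2f}, dealer share {share:.1%}, tail {tail:+.1f} bp")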


Re-examining These News Stories

With this background in Treasury auction mechanics and psychology, the news regarding recent auctions can be placed in a more understandable context. Remember those highlights:

  • Auction on August 6, 2025 of $42 billion of 10-year notes
  • Auction on August 7, 2025 of $25 billion of 30-year bonds
  • Auction on August 7, 2025 of a record $100 billion of 4-week bills

Based on the statistics of those three auctions, what did larger markets think about these auctions?

For the $42 billion 10-year auction, the bid-to-cover was quite low at around 2.35, and the resulting interest rate on 10-year notes rose 2 basis points from 4.20% to 4.22%. The "tail" on the auction was 0.9 basis points. The allocation showed primary dealers purchased 16.2% of the issue, up from an average of 14.2%.

For the $25 billion auction of 30-year bonds, the bid-to-cover was again low at 2.27 and the resulting yield on 30-year bonds rose to 4.813%. The "tail" was 2.1 basis points. The allocation showed primary dealers purchased 17.5% of the issue.

For the $100 billion auction of 4-week bills, the tail was 0.5 basis points – notable since interest rate expectations over an extremely short period of four weeks should be far easier to predict than for 10- and 30-year bonds. The allocation showed primary dealers purchased 32.1% of the offering. (The share sold to primary dealers is not consistent across maturities, so this statistic can only be compared to shares in auctions of identical terms.)

For reference, here is what the official Treasury news release looks like for a completed auction, using this $100 billion auction on August 7, 2025 as an example:

https://www.treasurydirect.gov/instit/annceresult/press/preanre/2025/R_20250807_1.pdf

In general, reaction in the markets was quite pessimistic, as evidenced by the relatively high share of bonds purchased by primary dealers and the significant mismatch, reflected in the tail amounts, between the interest rates the Treasury considered acceptable and the rates bidders demanded. Since a basis point is essentially a percentage point of a percentage point, these "basis point" spreads seem insignificant. However, the Treasury incorporates current interest rates, as reflected by prices of existing bonds of similar maturities, when setting coupon rates of new issues. Therefore, when actual bids come in at LOWER prices (which reflect a demand for HIGHER interest rates and yields), tails on new bond issues act as a warning from the market that it sees growing risk and expects the Treasury to price that risk into future issues.
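To see why professionals sweat over a couple of basis points, consider a rough sketch of what a 2.1 basis point tail costs on a $25 billion 30-year sale. The 4.75% coupon and both yields below are assumptions for illustration, not the actual auction terms:

    def bond_price(face, coupon_rate, ytm, years, freq=2):
        # same simplified NPV pricing as the earlier sketch
        r, n = ytm / freq, years * freq
        coupon = face * coupon_rate / freq
        return sum(coupon / (1 + r) ** t for t in range(1, n + 1)) + face / (1 + r) ** n

    expected = bond_price(1000, 0.0475, 0.04792, 30)  # price at the expected yield
    actual = bond_price(1000, 0.0475, 0.04813, 30)    # price after a 2.1 bp tail
    gap = expected - actual
    print(f"${gap:.2f} less per $1000 bond")
    print(f"${gap * 25_000_000 / 1e6:.0f} million less cash raised on $25 billion")

Under these assumptions the gap works out to roughly $3.30 per $1000 bond, or on the order of $80 million in proceeds across the whole auction – from a tail most citizens would dismiss as a rounding error.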

Bond market watchers who follow these auctions analyze these statistics and try to boil them down to a "grade" reflecting how well the Treasury's offering met expectations in the market while meeting its immediate need for cash. The grades assigned to these auctions were consistently low – D.

What is driving the disconnect between the Treasury and bond market watchers? The watchers believe the Treasury is assuming it will succeed at jawboning the Federal Reserve into lowering interest rates. That is why it is rolling over so much maturing debt into such short terms. It assumes rates will be LOWER in just a few weeks, allowing that debt to be rolled over AGAIN at lower rates. The Treasury may believe this but the market clearly does NOT and is concerned the government is destabilizing its cash flows by placing such huge bets on short term interest rates that can change rapidly in EITHER direction. One can extrapolate one layer deeper into fear and theorize the larger market is additionally concerned by this shift to short term debt because it limits the government's ability to raise ADDITIONAL emergency cash should something REALLY bad happen, like, oh… I dunno… Maybe a stock market crash, another banking meltdown or a natural disaster. These fears are completely rational. The federal government is betting everything goes right and nothing goes wrong anywhere in the economy while pissing on allies and prior trade partners and adding trillions in additional debt.

So if the take from professionals on the lending side of the credit market is tending pessimistic, how will that affect consumer borrowers and the larger economy?

First, it's notable that the expected (feared?) square wave jump in tariffs and prices didn't happen in lock step with the initial threat of new tariffs on April 1, 2025. Many of the largest tariff rate jumps were deferred for weeks then months as the Trump Regime promised looming "deals" that might be lower than the originally threatened rates. As a result, second quarter economic results did not reflect skyrocketing tariffs – only drops in tourism and the value in futures contracts for farmers. But note that despite these delays, job growth in America has already tanked in 2Q2025. Since July 1, some tariffs – notably those for Japan, Mexico and Canada – HAVE been officially set and are beginning to affect prices. Now things officially get interesting.

One way of thinking about the feedback cycle that will become magnified over the next few weeks and months is to picture this series of input changes rippling through the economy in a loop (a deliberately simplistic simulation follows the list):

  • reduced demand from tariffs finally kicking in in key consumer sectors
  • tariff costs split between corporations (as reduced margins and profits) and consumers (higher prices)
  • lower profits triggering "earnings surprises" in stocks priced for perfection, driving falling stock prices
  • higher prices triggering further reductions in consumer demand
  • reduced demand triggering additional job reductions, further reducing household income
  • higher unemployment increasing credit defaults on cars and homes
  • any negative surprises in the stock or bond market due to price / rate fluctuations raising the likelihood of derivative-based failures that trigger larger and larger surprises
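Here is that deliberately simplistic simulation. Every coefficient is invented; the only point is to show how a modest price shock compounds once job losses start feeding back into demand:

    demand = 100.0                            # index of consumer demand
    for quarter in range(1, 9):
        demand *= 0.97                        # tariff-driven prices shave demand 3%
        job_loss = (100 - demand) * 0.2       # a fifth of lost demand becomes layoffs
        demand -= job_loss * 0.1              # lost paychecks feed back into demand
        print(f"Q{quarter}: demand index {demand:.1f}")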

With that relatively generic feedback cycle in mind, some examples can be examined in more detail.


EXAMPLE: The Automotive Market

It is very odd to see stories that both Ford and General Motors recorded surprisingly high unit sales and revenue for the second quarter of 2025. That wouldn't seem to indicate consumers are unwilling to buy new vehicles or obtain loans for them. However, financial results for car MAKERS have to be read with a healthy dose of skepticism. The "sales" reported in quarterly earnings of the MAKERS are sales from the MAKER to its DEALERS. They are not unit sales from dealers to customers. While Ford and GM reported high sales, inventories on their dealer lots are running at 90-120 days. That means that at the average sales pace of the prior 30 days, the cars remaining on a dealer's lot would cover another 90-120 days of sales, even if the dealer doesn't accept a single additional car from the maker.

With tariffs now in effect, car MAKERS would like to raise prices to maintain their margins rather than eat the additional cost of the tariffs. If their dealers had NO cars on the lot and consumers were clamoring to buy up any new vehicle sight unseen, makers might be in a position to do that. When dealers have nearly four months of unsold inventory at already high prices, they will not be able to accept even HIGHER priced vehicles and move them without drastically lowering prices on the existing unsold units. That will drive down ALL prices, including used car prices, sticking dealers with massive losses.

At a minimum, this will likely trigger the bankruptcy of hundreds of dealers across the country, spiking unemployment. If auto makers properly recognize the situation and STOP MAKING NEW VEHICLES until demand eats through inventory, that will trigger a spike in unemployment among assembly workers and workers at US parts manufacturers supplying the models no longer selling or being made.


EXAMPLE: The Housing Market

The housing market reflects many unique pathologies of its own making beyond its obvious susceptibility to interest rates. Housing construction has not kept up with the growth in "households" since 2006 and prices in many markets have been driven up by investment firms buying up standalone homes and condominiums as investments. Despite public support for "affordable housing", most communities continue enforcing and enhancing zoning policies that make the most cost-effective modes of housing illegal for fear of driving down existing home values – either due to density or the characteristics of the residents who might buy the units.

As of 2025, the housing sector seems to be destined for a shock. August reports for new home starts showed a 9.6% jump to an annualized rate of 1.36 million units. Unfortunately, at the same time, housing lenders and industry trade groups issued reports forecasting home sale volumes dropping to 30-year lows due to high mortgage rates. Other stories have further elaborated that affordability is so poor and buyers so strapped for cash flow that home sale volumes may drop even if mortgage rates dropped to zero.

Lower prices could solve some of these problems but pose their own dangers to individual owners and the larger market. Trade press and local news stories are reporting "bubble" areas like Miami and Austin are seeing significant price drops of between ten and twenty percent from highs in 2022. For existing owners, this could trigger a financial crunch if they want to change jobs and relocate, or if they lose the job providing the income to make the mortgage payment. For relocations, even if the employer eats the loss, that home sale will still drive down "comps" for nearby homes and apply downward pressure in the local market. Owners needing to sell after losing a job will also drive down "comps" but will also immediately realize a large cash loss they may already be unable to afford due to the loss of a job. In bubble areas, these losses might range from $50,000 to $100,000 for an "average" home. Most households do NOT have the wherewithal to eat a $50,000 loss and still come up with cash to plow into a new home. Such losses will likely result in a downshift into a lower standard of living and significantly less spending. This shrinkage in spending ripples through the larger economy as another round of contraction in demand.

The higher share of homes owned by financial investors further complicates dynamics in the housing market. If one hundred percent of homes were owned by their occupants, a downturn in market prices for one or two years might lead those occupants to stay put rather than eat a drop in equity unless they absolutely HAVE to relocate. When an investor or hedge fund owns hundreds of units of single-family homes or condominiums, many of which might be vacant, there's no stay / move decision to make, only a keep / sell decision. These institutional owners may be more eager to minimize losses by dumping properties before they become too far underwater. This can add downward momentum to prices which can trigger more institutional selling and accelerate the downward spiral.

One final timing issue with the housing market merits consideration. The NIMBY zoning restrictions affecting construction across the country, limits on trade labor for construction, spikes in construction commodity prices during the pandemic and a boom in high tech workers relocating after being promised perpetual work from home arrangements resulted in a particularly insidious set of economic circumstances:

  • Most homes constructed over the last five years have above average prices and are for larger homes
  • Most of these homes were overpriced due to the premiums paid for labor and supplies as the construction business sorted out pandemic problems
  • A significant share of these newer homes in "bubble" markets such as Austin were built for employees who moved from even more expensive cities in California or New York. These buyers likely had significant cash extracted from their prior expensive home to plow into a cheaper but still expensive new home.
  • Given the high prices paid for many of these homes, the ownership is likely skewed towards white-collar professions – technology in particular – that are being impacted by current layoffs

In this line of thinking, it may very well be the case that those most likely to be squeezed by a contraction in housing prices are homeowners who recently relocated and are most likely to lose their job in the coming months, triggering a personal financial crisis that was unthinkable only a year ago.


The net-net of this recent bond market news is that professionals in the bond market are clearly worried about the sanity of Treasury strategies for managing the massive debt of the United States and seem worried this swing to short term financing will leave the federal government with nothing in the tank to counteract any other economic shock that might hit. The negative feedback loops triggered by the disastrous tariff strategies imposed by the Trump Regime ARE beginning to exhibit themselves in the demand side of the economy and seem poised to ripple into the supply and employment components immediately. And the apparatchiks in charge are pursuing strategies which no one in the larger financial markets would recommend.


WTH

Saturday, August 09, 2025

The Great Flattening

The combination of Artificial Intelligence tools, continued layoffs in IT fields and reduced opportunities for recently laid off workers and new college graduates has resulted in a flood of news stories and online commentary that seem to be converging on a common term for the phenomenon -- The Great Flattening. In this simplified narrative, the job market is being squeezed because corporations are concluding there is a significant swath of middle management labor previously providing "analysis" and "tracking" and "forecasting" required by upper management that is no longer needed or can be done via AI-based automation.

That's a concise summary. It fits into one paragraph to match the attention span of modern readers. It even gels with other popular narratives helping to hype AI. Unfortunately, this explanation is missing crucial details that provide a more complete picture of the causes of this trend, which means it does little to help society understand what can be done (if anything) and doesn't help individuals who might be affected, or already have been, to prepare for what comes next.

What's wrong with this simplistic "great flattening" theory? It fails to reflect a unique combination of regulatory failure (in the American and worldwide economies alike) and a perpetual disconnect between the theory of business operations and the actual power politics of large corporations. Those two forces are creating a discontinuity in staffing requirements that is unique in its SIZE but not its NATURE, and it happens to coincide with the advent of new AI technologies that benefit from a lack of regulation while promising additional labor savings in many of the most expensive labor categories within large corporations.


Business School Theory

In efficient economies, efficient companies led by professional managers continuously monitor the entire business environment and examine market demand, willingness to pay, technology that allows trade-offs between labor and automation and the status of competitors. These observations are then used to make alterations to a company's products and the processes used to make them, including choices between investments in labor and training versus technology and automation. An efficiently operated firm that knows demand for its product(s) equates to X when it only has labor to produce 75% of X can choose to a) spend more money on overtime to meet demand with existing labor supply, b) hire more workers, c) adopt new technology that increases output without requiring more labor or d) do nothing and cede market share to competitors.

In theory, a professional manager will choose (a) (overtime) if demand is only thought to be temporarily high or seasonal. In theory, a professional manager seeing a permanent increase in demand would analyze the productivity trade-off between labor and capital equipment then pick a mix that would increase capacity to the full value of X.

Conversely, the same firm with capacity for X units facing demand for only 75% of X also faces choices – a) keep producing X units with the current labor and build unsold inventory, b) only produce 0.75X units with the current labor force to avoid costs for extra materials and unsold inventory, or c) reduce the labor force to the level required to produce only 0.75X units to save money on both labor and materials and avoid excess inventory.

In theory, a professional manager would only choose (a) (no changes) if contractually bound to continue buying supplies, etc. Option (b) would be chosen if the shortfall in demand was viewed as temporary or seasonal and the cost of idling workers or retraining replacements exceeded labor savings. Option (c) would be chosen if the business recognized the shortfall in demand as persistent or long term, in which case any other choice is just delaying an inevitable reckoning of unprofitability.

In short, in both scenarios – market growth and market shrinkage – a professional manager should be continuously monitoring the need for labor and making consistent (monthly? quarterly? yearly?) adjustments to staffing levels to meet output demand. Visually, a company's headcount should theoretically exhibit a stairstep pattern that remains tightly correlated over time to the output of the company's processes. Something like this:
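A quick matplotlib sketch (with entirely made-up numbers) of that theoretical pattern:

    import matplotlib.pyplot as plt
    import numpy as np

    quarters = np.arange(24)
    demand = 100 + 8 * np.sin(quarters / 4) + quarters    # invented demand curve
    headcount = np.round(demand / 5) * 5                  # small, frequent staffing steps

    plt.step(quarters, headcount, where="post", label="headcount (theory)")
    plt.plot(quarters, demand, "--", label="output demand")
    plt.xlabel("quarter"); plt.ylabel("index"); plt.legend()
    plt.title("Business school theory: staffing tracks demand")
    plt.show()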

That's why MBA students take classes in managerial accounting and operations management, right? Perhaps those MBA students need a better class in Human Behavior and Organizational Design. What actually happens in many (most?) corporations is vastly different.


Corporate Reality

This process of continual optimization theorized by business school curriculums (and suggested by common sense) is distorted or short-circuited entirely in large corporations by a consistent set of human behaviors:

  • Middle management is reluctant to share early indications of future bad news with senior executives. Senior execs don't want to hear bad news or excuses, only results, no matter how absurd the goals may be or bad the external environment might be.
  • Senior executives are often reluctant to act upon legitimate bad news after hearing it, either for fear of consequences from a board or a belief they can achieve impossible results by sheer force of their will (see The Steve Jobs Reality Distortion Field).
  • People in organizations are prone to empire building. Headcount is equated with influence and increasing headcount is frequently a requirement for promotion to higher titles and pay so virtually no manager is going to VOLUNTEER to surrender headcount (even after a voluntary exit) as "unneeded".
  • Senior executives often protect their turf at the expense of other organizations. When a company reaches a point where wholesale cuts are required, many executives will argue their department is different and is tied to revenue or "customer experience" and that cuts need to come from elsewhere. Anywhere but my department. I run a tight ship, my department is perfectly sized, everyone else is bloated and inefficient.
  • Seemingly continuous "re-organizations" every time an executive role changes hands shift responsibilities to new leaders who don't understand the roles operating under them. This often produces one of two opposite but equally harmful problems. It can yield a situation where the new leader doesn't recognize a newly inherited function is over-staffed for current needs and thus allows its bloat to go uncorrected. It can also yield a situation where the new leader accepts a new responsibility without some or all of its current headcount, ensuring the new team will be perpetually overworked and further heightening middle managers' reluctance to let go of headcount without a gun to their head.

All of these human behavior traits lead to a consistent, inevitable result. Instead of headcount levels closely synchronizing with production demand as taught in school, headcount levels consistently trail increasing demand and are adjusted even less frequently on the downside of any demand curve. Instead of relatively small incremental adjustments that might actually line up with normal job churn attrition over the course of a year, needed reductions queue up and become mass layoffs, dumping a much larger set of similar workers into the job market at the same time, causing more difficulty in obtaining new employment. Visually, the result looks like this:
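Again as a made-up matplotlib sketch: demand slides for years while headcount stays flat, then the queued-up reductions land all at once:

    import matplotlib.pyplot as plt
    import numpy as np

    quarters = np.arange(24)
    demand = np.concatenate([np.full(8, 120.0), np.linspace(120, 80, 16)])
    headcount = np.full(24, 120.0)
    headcount[20:] = 85.0        # corrections queue up, then land as one mass layoff

    plt.plot(quarters, demand, "--", label="output demand")
    plt.step(quarters, headcount, where="post", label="headcount (practice)")
    plt.xlabel("quarter"); plt.ylabel("index"); plt.legend()
    plt.title("Corporate reality: headcount lags, then falls off a cliff")
    plt.show()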

Regulatory Failure

All of the behaviors described previously take place in any significantly large company with hundreds or thousands of employees. Obviously, looming financial problems are harder to ignore for smaller companies lacking the financial inertia of firms with millions or billions in revenue but the principles at work are identical. A unique factor in the current environment is that many of the most notable corporations tied to large layoffs appear to be among the most profitable firms in the economy. One immediate response to that observation is SEE? That's why these companies are so profitable… They are immediately leveraging new technologies and laying off workers the second they are no longer needed. Isn't that what business school theory says they SHOULD be doing?

Excellent counterpoint.

Theoretically, that counterpoint might be valid in some cases. However, the biggest firms tied to this "flattening" share at least some non-coincidental traits:

  • They develop software for core AI algorithms
  • They develop hardware optimized to execute AI algorithms at vast scales
  • They sell "compute" (processing power, storage and network connectivity) required to develop and operate AI systems at vast scales
  • Their AI development work has violated copyrights and intellectual property rights of literally MILLIONS of individuals around the world.
  • Their EXISTING online platforms SHOULD require vast amounts of human labor to accurately / fairly enforce copyright, intellectual property rights and CSAM (Child Sexual Abuse Material) protections yet NONE of these firms meaningfully handle these responsibilities, saving themselves billions in costs while creating a wild west online environment.
  • Their NEW online AI platforms should ALSO be requiring vast amounts of human labor to properly test these systems for proper guardrails yet NONE of these firms have devoted meaningful resources for properly testing this unproven technology – the entire world is beta testing these technologies in the real world.

Underpinning all of these factors is that many of these existing firms (Google, Microsoft, Meta/Facebook, Amazon, Apple) are gargantuan in every measure of power. Meta is currently the smallest of these firms with a market capitalization of $1.941 trillion. Microsoft is currently the largest of this group by market capitalization at $3.9 trillion. Nvidia is currently THE largest corporation by market capitalization at a staggering $4.5 trillion but has NOT engaged in mass layoffs… at least yet.

It is absolutely the case that there are characteristics of the computer software and hardware industries that provide economies of scale that make operating these types of businesses at these scales extremely profitable. However, there is nothing unique about the computer software and hardware industries that negate lessons learned over centuries about the harm done to society by monopolies. In every generation, in every economy, in every sector, monopolies reduce supply, raise prices, limit choice and stifle innovation. EVERY. TIME.

The unique aspects of these firms and their line of business do not mitigate these damages, they make them far worse. By limiting choice and innovation in the functionality of systems used as mass media for news, marketing and personal communication, firms operating at these scales are creating distinctly high levels of damage to the societies in which they operate. Damage which should have been corrected ten to fifteen years into their existence through proper enforcement of anti-trust laws already on the books.

So why are these tech giants making such large staffing cuts amid record profits? Quite simply, because ongoing operation as monopolies with little meaningful correcting influence by the government has trained them to believe they can continue developing more software and hardware used for ever more critical purposes with less quality control and continue to enjoy all of the upside. All of the downside stemming from poor, un-innovative design and non-existent quality control becomes an externality applied to customers, who pay extra to patch buggy systems or buy even more software automation to monitor and correct security problems produced by these products. And all of the traffic that used to arrive on web sites looking for content or product information simply disappears, gradually or all at once, as AI summaries make click-through to original content pointless. How can a small company prove a giant Goliath stole a click that never arrived? In such an insular environment, there's no incentive to retain extra staff beyond what's required to deploy the next buggy release. As Martin Weir of Get Shorty might have asked… What's my motivation (to do anything different)?


Back to the Larger Great Flattening

So if these tech giants already making oodles of money off AI technology are just obeying their monopolistic instincts, what is motivating companies in the larger economy? Again, the motivations affecting these big tech firms are only unique in their MAGNITUDE, not their KIND. All of the human behavior patterns discussed previously take place in every company. All of those behaviors create lag effects that build up over decades and, like water, become invisible to the fish swimming in the corporate environment. So why do these job reductions seem to be concentrated on the vaunted "knowledge workers" of only a decade ago?

Multiple reasons...

First, by definition "knowledge worker" tasks such as software coding, software testing, software requirements writing, budget analysis, etc. all involve a great deal of reading, writing and summarization, and 100% of it is done electronically in Word documents, emails and spreadsheets. This is the perfect work product for slurping into an AI training cycle and entering into an AI prompt for new content since the source "knowledge" and "ask" are all in text format suitable for processing with Large Language Model based tools. In contrast, it could be forever before AI systems come after the jobs of carpenters, roofers, plumbers, barbers, dentists and surgeons. It will prove impractical to devise a robotic system that can perform a task that is intensive in both physicality and skill. Fast food jobs are different because they are highly specialized and simplified to require virtually zero training or physical strength / skill.

Second, as the innovations of the first Internet technology wave of 1996 to 2004 took root in corporations, most recognized the need to redesign corporate systems for commerce, internal production and order fulfillment which triggered a stream of internal software development projects to design customer portal websites, internal customer service agent tools, etc. Applications which may have only started with the need to serve up simple pages to hundreds or thousands of users per day rapidly grew to where the system needed to support thousands of users per minute. These development efforts required more formal design and planning to keep them remotely near their budgets and delivery dates. This not only resulted in hiring more developers and testers but people with new-fangled responsibilities manufactured as part of new software development processes that promised to solve all of the productivity and quality problems that exploded with this crush of work. Few of those new methodologies worked as claimed yet the new functions became ingrained in headcount charts across nearly every corporation.

When the second "big data" technology wave arrived between 2004 and 2016, all of these existing poorly implemented systems developed in the first wave often underwent "enhancements" to tie them together with real time feeds or leverage massive data repositories to automate customer service inquiries and trouble diagnostics. This work required even more rounds of development with all of these new software development lifecycle (SDLC) processes requiring even MORE of these workers who weren't actually tied to core development or testing of code. An application whose original iteration in 2000 might have taken 15-20 people to code, test and deploy over 9 months might now be assigned 75-100 people over 15-18 months to refactor for supporting iPhone, Android and web views. Curiously, the count of actual CODERS and TESTERS might still only be 20-25 people.

In short, software development processes in most corporations are incredibly bloated with headcount that never touches a line of code during development or testing and has no involvement with deploying, running and monitoring the system in production. Within development circles, the term "10x developer" is commonly thrown around when talking about "productivity." Productivity for developers is inherently difficult to quantify consistently and fairly – is "more source code" for a given problem "more" productive than less source code? NO. Is faster calendar delivery of code better than slower calendar delivery? NOT NECESSARILY. In general, this "10x developer" concept describes a pattern seen in nearly every large development organization: in a team of twenty developers, a small subset – maybe only four – seems to do 80% of all of the development of code that reaches production. This is seldom quantified or proven but based on decades of experience, the SENSE of it definitely rings true for actual coding work. When applied to the larger collection of headcount AROUND core development but not DOING core development, it is ABSOLUTELY the case. This means there is likely a large number of people who describe their role as related to software development yet never write a single line of SQL, C#, Java or Python while counting themselves as developers when laid off. They are not.

Lastly, this "overhang" of dead weight in these indirect job functions related to software projects has gone uncorrected over the last twenty years precisely because of the empire building and turf-protecting patterns described previously. Many of these roles were not added to the managers operating actual development teams but to separate "project management" teams operating in parallel within the development organization. That organizational chart design choice immediately destroyed any incentive to eliminate job functions that had no demonstrable effect on productivity or quality and instead created a new turf to be defended and even expanded to justify the existence of new chains of management. Ask any person employed in an IT organization that has adopted "Agile" development in the last decade and they will confirm this phenomenon.

As a final factor acting to cement this inefficiency into organizations, the rapid evolution of software architectures has virtually guaranteed any mid-level technical managers have zero intuition for appropriate technical designs and required development time for current applications. Architectures have evolved from mainframe apps in the 1960s and 1970s to isolated desktop apps in the 1980s to client/server applications in the early 1990s to server-based web portals from roughly 1995 to 2010 to browser / smartphone centric apps from 2015 to the present. Any mid-level or senior-level IT leader has experience that is likely two generations out of date, making it very difficult for them to argue against the continued use of these inefficient practices that contribute to so much headcount bloat. They may recognize it intuitively but they are unable to articulate the problem effectively to convince everyone else to abandon a failing methodology.

The net-net of all of the above is that IT roles in particular seem uniquely primed for cutbacks even without Artificial Intelligence solutions promising productivity improvements. Artificial Intelligence merely provides an easy EXTERNAL excuse to make these giant corrections without existing management having to state these inefficiencies were present all along and should have been identified and eliminated over the last ten to fifteen years.

For functions outside IT like financial analysis, the rationale might be similar but not as exaggerated. Large corporations typically create special budget analysis teams that operate in parallel with large departments like IT, Customer Service, Manufacturing, etc and use data collected from payroll and ERP systems to track departmental spending against budgets on a weekly basis. At my former employer, this continuous true-up process was a joke and a nightmare because data quality in the source systems was sketchy and the analysis and true-ups were performed in Excel (in a department spending $100 million per year in expense and $50+ million in capital). At some level, the entire exercise was even more pointless because at any point in the year, approved capital dollars might be reduced or eliminated for a project with zero notice, requiring panicked rebalancing and reprioritization of work. If reports confirmed a project was going to materially overspend but the project was tied to a pet executive goal, funds WOULD be found, making the time spent truing up the data by mid-level managers worthless. In these types of environments, it's possible this brain damage could be done equally well by AI based tools that could skim the source data and produce the same summary Excel spreadsheets and PowerPoint bullet summaries. The analysis was already consistently flawed, why not let AI do it and let five budgeteers per department go?


Implications for the Future Job Market

Any analysis of future impacts from this wave of layoffs must address four different pools of workers, which will be termed here 1) core IT workers, 2) near-IT workers, 3) analysis / reporting workers and 4) entry level workers.

For core IT workers directly writing code, testing code or performing work that directly affects functionality being built, there will likely be continued pressure on eliminating roles in this area until the longer life-cycle impacts of involving AI systems in development become known. Experts in development have already commented that AI excels at developing SMALL bits of STANDALONE code and can outperform most developers at that function. This same analysis has confirmed that AI is exceedingly poor at developing small code changes to EXISTING complicated systems or designing and creating code for LARGE systems integrated with multiple other systems. That's no guarantee that an executive hearing an external vendor or consultant whisper in their ear about how they can rewrite all of your systems with AI for a mere $4 million won't take that bet and try it. However, it seems highly unlikely AI will be able to replace competent core developers doing real integration work in a typical corporate "enterprise" setting in the next five to ten years.

For "near-IT" workers, those working in proximity to actual development work and trying to translate internal development mumbo jump like stories, sprints, epics, roadmaps, backlog, burndown, retrospectives, etc. into upper management speak, the attempt to adopt AI for that reporting could likely be the straw that breaks the back of the camel known as agile and leads management to abandon most of this noise. If this is what you do for a living, it is likely the number of openings for this work will decline rapidly, meriting a pivot to other work.

For analysis / reporting workers, the impact of AI is likely to be similar to but smaller in magnitude than the impact forecast for "near-IT" workers. Simply put, most of the reporting and detail tracked by near-IT workers made sense to few inside IT and made ZERO sense to anyone outside IT, including clients who relied on IT to build their systems. Financial and operational reporting is different. It can have material impacts on processes that affect publicly reported numbers on quarterly statements and – let's face it – MONEY means much more to any executive than technical gobbledygook about "development sprints" on an "agile project" building a portal for Customer Service. As a result, the internal consumers of this non-IT reporting are FAR more conservative about changing ANYTHING in the process and will be very slow to trust any change. That being said, anyone remaining in this line of work should broaden their skills: learn how AI can provide a front-end to the existing ETL (Extract / Transform / Load) tools that may be compiling the data being summarized and audited today, and master those tools.

For entry-level workers, frankly the outlook is easier to predict. The types of work outlined above often provided opportunities for entry level workers or served as promotional slots whose turnover freed up openings for new entry-level workers. Any management trend that eliminates five or ten percent of jobs in any company, even if the elimination targets mid-career functions, inevitably harms entry level opportunities by reducing the number of slots for people to move up into, which in turn vacates fewer lower-level jobs.

It might be possible to suggest (hope?) that one possible outcome of this introduction of AI to push out a share of mid-career workers is that people in many other mid-career roles with aversions to technology may choose to leave those roles as well as they are forced to interact with AI-adjusted processes. If this were to happen, it MIGHT open up remaining positions to people more comfortable with AI, providing an opportunity. Maybe… But those few openings would not only require familiarity with and confidence using AI for assembling that reporting but would still require significant understanding of the underlying business processes being tracked at both a financial and needs-of-the-business level. Domain expertise as it is termed within industries.

For existing employees at all levels, one final general piece of advice: develop an understanding of the degree to which the management above you tends to exhibit a steady hand on the proverbial wheel or instead follows the latest management fads or advice from expensive consultants who never seem to leave the executive floor. If you feel your management tends to follow fads, especially in disciplines they already do not seem interested in understanding, pay very close attention to the introduction of any tools or systems tagged as "AI", especially when they directly impact your job role. Trust your instincts if you suspect you have become a target for elimination and plan accordingly.

The most useful advice for new graduates and those in college looking at an employment landscape being re-sculpted by AI like a combination tsunami, earthquake and hurricane might be this. Ensure you learn the basic fundamentals of AI from a terminology and functionality perspective. This does not involve reading and memorizing the whitepaper Attention is All You Need and understanding the matrix algebra therein. Ensure you understand how AI systems encode data, train on it, then use it for processing and the shortcomings of those processes. (HINT: Read up on the variability of training data in different languages and the selection of sources that provided the petabytes of data needed to improve system metrics.) If you are still in school, to the extent possible, fit courses providing basic insight into business operations, accounting or marketing into your remaining coursework to gain familiarity with how "business" people think and the terminology they use to describe their work. Over the course of a career, being able to converse in business terms with different teams across a company will be more valuable than knowing how to generate a bitchin' graph of revenue across products over the last two years. If you're a recent graduate still looking for a first full-time job, work to develop a conversational familiarity with the basics of AI and as best you can, ask implicitly or explicitly during any interviews how the prospective employer is approaching AI as it relates to the role you're considering.


WTH

Friday, August 01, 2025

Three Card Monte in the Tax Code

The federal budget deal enacted in July 2025 is generating ripple effects in virtually every state government not only due to cuts in specific federal programs that funnel dollars to states but due to underlying changes in tax rules that flow from federal tax forms down to tax forms at the state level. It is safe to say that most Americans cannot understand how FEDERAL tax changes flow down to STATE tax revenues because most Americans do not actually calculate their own taxes to develop a feel for how a change on their federal tax form ripples through to their state filing. This ignorance makes citizens prone to being misled a second time by state politicians seeking to use unexpected changes in STATE finances to trigger calls for MORE spending reductions at the state level.

As an example, consider a debate underway in Colorado.

Planning officials in Colorado are estimating the state will suddenly face a revenue shortfall of $1 billion due to the recent federal budget deal. With no change at the STATE level in tax rates, tax rules or state spending, how does a change at the FEDERAL level suddenly create a $1 billion shortfall?

There are five sequentially cascading factors that drive this impact:

  1. The federal government CAN borrow money.
  2. Most state governments CANNOT borrow money. Most states require their legislature to enact balanced budgets EACH YEAR with zero borrowing.
  3. Most state tax codes BEGIN with Federal Adjusted Gross Income as their top line when calculating taxable income at the STATE level. This keeps state-level tax codes simpler and state tax forms easier to understand. (A minimal sketch of this flow-down follows the list.)
  4. Any change in FEDERAL taxation policy that materially alters federal AGI immediately ripples downward to affect state-level taxable income in nearly every state.
  5. Any change in federal tax rules that alters the TIMING of deductions has a magnified impact on federal AGI and thus state-level taxable income.
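
To make factors 3 and 4 concrete, here is a minimal sketch of the flow-down. All of the numbers, function names and the flat rate are hypothetical illustrations, not any state's actual form:

    # Minimal sketch of factors 3 and 4: the state return starts from
    # federal AGI, so a purely FEDERAL deduction change shrinks STATE
    # revenue with no change in state law. All numbers are hypothetical.

    def federal_agi(gross_income, federal_adjustments):
        # Federal AGI = gross income minus federal "above the line" adjustments
        return gross_income - federal_adjustments

    def state_tax(fed_agi, state_additions=0, state_subtractions=0, flat_rate=0.044):
        # Most states begin with federal AGI, then apply their own tweaks
        return (fed_agi + state_additions - state_subtractions) * flat_rate

    before = state_tax(federal_agi(100_000, 10_000))  # old federal rules
    after = state_tax(federal_agi(100_000, 15_000))   # new FEDERAL deduction added
    print(before - after)  # state revenue lost per filer: 220.0 -- with ZERO state action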

One of the changes included in the budget deal enacted in July 2025 accelerated the timing of deductions for research and development expenses for corporations. The stated purpose of this change is to encourage research and development performed within the United States by allowing companies to write off 100% of the R&D expense incurred each year in that year's taxes. Prior to this new bill, the rule in effect since 2022 required research and development expenses to be deducted over a period of five years.

But here's the first twist...

Did that tax rule CHANGE in 2022 as a result of legislation PASSED in 2022 under the prior Biden Administration? No. The change to a five-year write-off was actually included in the Tax Cuts and Jobs Act of 2017 passed during the first Trump Administration which set 2022 as the year for the five-year scheme to take effect. This change was actually proposed BY the same lawmakers who pushed for the larger tax cuts included in that 2017 bill to help offset the loss of revenue from those larger tax reductions.

Here's the second twist...

The new budget deal enacted in July 2025 not only reverted the deductibility of domestic research and development spending back to 100% in the year of the expenditure, it included a provision for firms to "catch up" their prior R&D deductions for expenses in years 2022 through 2024. This further magnifies the timing shock imposed by this change in tax policy. Imagine a corporation spending exactly $1 million in each of 2022, 2023, 2024 and 2025 under the pre-2025 tax rule. Each $1 million of R&D expense would have been spread out over five years from the year it was incurred. The total of $4 million in deductions would have been spread out over nine years as shown below:

2022 -- $200k deduction for 2022(i) -- $200k total deductions
2023 -- $200k each for 2022(ii) and 2023(i) -- $400k total deductions
2024 -- $200k each for 2022(iii), 2023(ii) and 2024(i) -- $600k total deductions
2025 -- $200k each for 2022(iv), 2023(iii), 2024(ii) and 2025(i) -- $800k total deductions
2026 -- $200k each for 2022(v), 2023(iv), 2024(iii) and 2025(ii) -- $800k total deductions
2027 -- $200k each for 2023(v), 2024(iv) and 2025(iii) -- $600k total deductions
2028 -- $200k each for 2024(v) and 2025(iv) -- $400k total deductions
2029 -- $200k deduction for 2025(v) -- $200k total deductions
2030 -- $0 deduction

Under the restored "same year 100% deductibility" rule WITH the catch-up provision for the years 2022 through 2024, here are the deductions for that same corporation, with 2022 through 2024 as already filed and the catch-up landing in 2025:

2022 -- $200k deduction for 2022(i) -- $200k total deductions
2023 -- $200k each for 2022(ii) and 2023(i) -- $400k total deductions
2024 -- $200k each for 2022(iii), 2023(ii) and 2024(i) -- $600k total deductions
2025 -- $200k each for 2022(iv), 2022(v), 2023(iii), 2023(iv), 2023(v), 2024(ii), 2024(iii), 2024(iv) and 2024(v), plus $1 million for 2025 -- $2.8 million total deductions
2026 -- $0 deduction

Note the total amount of deduction claimed under both schemes is the same -- $4 million -- but the TIMING results in significantly less taxable income in 2025 under the second scheme versus the first. In 2025, the corporation will have $2 million less in taxable income than under the prior plan.

That isn't just a change to taxable income at the federal level. It flows through to the state tax return as well, and now the state has a unique problem. If the state hosts dozens of similar companies within its borders, all incurring similar $1 million expenditures for R&D, that state suddenly has a short-term cash flow problem. With less than one year's notice, taxable income for each of these corporations will drop by $2 million. In Colorado's case, with a flat corporate tax rate of 4.4 percent, the state's tax revenue suddenly drops by $88,000 per corporation in this hypothetical example.
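
For those who prefer to check the arithmetic in code, here is a minimal sketch of the two schedules above. The figures are the same hypothetical $1 million per year used in the tables, and the simplified straight-line five-year spread is an assumption of this example, not the exact amortization convention in the tax code:

    # Sketch of the two deduction schedules above: $1M of R&D spend each
    # year 2022-2025, straight-line five-year amortization versus the
    # restored same-year deduction with the 2022-2024 catch-up.

    SPEND_YEARS = [2022, 2023, 2024, 2025]
    ANNUAL_RND = 1_000_000

    def old_rule(year):
        # Five-year spread: each spend year contributes $200k for five years
        return sum(ANNUAL_RND / 5
                   for spend in SPEND_YEARS
                   if spend <= year < spend + 5)

    def new_rule(year):
        # 100% same-year deduction, plus a 2025 catch-up of everything
        # not yet deducted for 2022-2024 under the old five-year spread
        if year < 2025:
            return old_rule(year)  # 2022-2024 returns were filed under the old rule
        if year == 2025:
            already = sum(old_rule(y) for y in range(2022, 2025))
            return len(SPEND_YEARS) * ANNUAL_RND - already  # $4M - $1.2M = $2.8M
        return 0

    delta_2025 = new_rule(2025) - old_rule(2025)  # $2.8M - $800k = $2M
    print(delta_2025 * 0.044)                     # Colorado's 4.4% rate => 88000.0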

What about the real world? How big can this research and development deduction get for a typical corporation? Technically, there's no limit on the amount, only restrictions on the types of expenses that qualify. For technology and software firms developing new products, the R&D deduction applies not only to tools and process changes invented to improve products but to the salaries of employees designing those systems. For high-tech companies, the R&D deduction can thus reach tens of millions -- even BILLIONS -- of dollars. A corporation designing a new application with a development team of fifty employees making $150,000 per year could deduct 50 x $150,000, or $7.5 million, from taxable income.

Note that, at some level, the federal government which instituted this timing change is unaffected by the resulting change in taxable income and tax revenue. Holding spending unchanged year to year, the drop in revenue from 2025 taxable income can be covered by borrowing money until the following year. But a state government cannot borrow money across fiscal years. If it knows its 2025 budget is now unbalanced, it must balance its 2025 budget NOW.

Is this single timing rule change for R&D the sole factor creating a $1 billion shortfall in the Colorado state budget? Clearly not. But there are other changes in the 2025 budget bill that were likely discussed only by the special interests that placed them in the bill, and their impacts are just now being discovered across all fifty states. For states which mirror the federal tax code in their own taxation rules, when TIMING changes at the federal level create FISCAL budget problems at the state level, the only choices available are:

  1. Using a designated "rainy-day" fund to smooth the shortfall into the next budget year
  2. Attempting to pass legislation altering the state tax code, straying from mirroring the federal tax code, adding complexity to state tax collection processes and annoying voters
  3. Cutting spending to bring it into alignment with lowered revenue forecasts

Some states with constitutional mandates to run balanced budgets also have laws which impose mandatory, across-the-board spending cuts when such revenue shortfalls occur unless corrected by new spending or tax legislation. Since spending and tax legislation is EXTREMELY difficult to enact in modern, bitterly partisan times, draconian spending cuts become easier for politicians to justify than having to explain actual votes and priorities on new tax or spending legislation. This plays into the hands of "conservatives" whose real goal is simply to cut government spending until the government is powerless to do anything, including policing the streets or responding to natural emergencies.

The more fundamental point here is that this 2025 budget bill is not only triggering these fiscal fire drills at the state level but that the underlying net impact of these changes is a wash at best. A firm spending $1 million on R&D is already a large business that presumably will be around for the next 5-10 years if its "innovations" are producing anything of real value in the first place. Having that firm wait five years to capture the tax benefit of that $1 million in spending should be immaterial to its bottom line, especially if it is spending $1 million every year on R&D. By year five, prior years' deductions add up to $1 million anyway, essentially giving the firm the full benefit on a year-by-year basis. Accelerating the deductibility is a one-time "goose" to earnings that will do nothing to boost a firm's long-term financial performance. It is exactly the type of financial manipulation one would expect in a bill passed under Republican control.
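
That steady-state claim is easy to verify with a quick sketch, again assuming the simplified straight-line spread used in the earlier example:

    # With $1M spent EVERY year, the five-year spread converges to a full
    # $1M annual deduction by year five (hypothetical straight-line spread).
    def steady_state_deduction(years_of_spending, annual_spend=1_000_000):
        # Each of the last five spend years contributes annual_spend / 5
        return min(years_of_spending, 5) * annual_spend / 5

    for n in range(1, 8):
        print(n, steady_state_deduction(n))  # 200k, 400k, ..., 1M, 1M, 1M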


WTH

Friday, July 11, 2025

BOOK REVIEW: Empire of AI

Empire of AI -- Karen Hao – 421 pages (482 with notes and index)

Virtually anyone on the planet with access to a computer or smart phone has experienced profound changes in the nature and quality of "content" presented to them stemming from the use of so-called Artificial Intelligence (AI). People working in fields related to "content" production (musicians, graphical artists, videographers, writers, software developers) have already experienced large reductions in hiring opportunities in these career fields as large companies attempt to adopt AI-based processes that can lessen their need for humans in these roles. Others looking at larger issues of intellectual property rights, resource allocation and prioritization, environmental impacts and economic power have been attempting to raise concerns with politicians and regulators to seemingly little avail.

Karen Hao's book Empire of AI was written to address the relationships between these different areas of concern and, in doing so, identify an underlying pattern of government, business and social abuse that provides a more meaningful perspective from which to combat all of the problems posed by Artificial Intelligence. The book is subtitled Dreams and Nightmares in Sam Altman's OpenAI but the saga of OpenAI is not the sole focus of the book nor, as the book makes clear, should it have been. The drama within OpenAI is merely a specific example of a larger trend seen in virtually every large, "successful" technology startup over the past twenty years.


The Book Review First

The mathematics and computer science theory behind AI are obviously extremely complicated and generate an unlimited number of potential tangents into business management, interpersonal communication and typical business posturing and deceit. However, the consequences for society stemming from the rush to be first in monetizing AI capabilities are far broader and involve economics, labor rights, natural resources, politics and -- bluntly -- existential risks, depending on how blindly humans allow the technology to be put in control of critical infrastructure and decision making related to national security.

I have a background in engineering, large scale software systems, executive level business management and more than a passing interest in economics, politics and public policy. Reading Empire of AI made it clear how carefully the author chose to thread the key themes of the book together and how precisely the points were chosen for switching between topics to highlight inter-dependencies without getting stuck in the weeds or failing to cement a point with the proper level of detail. Enough of the outline of the problem is provided in the opening pages of the book that the content never leaves the reader frustrated that X hasn't been covered yet when you know it needs to be. As the first few pivots in the narrative occur from X to Y to Z, the exposition quickly makes it clear why the narrative needed to swing from X to Y to Z at those points so the reader stops second-guessing the author and the larger narrative just unfolds naturally.

The exact themes covered by the book will be addressed below but the bottom line review of this book is this...

Empire of AI is easily one of the most consequential books published in the past few years. It wasn't written to cash in on the latest trendy technology fad or business success story. It was written by an author who has been reporting in the field for ten years and has consistently looked beyond the business and financial hype to understand all of the underlying effects of the technology, both in its development and application going forward. For professionals working in these fields or individual citizens of any country, Empire of AI provides a thorough understanding of AI and the business models adopted by those developing it. The book also makes it clear how AI is simply the latest technological iteration of an established pattern of colonialism and empire building that facilitates the further concentration of power and wealth to the few at the expense of everyone else.

Buy or borrow this book and read it cover to cover. That's as brief a review as I can muster.


Key Themes in Understanding Artificial Intelligence

Throughout the content of Empire of AI, these key themes are emphasized:

  • AI is intrinsically tied to deception and inscrutability.
  • Multiple philosophies for creating AI exist but only the most resource-intensive approach is being pursued.
  • Concerns over implementation strategies have converged into two distinct camps, referenced throughout the field as Boomers (those who believe the best way to mitigate risks from AI is to achieve AGI as soon as possible so it can assist with mitigating its own risks) and Doomers (those who fear releasing ever-more-powerful AI systems without understanding their core functionality well enough to implement and verify safety controls poses an existential risk to humanity).
  • AI development fixated on large language models has systematically violated the intellectual property rights of prior content creators.
  • AI development has required massive human labor to score AI determinations to improve learning and has consequently exploited tens of thousands of workers in failing economies to work for poverty wages and be exposed to millions of images and videos of vile, abusive, sexual content.
  • AI development has required massive build-outs of data centers DWARFING prior concepts of "data centers", whose power and cooling requirements drain potable water supplies in already drought-stricken areas and impose strains on local power grids whose costs are often being shifted to the public.
  • In whole, the race to develop AI technology is pure colonialism and empire building -- the TAKING of property controlled by others, the PROTECTION and SUBSIDIZATION of private commercial enterprises by governments, the EXTERNALIZATION of all negative consequences by parties enjoying the benefits and the CONCENTRATION of both power and wealth into existing elites.

Note that none of these themes is specific to OpenAI. They apply equally to any firm attempting to develop or apply AI technologies. The content within the book unique to OpenAI reflects its own set of recurring themes:

  • OpenAI's corporate structure emphasizing its focus on ensuring "open" research into AI over profit-making opportunities was fundamentally a bait and switch, used to attract not only investors but also engineering talent under false pretenses.
  • OpenAI's management ranks reflect a pattern seen in every other large tech business reaching multi-billion dollar valuations -- engineers and leaders selected for their ability to deliver technology in cutting-edge fields OFTEN present personality traits ill-suited to honest, effective communication and management of personnel and OFTEN reflect traits leading to constant personal conflicts and a blinkered perspective on the impacts of the technologies being developed.
  • In particular, Sam Altman has demonstrated these flaws throughout his career and, as such, is possibly one of the worst people one could select to manage a firm aiming to create a technology approaching "artificial general intelligence."

Again, for the full picture, buy or borrow this book, then read it cover to cover. Knowing that many will not, many of these themes are worth an attempt at summarizing, if only to provide a cocktail-party level of understanding of the topics, if not to encourage a full read of the book.


AI Is Intrinsically Tied to Deceit and Inscrutability

Research into developing algorithms that could be implemented on computers to provide human-like interpretation and reasoning capabilities began in earnest in 1956 under the rather dry, academic-sounding term automata studies. One of the original researchers, John McCarthy, realized the term lacked a certain pizazz and landed on another: artificial intelligence. As Hao summarizes,

The name artificial intelligence was thus a marketing tool from the very beginning, the promise of what the technology could bring embedded within it. Intelligence sounds inherently good and desirable, sophisticated and impressive; something that society would certainly want more of; something that should deliver universal benefit. The name change did the trick. The two words immediately garnered more interest – not just from funders but also scientists, eager to be part of a budding field with such colossal ambitions.

Cade Metz, a longtime chronicler of AI, calls this rebranding the original sin of the field: So much of the hype and peril that now surround the technology flow from McCarthy's fateful decision to hitch it to this alluring yet elusive concept of "intelligence." The term lends itself to casual anthropomorphizing and breathless exaggerations about the technology's capabilities.

As Hao explains, use of the term "intelligence" with these technologies poses two related problems. First, without any scientific definition of what "intelligence" actually is, it is impossible to devise an objective measure to know when it has been implemented. Decades ago, the "Turing Test" was viewed as a crucial milestone – the ability of a human to interact with an "entity" and not be able to discern whether that entity was a real human or a computer. Technology evolved by the 1980s and 1990s to eclipse that threshold and suddenly the goalpost was moved to expect abilities to process and respond to visual data. Clock rates and available memory improved enough for those capabilities to be achieved in the 2000s. Now the goals have expanded into synthesizing entire songs or writing novels, etc.

The point here is that the inability to DEFINE what intelligence IS allows those in the field to conveniently shift expectations when doing so suits their financial or legal advantage. At the same time, the undefined nature of the very word intelligence means that ANY conversation between ANYONE involved in the field is inherently incapable of reflecting a meeting of the minds because the core word behind the discussion has no concrete meaning scientifically, legally or ethically.


Symbolism Versus Connectionism

The capabilities and underlying infrastructure commonly associated with AI today reflect a crucial conceptual choice made decades ago between two alternative approaches for modeling information for use in "AI"-like functionality. Put simply, there are two approaches that can be taken to design systems that abstract information into machine-processable forms and then create algorithms to act upon those abstractions to do something useful. One approach involves optimizing the model used to represent the discrete units of knowledge to be housed in the system – this focus is referenced as symbolism. The other approach involves keeping object models simpler but focusing on the relationships between them – this focus is referenced as connectionism.

The key difference between a symbolism approach and a connectionism approach is that approaches focused on symbolism generally require more HUMAN thought and inventiveness. The value of the symbolism stems from combining enough attributes about a thing to make it easy to find linkages to other objects while keeping the total data required to reflect that model relatively small. It isn't helpful to model a complex concept with a single 8-bit letter, but it isn't useful to burn 30,000 bytes reflecting each instance of that object either.

In contrast, the value of the connectionism approach is that, for some types of data models, it can boil down to pure mathematical reduction. A greater number of "connections" / associations / linkages can be modeled by assigning larger mathematical matrices to the object and performing more matrix calculations on the content of those matrices. Humans know enough about how to program computers to perform matrix operations that the SIZE of those matrices isn't a human problem. Just throw hardware and memory at the work. The problem with this brute-force connectionism approach is that the resulting data produced by matrix operations on TRILLIONS of data points is completely indecipherable to any human, including those who devised the algorithms.
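
A toy sketch may make the contrast concrete. Everything here is a hypothetical illustration: a hand-built symbolic record on one side, and on the other a pile of learned vectors where "associations" are just the output of matrix multiplication:

    import numpy as np

    # Symbolism: a human designs a compact, readable model of the concept "dog".
    dog_symbol = {"kind": "mammal", "legs": 4, "domesticated": True}

    # Connectionism: each concept is just a learned vector; associations
    # emerge from matrix math over MANY such vectors. With trillions of
    # weights, no human can read meaning out of the numbers themselves.
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(5, 8))    # 5 toy concepts, 8 dimensions each
    similarity = embeddings @ embeddings.T  # every pairwise association at once
    print(similarity.shape)                 # (5, 5) -- scale this to millions of
                                            # concepts and the only fix is hardware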

Until the 1980s, these two approaches were essentially neck and neck in popularity within the field, partly because both approaches were equally crippled by the limitations of memory and processing speed at the time. Memory limitations impaired the complexity of models that could be created for objects while the lack of networking protocols prevented smaller computers from being linked together to share tasks, impairing the ability to advance "neural network" algorithms. As computing performance began accelerating in the 1980s, those conducting AI research could foresee where Moore's Law would take computing power. At that point, research became almost entirely focused on connectionism based algorithms.

This shift to connectionism had PROFOUND impacts on the AI field and has PROFOUND impacts on society today. First, the matrix nature of the connectionism approach means that improving model quality generates EXPONENTIAL growth in the computing resources needed to create an AI system. It also requires EXPONENTIAL growth in the volume of data required to compute all of those trillions of connection probabilities. It also consumes EXPONENTIAL amounts of compute to USE the results for end-user requests after the system is developed and trained. However, for certain types of problems that provide financial rewards to solve, the connectionism approach DOES work... up to a point. This has further limited the number of people and financial resources involved with research in the symbolism realm. Yet not all problems that might benefit from a generic artificial intelligence are best served by connectionism-centric models.

This is worth restating in a couple of different ways.

The connectionism approach reflected in the Large Language Model systems prevalent today is suited for using TEXT data and SYNTHESIZING new output that follows expected styles or conventions, or for quickly SUMMARIZING a given text input into an alternate form that's "close enough." A connectionism-based system is ALWAYS going to generate output based upon PROBABILITIES but can NEVER, EVER guarantee a correct "answer" to any specific input without significant "wrapper code" adding guardrails for well-defined criteria. The connectionism approach is NOT optimal for spotting correlations between a continuously changing set of inputs and boiling them down into a smaller set of information according to rigidly established formulae. That type of problem is best solved using "machine learning" paradigms which are vastly different from connectionism models and -- for what they do -- are more compact and processing-efficient. These technologies have been productized and sold commercially for over a decade and do not require hundreds or thousands of servers to generate results even for huge applications.
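
A toy sketch of that point, with entirely made-up probabilities: the model side of the house can only ever produce a distribution to sample from, and any "guarantee" has to come from wrapper code checking the sample against hard criteria:

    import random

    # Hypothetical next-token distribution for the prompt "what is 2+2?".
    # A connectionist model yields probabilities, never a guaranteed answer.
    next_token_probs = {"4": 0.90, "5": 0.06, "four": 0.04}

    def sample(probs):
        # Draw one output according to the model's probabilities
        return random.choices(list(probs), weights=list(probs.values()))[0]

    def guarded_answer(probs, validate, retries=10):
        # Wrapper "guardrail": re-sample until the output passes a hard check
        for _ in range(retries):
            out = sample(probs)
            if validate(out):
                return out
        return None  # even the guardrail can only refuse, not guarantee

    print(guarded_answer(next_token_probs, validate=lambda s: s == "4"))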

Another way of conveying the same point? Millions of users now use ChatGPT or similar large language model AI systems as their search engine. Even Google is now including AI "search results" in its regular web search output in an effort to compete. This is equivalent to booking a flight on the Concorde SST every time you decide you need to take a "jet" from point A to point B. That's no exaggeration. Hao references data in a research paper written by Sasha Luccioni, a climate lead at a competing AI firm named Hugging Face. Luccioni estimated every single AI-generated image likely consumed the amount of energy required to charge a cell phone to 25 percent; one thousand AI-generated images might equate to 242 full cell phone charges. If people begin using AI to generate video content, the energy consumed to synthetically create hi-res video for a 5-10 minute clip becomes staggering. But even for traditional search usage, LLM-based AI models are GROSSLY less efficient than traditional web searches built on twenty-five-year-old indexing technologies. And MILLIONS are now using ChatGPT and other engines in exactly that fashion.
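
A quick back-of-envelope using the figure quoted above: taking 242 full charges per thousand images at face value, and assuming roughly 12 watt-hours per full smartphone charge (my assumption of a typical battery, not a figure from the book), the energy per image works out as follows:

    # 1,000 AI-generated images ~= 242 full phone charges (Luccioni, as
    # quoted above). The ~12 Wh per full charge is an assumed typical
    # smartphone battery capacity, not a figure from the book.
    WH_PER_CHARGE = 12
    wh_per_image = 242 / 1_000 * WH_PER_CHARGE        # ~2.9 Wh per image
    kwh_per_million_images = wh_per_image * 1_000_000 / 1_000
    print(wh_per_image, kwh_per_million_images)       # ~2.9 Wh, ~2904 kWh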


The Demand for Training Content: Theft and Crap

As stated previously, a crucial consequence of the industry's complete fixation on connectionism based technologies is that connectionism-based solutions are EXPONENTIAL in their use of resources. Resources are not limited to the computing power used during training or the computing power to operate the resulting "trained" model for production requests from users. The exponential resource demand also involves the training data itself. This has profound legal and societal implications.

As Large Language Model approaches first began development in the mid 2010s, the data sets fed to them were typically on the order of tens or hundreds of megabytes. That amount of data helped validate different theories for underlying data structures and computations required to improve "connection" probabilities but engineers quickly found that LINEARLY increasing the number of "tokens" considered in calculating probabilities – essentially the "depth" of the system's memory when generating a response – required EXPONENTIAL increases in the data used in training. Exponential increases in compute and memory could be solved by simply throwing money at the problem and renting more compute across various data centers. But TEXT DATA is not as easy to obtain at the drop of a hat. At least, it's difficult to obtain if you are going to ASK for permission. And it's not easy to obtain in identical quality at 10x or 100x or 1000x of current volumes. So OpenAI (and other firms) simply didn't ask for permission. They used web crawlers and simply pulled in more content from public web sites as data needs grew. As data needs grew past the limit of what was available on formally curated, edited, secured content, OpenAI and other firms simply lowered their standards for what would be accepted and slurped in more data from lower quality tiers of content. Lower quality in terms of its veracity and, in MANY cases, the content involved, including racist rhetoric and sexually abusive text and imagery.

Even if these legal and ethical problems are completely discounted, the larger problems with models requiring exponential increases in training content should be obvious to the average mathematician, much less engineers working on the cutting edge of this technology. Many of the secondary and tertiary sites sucked into the training maw of OpenAI and competitors were web portals offering peer-to-peer help for solving problems across a multitude of disciplines -- software engineering, electrical engineering, Linux system administration, data center operations, etc. Anyone who has USED these sites can immediately spot huge problems with this strategy. First, every thread STARTS with a question posed by someone who BY DEFINITION doesn't know what they're doing. They may not frame their question correctly, they may mis-state their initial conditions, etc. Second, these sites typically implement some form of reputation scoring which requires users without prior history to have their answers "up-voted" by those with reputation points before those answers are promoted as viable. Yet scraping techniques likely were unable to distinguish between ACCEPTED content and presumably lower-quality content submitted by untrusted users. ALL of the thread content was fed into the training process.

This assumption that MORE "data" will always improve model quality is logically insane. If you were given a cup filled to the brim with water but told the cup had one drop of dioxin in it, would you drink the water? What if someone offered to transfer the cup to a gallon jug and fill it with MORE water, but also told you the extra water contained two more drops of dioxin? Would you drink from the gallon? Repeat the process for a 5-gallon jerry can... for a 55-gallon barrel... for a tanker truck... for a tanker ship. Are you feeling any better about the safety of that drinking water? Of course not. Any process that claims to clean BAD material by adding MORE material will never work, ESPECIALLY if the new material is of LOWER quality than the starting material. This is the current state of AI systems in use.

One far more subtle point that Hao makes in the book is that this exponential demand for electronic text is acting as an explicit filter on the types of thinking that are getting "learned" by AI systems. How? The need to scan PETABYTES of text data essentially requires that data to be online. No one has the money to scan books written in thirty languages then perform optical character recognition on those images then feed THAT text into training sources. So what IS getting fed into AI training? The easy answer is whatever is on the internet. In 2025, the Internet Society Foundation estimates that 55% of all Internet content is in English. What is stunning is the next highest language in use is Spanish but with only 5% of content. The shares get smaller from there. Ideas and idioms that might have unique expressions in other languages not popular on the Internet are NOT making it into training data sets and thus do not influence AI output. Spending BILLIONS to ingest content restricted to a handful of languages is thus leaving behind millennia of accumulated insight that might only be present in the color and idioms of languages not important enough to have gained a foothold on the Internet.


Exploitation and Abuse of Human Labor

To counteract the impact of "lowering the bar" on training data quality, firms developing AI systems devised processes by which training inputs and system outputs could be reviewed by humans to provide "scores" regarding the presence of sexual imagery, sexual abuse, extreme violence, etc. The scores were fed back into training so probabilities could be adjusted to detect and avoid generating inappropriate output. This is NOT pleasant work for anyone to do. But AI firms took advantage of worldwide internet access and economic / political strife in second- and third-world countries and farmed this work out to people making the equivalent of ten dollars per day. Ten dollars per day to look at content that might be the WORST possible content you could imagine encountering because, remember, bigger models need PETABYTES of data and the only place to get that much data is on fringe content sites.

This approach of farming out nasty work via "Mechanical Turk"-style systems to people in dire economic circumstances is where the book begins cementing together the author's larger themes about empire building. Firms like OpenAI explicitly sought out third-party firms that had already created "piece work" content categorization systems that could employ thousands of people anywhere on the globe to do the unpleasant work. But in a world with legitimate economic opportunities, no one would voluntarily do this work. Instead, this work was predominantly done in countries like Chile, Uruguay and Venezuela where political and economic upheavals suddenly yielded tens of thousands of English-speaking, educated workers with home computers and internet connections who suddenly had no other job opportunities and were prevented from leaving the country to find opportunities elsewhere. Hao references prior writings about this exploitative form of "disaster capitalism", particularly Naomi Klein's The Shock Doctrine, while tying this strategy into the larger theme of empire. These references are very apropos.


The Asperger's Generation of Corporate America

One thing becomes clear in Hao's coverage of specific events related to OpenAI as an operating business and the behavior of its senior leadership. We are now twenty years into what might be termed the Third Wave of Computer and IP technology. The first wave for "personal computing" could be loosely defined as the period from 1976 to 1993 as technology evolved to make individual computers financially viable for both home and business users. The second wave for "networking" lasted from 1994 with the advent of AOL then always-on home broadband connections to the Internet to 2004 when reasonably fast computers with reasonably fast network connectivity became the norm. The third wave for "social media" and "cloud hyperscaling" began in 2005 with sites such as YouTube and Facebook that converted users and their metadata into the product then advanced with technologies aimed at mining that metadata into enormous databases for further analysis and manipulation.

One thing common to each of these generations of technical evolution is that they all resulted in small disrupting firms suddenly becoming leviathans in the economy in very short periods of time. Apple, then IBM, then Microsoft in the first wave. Cisco, then Yahoo, then Google in the second wave. Facebook/Meta, then Amazon, then OpenAI in the third wave. In each of these waves, at least one of those disrupting leviathans was founded and led by highly intelligent, maniacally competitive people with obvious talent for their field but who were also not, as one might say, "hooked up right." The terminology wasn't in common use at the beginning of this simplified history but it is certainly widely used now. Some of these leaders demonstrate traits that place them significantly off center in the spectrum of "average" thinking and interpersonal communication skills.

One topic the author touches on when covering events at OpenAI is a "movement" that gained popularity in the 2000s and gained a name in 2011 – effective altruism, or EA. Briefly stated, EA is a term used to describe a set of priorities individuals can use to "optimize" the ultimate "net good" of their life, both for themselves and for larger societal interests. When an individual becomes aware of a societal problem, they face a choice. Do they IMMEDIATELY, PERSONALLY engage in work to correct that problem? Do they continue doing something more financially rewarding in the immediate future and make charitable contributions to someone else who can IMMEDIATELY work the problem? Or do they continue their own career, working as hard as possible to climb the corporate ladder, make as much money as possible over that career, THEN give some vast sum of money away in the distant future to someone working that problem?

Extremely smart people skilled in areas of mathematics and science love to think of problems in terms of equations, rates of change (derivatives) and optimization. Extremely smart people skilled in these areas who are also on the Asperger's spectrum are even more prone to such thinking. Their skills make them HIGHLY valuable in the modern economy, so they are often very well paid, and they think of problems in systematic, global ways. Adopting EA as an organizing principle of life can be a crutch some people use to avoid legitimate conflict and remain on a path that is, in fact, doing great global harm. The EA thinker may conclude: my efforts ARE causing some problem that might be marginally bad, but by continuing this work, my lifetime net worth will be 50X instead of 5X, and I can use the 50X to do good later and keep my total lifetime "worth" above the current 5X, thus optimizing the world.

That's what EA adherents like to believe. What they fail to understand is that their often blinkered understanding of the entire picture means they are not accurately assessing the potential harm of their short-term actions. Despite their mathematical bent, many EA adherents fail to comprehend the cumulative, compounding damage flowing from the short-term harm they may recognize but discount. They may also be over-estimating their future accumulated wealth, having little understanding of business cycles, wholesale fraud or the duplicity of those around them that may very well leave them with nothing to show for their efforts.

Of course, all of this assumes the EA adherent being discussed is honest about their beliefs and motivations. Hasn't it been refreshing to hear these mop-haired, precocious 25-year-old billionaires appearing on panels in front of two thousand people at a trade show, talking about the need to "give it all back" to the community? What if discussions of EA are simply part of the calculus these startup billionaires use to disarm critiques of their business practices and of the potential for abuse of their creations and resulting wealth?

Moral / Ethical Tunnel Vision

The career path details shared by the author about many of the key players in AI reflect a not-so-obvious but crucial similarity to the paths of many tech titans of the first three bubble waves of modern technology – the PC era, the Internet era and the current social media era. Many AI business and science leaders had the intellectual chops to gain acceptance into the world's top institutions – Harvard, Stanford, MIT, etc. But like many of their counterparts in earlier tech bubbles, many didn't COMPLETE a college degree. Some abandoned college after only a year or two of course work. They saw enough overlap between their coursework and business opportunities opening up in the real world, saw zero expectations or requirements for holding a degree before entering those businesses and promptly quit to pursue some combination of money, influence and power.

If this were an infrequent occurrence, it might not be much of a concern. When the pattern holds for a significant number of influential people in these critical fields, it becomes a great concern. Perhaps they would have just taken one or two more years of courses in their core engineering or science program and simply emerged as even smarter but narrowly trained wizards. But that's not what college is supposed to be, even for engineers and scientists. Most degree programs require some coursework in non-degree fields. For engineers, such coursework might involve psychology, economics, business and accounting, or ethics and government policy. A single class in any of these disciplines won't yield Nobel Prize winning expertise but it might break through the self-imposed tunnel vision of someone who started narrowly focused on technology and didn't even complete that degree program.


AI – Creating a Modern Empire

All of the prior themes summarized above are part of Hao's overall thesis about the empire-building nature of AI, and Hao wastes little time in laying that thesis out. On page 16, the entire premise of the book is laid out in crystal-clear language worth quoting here:

Over the years, I've found only one metaphor that encapsulates the nature of what these AI power players are: empires. During the longer era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated to mine, cultivate, and refine those resources for the empires' enrichment. They projected racist, dehumanizing ideas of their own superiority and modernity to justify – and even entice the conquered into accepting – the invasion of sovereignty, the theft, and the subjugation. They justified their quest for power by the need to compete with other empires: In an arms race, all bets are off. All this ultimately served to entrench each empire's power and to drive its expansion and progress. In the simplest terms, empires amassed extraordinary riches across space and time, through imposing a colonial world order, at great expense to everyone else.

The empires of AI are not engaged in the same overt violence and brutality that marked this history. But they, too, seize and extract precious resources to feed their vision of artificial intelligence: the work of artists and writers; the data of countless individuals posting about their experiences and observations online; the land, energy, and water required to house and run massive data centers and supercomputers. So too do the new empires exploit the labor of people globally to clean, tabulate, and prepare that data for spinning into lucrative AI technologies. They project tantalizing ideas of modernity and posture aggressively about the need to defeat other empires to provide cover for, and to fuel, invasions of privacy, theft, and the cataclysmic automation of large swaths of meaningful economic opportunities.

The Coup at OpenAI

Anyone with even cursory familiarity with OpenAI as a company will likely expect Empire of AI to answer the question everyone in the industry was asking regarding events in November of 2023, events that arguably constitute one of the most bizarre power struggles in the history of Corporate America or Corporate Anywhere: WHAT THE HELL IS GOING ON IN THAT COMPANY? Hao addresses the events but does so near the end of the book. By that point, it is very apparent how the "coup" was nearly inevitable given the numerous unresolved personnel management issues within the company.

What exactly happened?

Publicly, OpenAI's board fired CEO Sam Altman on Friday, November 17, 2023. OpenAI's CTO was immediately named interim CEO. A few hours later, OpenAI's chairman Greg Brockman resigned, followed by three key technical leaders within the firm. By Saturday, November 18, reports were appearing stating the board was already negotiating to hire Altman back. By Sunday, November 19, Altman and Brockman were negotiating in person at OpenAI's offices to return. After those discussions failed, the board announced a new interim CEO, and Microsoft instantly announced the hiring of Altman, Brockman and the three technical leaders into a new AI division at Microsoft. On Monday, November 20, an open letter began circulating that eventually gained signatures from 745 of OpenAI's 770 employees threatening resignations if the board didn't resign immediately. At that point, the board caved and, by November 21, Altman was reinstated as CEO and the board was restructured with three new outside board members.

Just a little communication faux pas. Not that important really. In fact, people at OpenAI now try to downplay the entire incident by referring to it as The Blip.

But what REALLY happened?

In summary,

  • Over the two years prior to November 2023, conflicts between teams responsible for Research versus Safety regarding spending and delivery timelines had resulted in increased staff churn and tensions at leadership levels.
  • In early October 2023, distinct members of OpenAI's board were independently approached by two different senior OpenAI leaders with concerns about situations in which CEO Altman provided conflicting answers and direction to different company leaders.
  • In the course of investigating those communication concerns, two different board members who talked to each other realized Altman had conducted private conversations with each of them mis-stating what the other board member had stated to him.
  • Two senior leaders confirmed to the board they would back a decision to oust Altman from the company – one confirming they would agree to be interim CEO, the other confirming they would stay on with OpenAI to continue leading a core team.
  • The board acted and fired Altman but failed to devise a message that appropriately articulated WHY Altman was fired, leaving employees worried about their personal wealth tanking from a potential plummet in OpenAI's value and demanding his reinstatement "or else."
  • Ultimately, none of the senior employees who first notified the board then offered their support for replacing Altman held their ground – they caved and signed the open letter demanding his return.
  • Ultimately, the very board members who themselves had been lied to by Altman about their own conduct refused to stick to their guns, offering to return him to the company and agreeing to vacate their board positions.
  • Ultimately, the 745 employees who signed the open letter demanding Altman's return did so despite their own concerns about prioritization of functionality over safety and their own observations of the tension and strife within the company stemming from Altman's manipulative communication patterns.

A real profile in courage on the part of everyone involved, huh?


The conclusion one reaches from the events surrounding this "coup", and from Empire of AI as a whole, is that this saga is perhaps the epitome of what to expect when the stakes involve billions in personal wealth, business leaders with severely deficient business and personnel management skills, and a workforce that itself lacked the expertise and sophistication to comprehend the ethical nature of the underlying problems. The vast majority of participants in the drama, from top to worker bee, chose the most expedient and lucrative path over the ethical path. If they were just making Milky Way candy bars and the disputes involved changes to the nougat recipe, no one would care. Since they are developing technologies that can further concentrate power and trigger massive economic strife across the entire world, these are not the type of people you want in control.


WTH