A Misallocation of Capital

The $1.1 Trillion Bet That May Never Turn On

Part Two of The Future of AI series. Part One: “Beyond Silicon: Toward Living, Evolving, Self-Healing Computation” (Dissolution Too, Chapter 11, April 2026)


Part One: The Scale of the Bet

In February 2026, the largest single month of startup funding ever recorded produced $189 billion in new commitments — almost entirely directed at artificial intelligence. OpenAI alone raised $110 billion that month, at a $500 billion valuation, despite having never turned a profit and showing no near-term path to one.

This was not an aberration. It was the culmination of a trajectory. Five companies — Amazon, Microsoft, Google, Meta, and Oracle — have collectively committed more than $660 billion in capital expenditure for 2026 alone. Amazon projects $200 billion in capex for the year. The total hyperscaler AI infrastructure spend in 2026 reaches approximately $800 billion; by 2027, projections approach $1.1 trillion. The Stargate Project — the OpenAI/Oracle/SoftBank consortium — has announced $500 billion over four years, with a stated ambition of $5 trillion in data center infrastructure between 2026 and 2030.¹

These numbers require a moment of stillness to absorb. Five trillion dollars is approximately the annual GDP of Japan. It is being committed, largely through debt financing, to build physical infrastructure — concrete, copper, silicon, cooling systems, power substations — over a five-year period, against assumptions about AI revenue growth, architectural scalability, and physical infrastructure availability that are, as this article will demonstrate, questionable at best and demonstrably wrong in several particulars.

The dot-com parallel is inexact but instructive. In 1999 and 2000, approximately $1 trillion in capital was committed to fiber optic infrastructure — the “dark fiber” buildout that would carry the internet traffic everyone knew was coming. The traffic did eventually come. But it arrived a decade late, by which point the companies that had committed the capital were bankrupt, the infrastructure had been acquired at pennies on the dollar in bankruptcy proceedings, and a new generation of companies — Google, Facebook, Netflix — built their fortunes on fiber that had already been written off.

The data centers being built today may be the dark fiber of the 2030s. Not worthless — but dramatically less relevant than their construction cost implies, stranded by physical constraints that were visible before the capital was committed, and potentially obsolete before the debt that financed them matures.


Part Two: Two Articles, One Question

Chapter 11 of Dissolution Too documented the destination: where artificial intelligence is heading, what architectures are being built into it, what it will mean for human agency and civilization. The substrate deployment. The control architecture. The trajectory toward something that has no historical precedent.

This article asks the prior question: given that this technology is genuinely transformative, are we building the right version of it? With the right architecture? Through the right process?

The answer is almost certainly no. And understanding why the wrong version is being built is essential context for understanding what Chapter 11 documented.

If AI development were proceeding through genuine market discovery — capital following price signals, resources flowing toward what actually works, genuinely different architectural approaches competing on their merits — then the future documented in Chapter 11 might be somewhat inevitable, given the technology’s nature. An accelerating intelligence would find its own path regardless of the specific chips it ran on.

But that is not what is happening. What is happening is $1.1 trillion flowing to a single architectural approach — transformer-based large language models running on GPU clusters — at precisely the moment that approach may be nearing its ceiling. Alternative architectures that could solve its fundamental problems receive, in some cases, one-160th of the capital. The specific version of AI being built reflects the incentives of the entities doing the building, not the logic of the technology itself.

The future of AI isn’t being discovered. It’s being imposed. By capital allocation decisions made by entities with $3 trillion market caps to protect. The control architecture, the centralization, the trajectory documented in Chapter 11 — these aren’t the inevitable product of AI development. They’re the product of a specific, identifiable, critiquable set of choices made by specific people for specific reasons.

Understanding the misallocation is understanding why this particular future and not another one.


Part Three: The Architectural Problem

The entire $800 billion bet rests on a single architectural assumption: that transformer-based large language models will continue to scale indefinitely, producing proportionally better results as more compute, more data, and more parameters are added.

This assumption is showing signs of breaking down.

The transformer architecture — introduced in the landmark 2017 paper “Attention Is All You Need,” written by a team of just eight researchers — has a fundamental mathematical property: quadratic computational complexity. The compute required for its attention mechanism scales with the square of the sequence length being processed. This is manageable at current scales. It becomes increasingly burdensome as models are asked to handle longer contexts, more complex reasoning, and more sophisticated tasks. The architecture is not broken. It may be approaching a local maximum.
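To make the quadratic claim concrete, here is a toy cost model (my illustration; the function and the numbers are hypothetical, not drawn from the paper or the article's sources):

```python
# Toy sketch of transformer attention cost scaling (illustrative only).
# The Q @ K^T attention score matrix has shape (seq_len, seq_len), so
# this term alone costs seq_len^2 * d_model multiply-adds per layer.

def attention_score_flops(seq_len: int, d_model: int) -> int:
    """Multiply-adds for computing the attention score matrix."""
    return seq_len * seq_len * d_model

base = attention_score_flops(4_096, 1_024)     # a 4k-token context
doubled = attention_score_flops(8_192, 1_024)  # double the context

# Doubling the context quadruples this cost; a linear-complexity
# architecture (an SSM, say) would merely double it.
print(doubled / base)  # 4.0
```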

DeepSeek’s R1 model, released in early 2026, demonstrated comparable results to leading American models at a fraction of the compute cost — achieved through algorithmic efficiency rather than hardware scaling. It was a significant and underreported signal: the scaling assumptions embedded in hundreds of billions of dollars of capital commitment may be answering the wrong question. More compute may not be the path forward. Smarter algorithms may be.

The honest question that the capital markets are not asking: are we funding the steam engine at precisely the moment the electric motor is being invented?

Murray Rothbard identified the mechanism in 1959, writing about the Sputnik-era demand for massive government science spending:

“The myth has arisen that government research is made necessary by our technological age, because only planned, directed, large-scale ‘team’ research can produce important inventions or develop them properly… This common myth has been completely exploded by the researches of John Jewkes, David Sawers, and Richard Stillerman… Taking sixty-one of the most important inventions of the twentieth century… Jewkes et al. found that more than half of these were the work of individual inventors — working at their own directions, and with very limited resources.”

The “Attention Is All You Need” paper had eight authors. The Mamba/SSM architecture that may make transformers obsolete for many applications came from a small academic team. The breakthrough, when it comes, will not come from the entities with the most capital. It will come from a researcher who could not get internal support for an unconventional idea at one of those entities, and left.


Part Four: What’s Being Built — And What Isn’t

The $800 billion architecture:

The current buildout is physically enormous. Nvidia GPU clusters consuming 100 to 500 megawatts per facility. The Colossus facility in Memphis — the one Elon Musk recently rented to Anthropic after calling it “Misanthropic” — runs on methane turbines because the local grid cannot supply two gigawatts. The most advanced AI training facility on earth is powered by what are essentially jet engines, because utility power at that scale is unavailable. This is not a temporary workaround. It is the permanent condition of an architecture that requires power at a scale the grid was not built to supply.

The fundamental inefficiency is architectural. The von Neumann design — the basis of every GPU and CPU in every data center — separates memory and processing. Data must physically travel between them for every computation. At the scale of modern AI training, this movement of data is itself a massive energy cost. The architecture is not inefficient because of poor engineering. It is inefficient because of a foundational design choice made in 1945 that has never been fundamentally challenged.

The underfunded alternatives:

Behind the marketing jargon that attaches “quantum” to everything trending, there are real potential breakthroughs in alternative AI architectures. Two deserve specific attention because they directly address the power problem at the physics level rather than the engineering level — and because their combination may represent the “revolutionary something” that finally makes neuromorphic computing competitive after fifty years of promising but not delivering.

Tunnel Field-Effect Transistors (TFETs): The conventional transistor switches by thermionic emission — electrons surmount an energy barrier when enough voltage is applied. This imposes a fundamental limit on switching steepness governed by Boltzmann statistics: the subthreshold slope cannot fall below approximately 60 millivolts per decade of current change at room temperature. This is the thermodynamic floor below which conventional transistors cannot go, regardless of how well they are engineered. TFETs switch by a completely different mechanism: quantum tunneling, in which electrons pass through an energy barrier rather than over it. This is not exotic physics — Fowler-Nordheim tunneling through MOS oxide barriers has been a documented phenomenon for decades, and at current sub-5nm process nodes, quantum tunneling is already occurring whether engineers want it to or not. The TFET harnesses it deliberately, achieving subthreshold slopes below 60 millivolts per decade — breaking the thermodynamic floor that constrains all conventional CMOS design.²
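For readers who want the derivation behind the 60-millivolt figure, it follows from Boltzmann statistics. This is a standard device-physics result, sketched here rather than taken from the article's sources:

```latex
% Subthreshold slope of a conventional (thermionic) MOSFET.
% Below threshold, drain current rises exponentially with gate voltage:
I_D \;\propto\; \exp\!\left(\frac{q\,V_G}{n\,k_B T}\right), \qquad n \ge 1
% so the gate swing needed per decade (10x) of current is
SS \;=\; \frac{\mathrm{d}V_G}{\mathrm{d}\log_{10} I_D}
   \;=\; n \,\ln(10)\,\frac{k_B T}{q}
   \;\ge\; 2.303 \times 25.9~\mathrm{mV} \;\approx\; 59.6~\mathrm{mV/decade}
% at T = 300 K. Band-to-band tunneling is not limited by this Boltzmann
% factor, which is how a TFET can achieve sub-60 mV/decade swings.
```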

The consequence is striking: TFETs can achieve switching energies in the attojoule range — roughly a thousand times lower than CMOS-based artificial neurons in neuromorphic designs. They operate optimally in the megahertz range rather than gigahertz — which sounds like a limitation until you recognize that biological neurons operate at kilohertz rates or below and achieve far more sophisticated cognition per watt than any silicon system yet built. The TFET’s inability to switch at extreme frequencies is not a defect. It is a feature, precisely matched to the operating regime of biological neural computation.

Spintronic memory (STT-MRAM): Conventional memory stores bits as electric charge; DRAM in particular must refresh that charge continuously, consuming power even when no computation is occurring. Spintronics uses the quantum property of electron spin instead. Spin-Transfer Torque Magnetic RAM (STT-MRAM) is already shipping commercially — Everspin and others sell it for industrial and automotive applications. It is non-volatile (no refresh power), radiation-hard, and switches at speeds and energy levels that DRAM cannot match for certain operations. The market for spintronics in AI applications is projected to grow from $1.84 billion in 2025 to $7.11 billion by 2030 — significant, but still a fraction of the GPU market it could eventually displace for neuromorphic memory applications.³

Together — TFET processing and STT-MRAM memory — these technologies address the two fundamental physical constraints of the current architecture: the thermodynamic switching floor and the von Neumann data movement cost. Neither is science fiction. Both are in production, at limited scale, receiving dramatically less capital than their potential warrants.

Other underfunded architectures:

Processing-in-Memory (PIM) eliminates the von Neumann bottleneck by performing computation where data lives. The entire market for PIM was $231 million in 2025 — against $800 billion for conventional GPU infrastructure. Photonic computing uses light rather than electrons, achieving sub-nanosecond latency and near-zero thermal losses for matrix operations. State Space Models (SSMs/Mamba) achieve linear rather than quadratic computational complexity. JEPA (Meta/LeCun) predicts in abstract embedding space rather than reconstructing tokens — a fundamentally different philosophy about what intelligence requires.

Each of these receives a fraction of transformer investment. Each addresses a documented limitation of the current architecture. The ratio of capital committed to the incumbent versus the alternatives is not a market signal. It is a distortion.

📖 Read More: The Efficiency Gap. Intel’s Hala Point neuromorphic chip: 1.15 billion neurons, 128 billion synapses, 2,600 watts maximum. A human brain: an estimated 86 billion neurons, 100 trillion synapses, 20 watts. Per watt, biology sustains on the order of 100,000 times more synapses. The commercial neuromorphic market in 2025: approximately $50 million in revenue. The GPU market: approximately $130 billion. The ratio of capital committed to the less efficient architecture: roughly 2,600 to one.
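The box's ratios can be checked from its own figures. A quick calculation (my arithmetic, using synapses sustained per watt as the efficiency metric, which is one defensible choice among several):

```python
# Arithmetic on the figures quoted in the box above (one possible
# efficiency metric; other metrics yield very different ratios).
brain_synapses, brain_watts = 100e12, 20.0   # human brain (estimates)
chip_synapses, chip_watts = 128e9, 2_600.0   # Intel Hala Point

brain_per_watt = brain_synapses / brain_watts  # 5.0e12 synapses/W
chip_per_watt = chip_synapses / chip_watts     # ~4.9e7 synapses/W
# The quotient works out to roughly 1e5 in favor of biology.
print(f"synapses per watt, biology vs silicon: {brain_per_watt / chip_per_watt:,.0f}x")

# Capital ratio: GPU market vs commercial neuromorphic revenue, 2025.
gpu_market, neuromorphic_market = 130e9, 50e6
print(f"capital ratio: {gpu_market / neuromorphic_market:,.0f} to one")  # 2,600 to one
```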


Part Five: The Physical Wall

The architectural problems are theoretical — visible to anyone who examines the scaling assumptions carefully, but easy to defer. The physical infrastructure problems are not theoretical. They are happening now, documented by utility commissions, grid operators, and the data center developers themselves.

The power constraint:

Nearly half of US data centers planned for 2026 have been canceled or delayed — not from lack of capital, but from lack of power. Of 12 gigawatts of capacity planned for 2026, only 5 gigawatts are under construction. High-voltage transformer lead times have stretched from 24-30 months to five years. Grid connection processes require three to seven years; a data center can be built in under three. The interconnection queue holds 2,100 gigawatts of requests — exceeding total US grid capacity. Gartner projects that power shortages will restrict 40% of AI data centers by 2027.⁴

A January 2026 report by Bloom Energy projects that US data center power demand will nearly double between 2025 and 2028, from 80 to 150 gigawatts — the equivalent of adding a country with the energy needs of Spain to the power grid in three years. The grid was not built to absorb Spain in three years.
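The implied pace is worth making explicit. A back-of-envelope calculation (my arithmetic, not Bloom Energy's model):

```python
# Back-of-envelope: what "80 GW to 150 GW over 2025-2028" implies.
start_gw, end_gw, years = 80.0, 150.0, 3

added_gw = end_gw - start_gw                     # 70 GW of new demand
growth = (end_gw / start_gw) ** (1 / years) - 1  # implied compound rate

print(f"new demand: {added_gw:.0f} GW")
# Roughly 23% per year, sustained for three consecutive years.
print(f"implied growth: {growth:.1%} per year")
```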

The water constraint:

In 2024, residents of Annelise Park in Fayetteville, Georgia noticed their water pressure dropping. A county investigation found the cause: QTS Data Centers — owned by Blackstone — had installed two industrial-scale water hookups to the county system without the utility’s knowledge, consuming nearly 30 million gallons of water unbilled for nine to fifteen months. The county billed QTS $147,474 for the equivalent of 44 Olympic swimming pools of water. No fines were levied. The state of Georgia was in moderate to severe drought at the time, with the Governor having declared a state of emergency over wildfires.⁵

This is a single documented case. It is representative of a pattern.

The commodity cascade:

The Strait of Hormuz closure beginning February 28, 2026 disrupted supply chains for helium — Qatar’s Ras Laffan facility produces one third of global supply. Helium is critical for semiconductor fabrication. Chip production faces potential rationing precisely as data center demand peaks. Natural gas prices are up 65%, exploding the operating cost assumptions of facilities that run on methane turbines because the grid cannot supply them. Every data center runs on power. Power runs on natural gas. Natural gas runs through the Strait.


Part Six: The Austrian Economics Analysis

Murray Rothbard identified the foundational principle in 1959:

“This fact of reality, then, must be faced: if there are to be more scientists, or more scientific research, then there must be less people and less resources available for producing all the other goods and services of the economy. The crucial question, then, is: how much? How many people and how much capital are to be funneled into each of the various occupations, including science and technology?”

Replace “scientists” with “data centers” and the sentence is the thesis of this article. Capital directed to GPU clusters is capital not directed to TFET research, to STT-MRAM development, to neuromorphic chip software ecosystems, to the unknown alternative that may render the current architecture irrelevant within the decade. The opportunity cost is real. It appears nowhere in any hyperscaler earnings call.

The Hubble Telescope parallel:

$1.5 billion for Hubble, launched with a flawed mirror, requiring a $700 million repair mission. Within a decade, adaptive optics ground telescopes produced superior imagery at a fraction of the cost. The market, given price signals and time, found better solutions. The lesson is not that Hubble was useless — it produced extraordinary science. The lesson is that heroic capital deployed ahead of superior solutions creates stranded assets, and that the superior solutions often arrive faster than the heroic capital expects.

Applied to the hyperscaler buildout: $800 billion committed to GPU infrastructure against assumptions of continued scaling, in a physical environment where grid connections require seven years, against architectural alternatives that may render the current approach obsolete before the debt matures.

The Victorian Television problem:

A thought experiment often used in economics classes asks: could a 19th-century superpower — say, the British Empire — have launched a Manhattan Project-style program in the 1870s to build television 50 years ahead of its commercial rollout? The physics were available in principle: Maxwell’s equations, Nipkow’s scanning disc, early photoelectric research. The answer is yes, with sufficient capital — but at a cost so disproportionate to any conceivable return, and with such catastrophic opportunity cost to every other social need, that no rational capital allocator would have sanctioned it had the true cost been known.

Rothbard again, with uncomfortable precision:

“The fundamental atomic discoveries had been made by academic scientists working with simple equipment. One of the greatest of these scientists has commented: ‘we could not afford elaborate equipment, so we had to think.’ Furthermore, virtually the entire early work on atomic energy, up to the end of 1940, was financed by private foundations and universities.”

“We could not afford elaborate equipment, so we had to think.” Eight words that describe what the $800 billion buildout is eliminating. Constrained capital forces genuine innovation. Unlimited capital — distorted by a decade of near-zero interest rates, government subsidies, and regulatory capture — funds the predetermined architecture. The TFET researchers, the spintronics teams, the SSM architects are working with limited resources. They are being forced to think. The GPU clusters are being scaled. These are not equivalent processes.

The misallocation mechanism:

Near-zero interest rate capital from 2009 to 2022 inflated the initial AI investment beyond what genuine price signals would have supported. Government contracts and regulatory capture direct capital to incumbents. Benchmarks designed to measure what existing GPU architectures do well create circular validation — the architecture that wins the benchmark is the architecture that gets funded, which is the architecture that the benchmark was designed to measure. Network effects and software ecosystem lock-in prevent rational reallocation even after the architectural limits become visible. The $18 billion Colossus facility cannot be repurposed for neuromorphic workloads. Concrete cannot be unpoured.

📖 Read More: The Misallocation Mechanism in Detail. The Austrian business cycle theory applied to the AI buildout: near-zero-rate capital (2009-2022) enabled cash-flow-negative companies to raise unlimited capital against projected future revenues. The hyperscaler buildout is the most capital-intensive expression of that malinvestment cycle. The Strait closure, recession dynamics, and rising interest rates are the moment when the distortion becomes visible — the Austrian “cluster of errors” made manifest simultaneously. Source: Rothbard, Science, Technology, and Government (Mises Institute, 1959/2004); Mises, Human Action (1949).


Part Seven: Who Benefits From the Misallocation

The $800 billion hyperscaler AI buildout is described, consistently and deliberately, as private investment. This framing is false in almost every particular that matters.

The opportunity cost of the buildout does not appear in any prospectus. It appears in utility bills, water pressure gauges, and the electricity shutoff notices sent to 4 million American households in 2025.

The electricity tax nobody voted for:

Before 2019, residential electricity prices in the United States had been essentially flat at around 13 cents per kilowatt-hour for more than a decade. By the end of 2025, electricity prices had jumped 6.9% year over year — more than double the headline inflation rate of 2.9%. In areas with high concentrations of data centers, wholesale electricity prices rose 267% over five years. Utilities requested more than $29 billion in rate increases in the first half of 2025 — double the amount requested in the first half of 2024. Total outstanding household utility bill debt reached $25 billion by June 2025. Utility shutoffs reached an estimated 4 million in 2025.⁶

Goldman Sachs projects that data centers will account for 40% of electricity demand growth through the end of the decade — growth whose costs, passed through as rate increases, lower disposable income, drag down consumer spending, and slow economic growth. The “private capital” building these data centers is externalizing its infrastructure costs onto the 40 million American households facing rate increases. This is not a market outcome. It is a regulatory structure that socializes the costs of private capital deployment.

The tax incentive architecture:

Most states competing for data center investment offer property tax abatements running 10 to 20 years, sales tax exemptions on equipment, and accelerated depreciation. Virginia — home to the world’s largest data center concentration — has forgone billions in sales tax revenue through its data center exemption. The CHIPS Act provided $52 billion in direct subsidies to semiconductor manufacturers, reducing the input cost of the GPU supply chain that data centers depend on.

Rothbard in 1959:

“A subsidy mulcts taxpayers in order to give a special grant to the favored party. It thereby adds to the ratio of government activity in the economy, distorts productive resources, and multiplies the dangers of government control and repression.”

The CHIPS Act is not a market outcome. It is a government decision about which technology architecture to favor, made by people without the price signal information that would allow them to make that decision rationally.⁷

Who specifically benefits:

Nvidia holds a $3+ trillion market cap built entirely on GPU dominance. Every dollar of hyperscaler capex flowing to GPU clusters rather than neuromorphic or photonic alternatives validates that market position. The circular validation is not conspiracy — it is incentive structure.

Power utilities earn regulated returns on capital investment regardless of whether the data centers they serve produce economic value. The stranded asset risk falls on developers and, through infrastructure cost-shifting, on ratepayers.

Bond markets earn fees on issuance and interest on debt. Morgan Stanley reports that securitized US data center debt issuance reached $25.4 billion in 2025 alone — a 1,854% increase since 2022. The bond market does not bear the risk that the architecture is wrong.

Defense contractors — Palantir, Anduril, and others — hold AI contracts built around the current architecture. The defense procurement bureaucracy will fund what it funded last year. The architecture that wins government AI contracts is the architecture that already has government AI contracts.

The emergent vs. designed distinction:

Nobody at Nvidia sat in a room and decided to suppress neuromorphic computing. Nobody conspired to shift infrastructure costs onto residential ratepayers. The Austrian framework does not require intent. It requires only that incentive structures exist and that rational actors respond to them. The aggregate result — 160 times more capital flowing to the less efficient architecture, infrastructure costs socialized onto ratepayers, 30 million gallons of water consumed during a drought without billing — emerges from the structure without requiring anyone to have planned it.

The Sámi didn’t need the Birkarlar to conspire against them. The Birkarlar simply showed up where the Sámi had to be. The data center developers simply built where the grid could support them, consumed what wasn’t being metered, and externalized what the regulatory structure allowed them to externalize. The structure produces the extraction. The intent is irrelevant.


Part Eight: The Transistor Lesson

In 1947, Shockley, Bardeen, and Brattain invented the transistor at Bell Labs. RCA, which dominated vacuum tube manufacturing and had enormous capital invested in production, supply chains, and manufacturing expertise, saw little commercial potential. The vacuum tube industry had every incentive not to take it seriously. Texas Instruments — a relatively small company with nothing to lose — commercialized it. Sony licensed early and built portable radios when everyone said the market didn’t exist. Within twenty years the vacuum tube industry was gone.

The constraint was not capital. It was independence from the capital that needed the vacuum tube to remain correct.

The modern equivalent: the breakthrough that renders GPU-based transformer scaling obsolete will not come from Nvidia, Google, Microsoft, or Amazon. It will come from a team that cannot get funding from any of them because their work would make those investments obsolete.

Ilya Sutskever left OpenAI to start Safe Superintelligence with a tiny team. Yann LeCun spent years at Bell Labs on convolutional networks before anyone cared. The “Attention Is All You Need” paper had eight authors. The pattern is consistent: the paradigm shift arrives from outside the entity with the most capital, because the entity with the most capital has the strongest incentive to ensure the current paradigm continues.

Nvidia has a $3 trillion market cap to protect. That is also a $3 trillion reason not to fund the architecture that makes GPUs obsolete.

The researchers to watch are not the ones winning hyperscaler grants. They are the ones who cannot get hyperscaler grants — the European academic labs receiving EuroHPC funding without US hyperscaler conditions attached, the Tsinghua neuromorphic researchers not optimizing for Nvidia compatibility, the team in some underfunded university spinout working on a different question entirely.

📖 Read More: Where Breakthroughs Actually Come From. Jewkes, Sawers & Stillerman, “The Sources of Invention” (1958/1969): surveyed 61 major 20th-century inventions; more than half were the work of individual inventors with limited resources. Rothbard cites this research as the empirical demolition of the “only large-scale team research produces breakthroughs” myth. The AI equivalent: the transformer architecture (8 authors), the GAN (Goodfellow, in a single evening), backpropagation (Hinton, with minimal funding for years). The pattern holds.


Part Nine: The Unknown Unknown

The visible alternatives — TFET-based neuromorphic, STT-MRAM, photonic computing, in-memory processing, SSMs — represent what is currently legible within the existing research framework. They address known limitations of the transformer through known alternative approaches.

The actual breakthrough may be none of these. History suggests the paradigm shift is rarely an improvement on the existing approach. It is usually a different question entirely.

The steam engine was not improved into the electric motor. The vacuum tube was not improved into the transistor. The mainframe was not improved into the microprocessor. In each case, the breakthrough arrived by asking a different question, not by scaling the answer to the previous one.

The $1.1 trillion bet assumes the current question — how do we scale transformers? — is the right question. The breakthrough will probably start by asking a different one. What that question is cannot be known. What can be known is that the entities with the most capital have the strongest incentive not to find out.

This demands genuine epistemic humility: not “here is what is coming” but “here is why we cannot know what is coming — and why that uncertainty is precisely what the current capital allocation is eliminating.” The neuromorphic researchers watching GPU money consume all available talent and infrastructure are the opportunity cost of the $1.1 trillion. So is the unknown breakthrough that the capital crowded out.


Part Ten: The Road Not Taken

Five years from now, somewhere in the American interior or the Arizona desert or a reclaimed industrial site in the Midwest, there will be rows of gleaming server racks, never powered on. Cooling systems installed but idle. Fiber optics dark. Billions of dollars in concrete, copper, and silicon sitting in facilities that were built before the grid connection was secured, before the transformer lead times were understood, before the helium supply chain disruption was factored in, before the architectural ceiling became undeniable.

The debt that financed those racks will still be accruing interest. The GPUs inside will be three generations obsolete. The software ecosystem that was supposed to generate the revenue that was supposed to service the debt will have moved to an architecture nobody funded.

This is not a prediction. It is arithmetic. The physical constraints — grid connections requiring seven years, transformer lead times of five years, 2,100 gigawatts queued for interconnection — are documented facts, not forecasts. The architectural constraints — quadratic scaling complexity, DeepSeek’s demonstration that algorithmic efficiency can substitute for hardware scale, the documented convergence of TFET and STT-MRAM toward neuromorphic competitive parity — are visible to anyone who examines the literature. The financial constraints — $25.4 billion in data center debt paper issued in 2025 alone, a 1,854% increase since 2022, against GPU economic lifespans of three to five years — are in the bond market filings.
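One of those financial figures can be cross-checked: the reported growth rate implies a very small 2022 base (my arithmetic from the two reported numbers):

```python
# What a "1,854% increase since 2022" implies about the 2022 base,
# given $25.4 billion of data center debt paper issued in 2025.
issuance_2025 = 25.4e9   # reported 2025 issuance, USD
pct_increase = 18.54     # 1,854% expressed as a fraction

implied_2022 = issuance_2025 / (1 + pct_increase)
# Works out to roughly $1.3 billion: a near twenty-fold expansion
# of this debt market in three years.
print(f"implied 2022 issuance: ${implied_2022 / 1e9:.2f} billion")
```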

The hyperscaler bet assumes three things simultaneously: that transformer architecture scales indefinitely, that the power grid expands fast enough, and that no architectural paradigm shift occurs within the five-plus year build timeline. The first is questionable. The second is demonstrably constrained. The third is historically unlikely.

Some of what is being built will produce value. Much may be stranded. But the deeper point — the one that connects this article to Chapter 11 — is that the version of AI being built is not the version that would have emerged from undistorted discovery.

A different capital allocation, following genuine price signals, might have produced a different AI. More efficient. More distributed. Less amenable to the surveillance and control architecture that Chapter 11 documents. The TFET researchers, working on hardware that operates at brain-compatible frequencies with brain-approaching efficiency, might have been funded. The STT-MRAM teams, building non-volatile neuromorphic memory that doesn’t require constant refresh power, might have reached commercial scale. The combination — processing that works like a neuron, memory that works like a synapse — might have produced something that genuinely resembles intelligence rather than something that genuinely resembles a very large autocomplete engine consuming the power of a small nation.

We will never know. The $1.1 trillion is already committed.

Rothbard wrote in 1959, in a monograph that went unpublished for forty-five years:

“There is one and only one alternative to voluntary directions under a free price system: and that is government dictation. And this dictation is not only bad because it violates the tradition of individual freedom and free enterprise… it is also bad because it is inevitably inefficient and self-destructive. For while government intervention can and does hamper the economic system in its job of satisfying consumer demand, it cannot force the economy to follow its own demands efficiently.”

The hyperscaler AI buildout is not technically government dictation. But it is capital allocation driven by entities whose returns depend on government contracts, regulatory capture, chip export controls, and power grid access granted by political decision — combined with a decade of monetary policy that suppressed the price signals that would have revealed the true cost of the bet before it was made.

The price signal that would have directed capital toward TFET-based neuromorphic research was suppressed. The price signal that would have said “the grid cannot support this” arrived too late, after the concrete was poured. The price signal that would have revealed the architectural ceiling is arriving now, through DeepSeek’s efficiency demonstration and the scaling law debates, while $1.1 trillion in annual commitments continues to flow.

Genuine market discovery, given undistorted price signals and time, would have found more efficient paths to genuine intelligence.

It was given neither.

The neuromorphic researchers, the photonic computing teams, the TFET and spintronics labs — working on genuinely more efficient approaches with roughly one-160th of the capital — are the opportunity cost that never appears in the official accounting. So is whatever the Unknown Unknown turns out to be.

The market would have found it.

Instead, we built the Victorian Television.


Notes

¹ Scale of investment: Hyperscaler capex commitments from company earnings calls and guidance, Q4 2025 and Q1 2026. Stargate Project: OpenAI/SoftBank/Oracle announcement, January 2026. $5 trillion projection: SoftBank/OpenAI joint statement. February 2026 funding record: PitchBook AI funding tracker. OpenAI $110B raise: Wall Street Journal, February 2026.

² TFET physics: The 60 mV/decade thermodynamic limit ((kT/q)·ln 10 ≈ 60 mV/decade at 300 K) is standard semiconductor physics. Room-temperature quantum tunneling documentation: Fowler-Nordheim (MOS), Zener (PN junctions), resonant tunneling diodes — all documented in IEEE literature. TFET neuromorphic efficiency: “An ultra energy-efficient hardware platform for neuromorphic computing enabled by 2D-TMD tunnel-FETs,” Nature Communications (2024); spiking neuron energy 0.67 fJ per spike vs CMOS baseline. Subthreshold slopes below 60 mV/dec: Global Insight Services TFET Market Report 2026.

³ STT-MRAM commercial status: Everspin Technologies product documentation; Avalanche Technology shipment reports 2025. AI in Spintronics market projection $1.84B→$7.11B (2025-2030): The Business Research Company, 2026. Neuromorphic Spintronics market $141.2M→$1.01B (2024-2033): Grand View Research, 2025.

⁴ Power grid constraints: Gartner “Top Strategic Technology Trends” 2026 (40% restriction forecast); Bloom Energy Data Center Power Demand Report, January 2026 (80→150 GW projection); interconnection queue data: LBNL Interconnection Data, 2025; transformer lead times: North American Electric Reliability Corporation (NERC) Supply Chain Risk Assessment 2025.

⁵ QTS/Georgia water story: Tom’s Hardware, “Georgia Data Center Used 29 Million Gallons of Water Without a Bill,” May 10, 2026; TechSpot, same date; Daily Mail, May 11, 2026. QTS ownership by Blackstone per all sources. Fine: $147,474 per Fayette County Water System letter, May 15, 2026. Drought/wildfire emergency: Governor Kemp declaration per Tom’s Hardware.

⁶ Electricity cost data: Goldman Sachs research note, February 2026, cited in CNBC; Bloomberg “AI Data Centers Are Sending Power Bills Soaring,” September 2025 (267% wholesale price increase figure); EESI “Data Center Power Demands Contributing to Higher Energy Bills,” February 2026 ($25B utility debt, 4M shutoffs, $29B rate increase requests); Consumer Reports “AI Data Centers: Big Tech’s Impact on Electric Bills,” March 2026.

⁷ Tax incentives: Virginia data center sales tax exemption documented in Virginia Mercury; general structure per National Conference of State Legislatures data center incentive survey. CHIPS Act: $52.7 billion in direct subsidies per Congressional Budget Office. Rothbard quote: Science, Technology, and Government (Mises Institute, 2015 edition), p. 101. Written 1959, first published Mises Institute 2004. Available: cdn.mises.org/Science%20Technology%20and%20Government_0.pdf

⁸ Data center debt: Morgan Stanley data center securitization report, cited in industry press, 2025. 1,854% increase figure: same source, 2022 baseline. GPU economic lifespan 3-5 years vs CPU 5-7 years: CNN Business “The Big Wrinkle in the Multi-Trillion Dollar AI Buildout,” 2025. Annual GPU failure rate ~9% vs CPU ~5%: same source.


Read More: Technical Appendix

The Rothbard Foundation

Murray N. Rothbard, Science, Technology, and Government (written March 1959; first published Mises Institute, 2004; 2015 edition). The monograph went unpublished for 45 years — written on commission in 1959 in response to Sputnik hysteria, it addresses with uncanny precision the arguments being made in 2026 for massive state-directed AI infrastructure spending. Available free: cdn.mises.org/Science%20Technology%20and%20Government_0.pdf

Key sections for this article: Chapter 1 (General Principles — the opportunity cost argument); Chapter 4 (The Alleged Scarcity of Scientific Research — the Jewkes et al. individual inventor findings); Chapter 7 (Atomic Energy — “we could not afford elaborate equipment, so we had to think”); Chapter 9 (What Should Government Do — the tax exemption vs subsidy distinction).

The Architectural Debate

“Attention Is All You Need,” Vaswani et al., NeurIPS 2017 — the foundational transformer paper, 8 authors.

DeepSeek R1 technical report, January 2026 — the efficiency demonstration that challenged scaling law assumptions.

Mamba: “Linear-Time Sequence Modeling with Selective State Spaces,” Gu & Dao, 2023 — the SSM alternative achieving linear complexity.

V-JEPA 2, Meta AI, 2026 — JEPA architecture achieving 65-80% success on robotics tasks with a fundamentally different approach to intelligence.

The Neuromorphic Status

Intel Hala Point: 1.15 billion neurons, 128 billion synapses, 2,600W max. Commercial neuromorphic market 2025: ~$50M revenue (vs $130B GPU market).

BrainChip AKD1000: shipping commercial neuromorphic processor. Innatera Pulsar: deployed in industrial applications. POLYN Technology NASP: first silicon-proven analog neuromorphic processor.

TFET-neuromorphic integration: “An ultra energy-efficient hardware platform for neuromorphic computing enabled by 2D-TMD tunnel-FETs,” Nature Communications 2024. STT-MRAM neuromorphic memory: Everspin Technologies; IBM Research neuromorphic memory publications.

The Physical Constraints

LBNL “Queued Up: Characteristics of Power Plants Seeking Transmission Interconnection,” 2025 — 2,100 GW in queue.

NERC “Long-Term Reliability Assessment” 2025 — transformer lead times, grid aging statistics.

Bloom Energy “Data Center Energy Demand Forecast” January 2026 — 80→150 GW projection.

Further Reading

  • Rothbard, M.N. (1962). Man, Economy, and State. Mises Institute.
  • Rothbard, M.N. (1970). Power and Market. Mises Institute.
  • Hayek, F.A. (1945). “The Use of Knowledge in Society.” American Economic Review.
  • Mises, L. (1949). Human Action. Yale University Press.
  • Jewkes, J., Sawers, D. & Stillerman, R. (1958). The Sources of Invention. Macmillan.
  • Thompson, A. (1996). “An Evolved Circuit, Intrinsic in Silicon, Entwined with Physics.” Proceedings of ICES 1996.

Kyle Davenport writes as the Rothbardian in Dallas. He is the author of Breaking the Spell (2026) and Dissolution Too (2026), available on Amazon. Subscribe at rothbardianindallas.substack.com

This article is Part Two of The Future of AI series. Part One: “Beyond Silicon: Toward Living, Evolving, Self-Healing Computation” (Dissolution Too, Chapter 11). Part Three forthcoming: “The Real Burden of Government: What the Lapps Knew That We’ve Forgotten.”

