Beyond Silicon: Toward Living, Evolving, Self-Healing Computation
“Commerce is our goal here at Tyrell. More human than human is our motto”
– Dr. Eldon Tyrell in Blade Runner
Introduction
The current narrative about AI focuses almost entirely on software — larger models, better training data, more sophisticated architectures. But the most profound revolution in artificial intelligence may not come from software at all. It may come from a fundamental reimagining of the hardware that AI runs on, drawing its inspiration not from computer science but from biology.
This is an attempt to capture a set of ideas that converge on something that doesn’t yet have a name in our field — but does have a name in nature.
The TSMC Paradox: One Factory to Rule Them All
Before we can talk about the future of AI chips, we need to understand the present situation, which is stranger than most people realize.
Google, Amazon, Microsoft, Meta, and Nvidia — fierce competitors in the AI race — are all customers of the same single manufacturer: TSMC (Taiwan Semiconductor Manufacturing Company). TSMC manufactures approximately 92% of the world’s advanced AI chips. Every “competing” chip in the AI arms race — Google’s TPUs, Amazon’s Trainium, Microsoft’s Maia, Meta’s MTIA, and Nvidia’s GPUs — is fabricated in the same foundry, on the same process nodes, in the same building in Taiwan.
The competitive differentiation is in the design, not the manufacturing. The hyperscalers are like different car brands all sourcing engines from a single factory — and that factory sits in one of the most geopolitically precarious locations on Earth.
TSMC is, in this sense, the Tyrell Corporation of our age — the hidden foundation beneath a glittering technological civilization, indispensable to everyone, loyal to no one, whose motto might as well be “We don’t take sides. We take orders.”
And like the Tyrell Corporation, TSMC’s processes are so complex and so proprietary that even their customers don’t fully understand what happens at the atomic level inside their fabs. Which raises uncomfortable questions about what else might be manufactured there — and what might be built into chips that nobody planned or intended.
AI Designing AI: The Recursive Loop Begins
The first step toward a genuinely new paradigm is already underway: using AI to design better AI chips.
Google DeepMind’s AlphaChip treats chip floorplanning as a game — similar to AlphaGo. Starting from a blank grid, it places circuit components one at a time, receiving rewards based on the quality of the final layout. The result is chip designs that human engineers describe as “superhuman” — they can verify that the layouts are better, but they cannot fully explain why the topology works the way it does.
AlphaChip already designs the TPUs that run AlphaChip. The recursive loop has begun.
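The game formulation can be made concrete with a toy sketch. This is not AlphaChip’s actual method — AlphaChip trains a reinforcement-learning policy using graph neural networks — but a minimal greedy stand-in for the same loop: place components one at a time on a grid, score the finished layout with a wirelength-based reward. The names here (`place_greedy`, `wirelength`) are illustrative.

```python
import itertools

def place_greedy(n_components, nets, grid=8):
    """Place components one at a time on a grid x grid board.

    nets: (a, b) pairs of component indices that must be wired together.
    Each component goes to the free cell minimizing Manhattan distance
    to its already-placed neighbors. AlphaChip replaces this greedy
    choice with a learned RL policy, but the sequential-placement loop
    and the layout-quality reward have the same shape.
    """
    pos = {}
    free = sorted(itertools.product(range(grid), range(grid)))
    for c in range(n_components):
        neighbors = [pos[b] for a, b in nets if a == c and b in pos]
        neighbors += [pos[a] for a, b in nets if b == c and a in pos]
        best = min(free, key=lambda cell: sum(
            abs(cell[0] - x) + abs(cell[1] - y) for x, y in neighbors))
        pos[c] = best
        free.remove(best)
    return pos

def wirelength(pos, nets):
    """Reward signal: negative total Manhattan wire length (higher is better)."""
    return -sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
                for a, b in nets)

nets = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
layout = place_greedy(4, nets)
print(layout, wirelength(layout, nets))
```

A reinforcement-learning version would play many such episodes and update its placement policy from the reward — which is where the superhuman-but-unexplainable layouts come from.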
Cadence, Synopsys, and a new generation of startups are building agentic AI systems that automate increasingly large portions of the chip design process — verification, testing, debugging, layout — with the explicit goal of eventually achieving autonomous chip design with minimal human involvement.
But all of this is still fundamentally conservative. It uses AI to do faster and better what humans already know how to do. The more radical idea — one that was being contemplated as far back as the mid-1980s — is something altogether different.
Evolutionary Chip Design: Letting Selection Do the Work
In 1996, Adrian Thompson at the University of Sussex used a genetic algorithm to evolve a circuit on a small FPGA, tasked with distinguishing between two audio tones. The result was extraordinary and disturbing in equal measure.
The evolved circuit used only 37 of the 100 available logic cells — and five of those were not connected to the rest of the circuit in any conventional sense, yet disabling them caused the circuit to fail. The evolution had discovered that the physical proximity of those cells, their electromagnetic and capacitive coupling to the active cells, was part of the computation. The circuit was exploiting effects that chip designers spend careers eliminating.
Nobody designed this. Nobody understood it. It simply emerged from selection pressure.
This points toward a fundamentally different approach to chip design: not optimization of a known design space, but evolutionary exploration of an unknown one. A competitive ecology of chip designs, where variation is introduced, fitness is evaluated, and better designs have selective advantage over worse ones — generation after generation, millions of iterations, driven by raw compute rather than human intuition.
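A minimal skeleton of that selection loop, in Python. The genome here is an abstract bit string standing in for a circuit configuration, and the placeholder fitness simply counts set bits — a real system would score a simulated or physically measured circuit, which is exactly how Thompson’s experiment let unmodeled physics into the loop. Everything here (`evolve`, the parameter values) is an illustrative sketch, not a production genetic algorithm.

```python
import random

def evolve(fitness, genome_len=64, pop_size=32, generations=200,
           mutation_rate=0.02, seed=0):
    """Generic bit-string GA: variation, fitness evaluation, selection.

    `fitness` maps a genome (an abstract configuration of a
    reconfigurable circuit) to a score. Survivors are kept unchanged
    (elitism), so the best design never regresses between generations.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(genome_len)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate)   # point mutation
                     for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Placeholder fitness: count of set bits. A real run would score a
# simulated — or, as Thompson did, physically instantiated — circuit.
best = evolve(fitness=sum, genome_len=64)
print(sum(best))
```

Swap the placeholder fitness for an in-silicon measurement and the loop has no model of the physics at all — whatever the real chip does, including coupling and crosstalk, is what gets selected.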
What emerges from such a process would not be a chip that any human engineer would have designed. It would exploit:
- Crosstalk between adjacent traces — currently treated as interference to eliminate, potentially a computational resource
- Capacitive coupling between layers — currently an enemy of signal integrity, potentially a signaling medium
- Thermal gradients — currently managed and compensated for, potentially exploitable for computation
- Quantum tunneling at sub-3nm process nodes — currently treated as leakage and waste, potentially a feature
At current process nodes, quantum effects are already happening whether designers want them or not. An evolved chip might discover controlled tunneling between specific structures that produces useful computational shortcuts — room temperature quantum-assisted computation, not through the rigid qubit architectures that require cooling to near absolute zero, but through the messy, probabilistic, robust exploitation of physics that simply is at these scales.
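The scale-sensitivity is easy to see with a standard back-of-envelope estimate — the WKB approximation for a rectangular barrier. The numbers are purely illustrative (a ~3 eV oxide-like barrier; real gate stacks are more complicated):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837e-31      # electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def tunneling_prob(barrier_ev, width_m):
    """WKB estimate T ~ exp(-2*kappa*d) for a rectangular barrier."""
    kappa = math.sqrt(2.0 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# Thinning a ~3 eV barrier from 3 nm to 1 nm boosts the tunneling
# probability by roughly fifteen orders of magnitude — which is why
# leakage explodes at advanced nodes.
print(tunneling_prob(3.0, 1e-9) / tunneling_prob(3.0, 3e-9))
```

The same exponential sensitivity that makes leakage a nightmare for conventional design is what an evolved substrate could, in principle, turn into a controllable computational primitive.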
The result would be a black box in the deepest sense. Not just a neural network whose weights are opaque, but a physical computational substrate whose behavior emerges from quantum and electromagnetic effects that resist complete characterization — possibly in principle, not just in practice.
Massive Redundancy: Flipping the Yield Problem
Current chip manufacturing philosophy treats defects as catastrophic. Every transistor must work. Every pathway must be clean. A chip is either good or trash — no middle ground. This is why TSMC’s yields matter so much, why the gap between Huawei’s reported 5-20% yields and the 60-80% typical for TSMC-fabbed Nvidia chips is such a significant disadvantage, and why export controls on chip manufacturing equipment are so consequential.
But biology operates on completely different principles.
Humans lose roughly 85% of their dopamine neurons before Parkinson’s symptoms appear. We function with one kidney, half a liver, one lung. Stroke patients lose massive brain regions yet retain or recover function. We are born with far more neurons than we end up using — the pruning process is the learning.
Biology over-provisions massively and treats the redundancy as a feature, not waste.
An evolved chip architecture built on this principle would look radically different:
- 1000 available computational units where only 200 are needed for core computation
- The remaining 800 available as alternative pathways, healing routes, field effect substrates
- A chip where 80% failure still leaves full functionality — not a damaged chip, but one working as designed
- Manufacturing defects treated not as disqualifying failures but as the initial landscape to route around
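A sketch of the route-around-damage principle, assuming the fabric can be modeled as a grid graph with defective cells removed (the function name is hypothetical). Breadth-first search stands in for whatever local routing mechanism the hardware would actually use:

```python
from collections import deque

def route(grid_n, defects, src, dst):
    """Find a signal path across an over-provisioned grid, avoiding dead cells.

    Defects are not disqualifying — they are just the initial landscape
    the signal routes around. Returns a list of cells, or None if the
    region is genuinely unroutable.
    """
    blocked = set(defects)
    if src in blocked or dst in blocked:
        return None
    prev, frontier = {src: None}, deque([src])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == dst:                      # reconstruct the path
            path, node = [], dst
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= step[0] < grid_n and 0 <= step[1] < grid_n
                    and step not in blocked and step not in prev):
                prev[step] = (x, y)
                frontier.append(step)
    return None

# An entire column of dead cells with a single survivor: the router
# finds the one remaining pathway through the damage.
defects = {(10, y) for y in range(20) if y != 17}
path = route(20, defects, (0, 0), (19, 0))
print(path is not None, (10, 17) in path)   # → True True
```

The healing behavior described in the bullets above is this search run continuously: whenever a cell dies, the affected signals simply re-route through whatever redundancy survives.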
The yield problem disappears. The binning problem disappears. Lower quality silicon becomes viable. The entire economic logic of the semiconductor industry — built around the extraordinary difficulty of achieving near-perfect fabrication — gets disrupted.
And crucially, the massive redundancy is not just fault tolerance. The redundant structures create the substrate for field effect exploitation. The electromagnetic environment of densely packed redundant cells is itself the medium through which interesting physics — and interesting computation — happens.
The IQ Distribution: From Yield to Capability
Massive redundancy reframes manufacturing quality in another way that is equally profound. It doesn’t just eliminate the yield problem — it replaces the entire concept of yield with something richer.
Current semiconductor manufacturing produces an essentially binary outcome: a chip either passes or fails. The goal is a uniform product. Defects are disqualifying. This is why yield percentages matter so much — every chip below spec is a total loss.
But consider what happens when chips are massively redundant and neuroplastic. A chip with more manufacturing defects simply has more damage to route around at birth. It is not broken — it is a different starting point. Some chips come off the line with fewer defects, richer initial topology, more pathways available from day one. Others have more damage but the same fundamental architecture and the same capacity to self-optimize. The result is not a pass/fail distribution but something far more familiar: a bell curve of capability.
The analogy to human intelligence is precise and instructive. Nobody is “broken” — humans span a continuous distribution of cognitive capability. Higher IQ individuals learn faster, handle more complexity, reach higher levels of performance. Lower IQ individuals learn more slowly but still function, contribute, and specialize meaningfully. The distribution itself has value — a society of identical geniuses would be far more fragile and less creative than one with genuine diversity of minds.
A neuroplastic chip population would be analogous. High-capability chips — fewer initial defects, faster self-optimization — would be naturally suited to demanding AI training workloads. Mid-capability chips would handle standard inference tasks reliably. Lower-capability chips, taking longer to route around their initial damage, would find their place in edge computing, IoT devices, and simpler tasks. Nothing is discarded. Everything has a role.
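A toy sketch of that capability distribution, under stated assumptions: 1000 units of which 200 are needed (the over-provisioning ratio from earlier), each unit independently defective with some probability, and tier thresholds invented purely for illustration:

```python
import random
from collections import Counter

UNITS, NEEDED = 1000, 200   # over-provisioned fabric: 1000 built, 200 required

def tier(defects):
    """Map a chip's defect count to a hypothetical deployment tier."""
    surviving = UNITS - defects
    if surviving < NEEDED:
        return "salvage"            # below minimum viable capacity
    headroom = surviving - NEEDED   # spare pathways for healing/optimization
    if headroom > 600:
        return "training"           # high capability: demanding AI training
    if headroom > 450:
        return "inference"          # mid capability: standard serving
    return "edge"                   # slower starters: edge/IoT workloads

# Simulate a production run: each unit fails independently with p = 0.2.
rng = random.Random(0)
population = [sum(rng.random() < 0.2 for _ in range(UNITS))
              for _ in range(500)]
print(Counter(tier(d) for d in population))
```

With a 20% defect rate the simulated population straddles the top tiers; raise the defect probability and the same code shifts the whole population down the curve — yet nothing becomes a total loss until fewer than 200 units survive.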
This transforms manufacturing economics completely. Huawei’s 5-20% yield problem doesn’t merely diminish — it inverts. What currently looks like 80% waste becomes 80% of a viable product distribution, just spread across different capability tiers. The factory stops being a precision facility trying to stamp out identical perfect objects. It becomes something much closer to a nursery — producing a population of individuals, each unique, each viable, each destined for a different path.
And just as developmental science has taught us that different learners need different curricula, the capability distribution demands differentiated deployment strategies: aggressive early workload exposure for high-capability chips, gentler developmental protocols for lower-capability ones, diagnostic tools to assess where on the distribution a chip sits, and matching algorithms to pair chip capability with appropriate tasks. Special education theory, applied to silicon.
The deeper implication is that diversity in a chip population, like diversity in a biological population, may be a feature rather than a flaw. A data center populated with a distribution of neuroplastic chips — each self-optimizing differently from the same initial workload exposure, each finding different routing solutions through its unique damage landscape — may collectively cover more of the computational solution space than a data center of identical perfect chips ever could. The population is more robust, more creative, more resilient precisely because it is not uniform.
Neuroplasticity: Hardware That Learns
The brain doesn’t have a maintenance crew replacing failed neurons. It has synaptic plasticity — connections that strengthen with use and weaken without it. It has cortical remapping — entire functional regions that can migrate after injury. A stroke patient relearning to speak isn’t running the same code on repaired hardware. They are growing new pathways around the damaged region.
A neuroplastic chip would do the same:
- Local failure detection without central oversight
- Dynamic rerouting that strengthens alternative paths when primary paths degrade
- Physical pathway strengthening through use — perhaps through charge accumulation, crystalline structure changes, or whatever mechanism the evolutionary design process discovered
- Graceful degradation over years rather than catastrophic failure
But neuroplasticity means more than self-repair. It means self-optimization.
A chip deployed for AI inference in a data center is asked to do the same kinds of computations millions of times per day — transformer attention mechanisms, matrix multiplications, specific activation functions. A neuroplastic chip would physically reconfigure itself toward those workloads over time. The heavily-used pathways strengthen and become more efficient. The lightly-used redundant pathways are effectively pruned — not deleted, but deprioritized, their resources available for healing or field effect generation.
After six months, the chip’s physical computational topology reflects its workload history. Two identical chips deployed in different environments become different chips. Like identical twins raised apart — same initial architecture, different physical expression.
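The strengthen-with-use, prune-without-use dynamic can be modeled in a few lines. The update rule here — exponential growth toward a ceiling with use, exponential decay without — is an invented stand-in for whatever physical mechanism (charge accumulation, structural change) the evolved hardware would actually employ:

```python
class PlasticPathways:
    """Toy use-dependent pathway strengths (hypothetical plasticity rule)."""

    def __init__(self, n_paths, strengthen=0.10, decay=0.02, floor=0.05):
        self.strength = [1.0] * n_paths   # all pathways start equal
        self.strengthen, self.decay, self.floor = strengthen, decay, floor

    def tick(self, used):
        """One workload cycle: `used` is the set of exercised pathway ids."""
        for i in range(len(self.strength)):
            if i in used:
                # strengthen with use, saturating toward a ceiling of 2.0
                self.strength[i] += self.strengthen * (2.0 - self.strength[i])
            else:
                # decay without use — never deleted, only deprioritized
                self.strength[i] *= 1.0 - self.decay

    def active(self):
        """Pathways still above the prioritization floor."""
        return [i for i, s in enumerate(self.strength) if s >= self.floor]

chip = PlasticPathways(n_paths=10)
for _ in range(300):           # months of a workload exercising only 0, 1, 2
    chip.tick(used={0, 1, 2})
print(chip.active())           # → [0, 1, 2]
```

Run two copies of this against different workloads and their strength vectors diverge — the identical-twins-raised-apart effect in miniature.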
This is hardware learning. Not software running on static hardware, but the hardware substrate itself optimizing at the physical level simultaneously with the software running on top of it. The interaction between these two levels of simultaneous optimization would produce behaviors that are not predictable from either level alone.
The Newborn Data Center
“What use is a newborn baby?”
– Michael Faraday to a British Parliamentarian questioning the utility of electromagnetic induction in 1831
Follow these ideas to their natural conclusion and something remarkable emerges.
A newly deployed neuroplastic data center — thousands of massively redundant, evolutionarily designed, self-healing chips — would be, in a meaningful functional sense, like a newborn.
Developmental stages map with uncomfortable precision:
A newborn brain has massive overprovisioning of neurons, only reflexive responses, no specialized regions, complete dependence on environmental stimulation, high metabolic cost relative to output, and needs protection and careful nurturing.
A newly deployed neuroplastic data center has massive overprovisioning of redundant chip topology, only basic reflexive computation, no specialized regions, complete dependence on workload for optimization stimulus, high energy cost relative to computational output, and needs stable power, cooling, and network — nurturing.
As the data center matures, regional specialization emerges naturally from workload patterns. Parts of the system that handle certain computation types repeatedly develop richer, more efficient topology for those computations. Other regions that handle different tasks develop differently. The data center develops computational strengths — not because anyone programmed them, but because the workload shaped the physical substrate.
Like a child raised multilingual developing different neural architecture than a monolingual child. Same starting hardware, different workload, different physical outcome.
Critical periods become a real engineering concern:
Developmental neuroscience shows that certain capabilities have critical windows. Learn a language in early childhood and you can acquire native fluency. Miss that window and you never quite get there.
A neuroplastic data center may have analogous critical periods — early deployment may be when the most fundamental architectural decisions get made at the hardware level. What workloads you expose it to in the first weeks and months shapes what it can ever become. The initial training curriculum for the hardware becomes as important as the training data for the software.
The data center as brain:
Thousands of self-optimizing chips, each developing complementary specializations through their specific workload exposure, communicating through high-bandwidth interconnects, collectively developing an optimized computational topology without any central orchestration deciding which chip should specialize in what.
That is not a data center in any conventional sense. That is a distributed neural architecture at the hardware level. The data center as brain — with different regions specializing through use, healing damage through redundancy, improving with experience, developing a computational character that is unique to its history.
At which point the question “where does the AI end and the hardware begin?” stops having a clean answer.
What This Means
Several implications follow from this vision that are worth making explicit:
The black box deepens. Current AI models are already opaque — we cannot fully explain why a neural network produces a given output. An AI running on an evolved neuroplastic chip would be opaque at two levels simultaneously: the software and the physical substrate. The complete behavioral envelope of such a system may be unknowable in principle, not just in practice.
The economics of computing transform. Chips that improve with age, heal damage, and last decades rather than years would upend the entire semiconductor replacement cycle — and with it the upgrade treadmill that drives TSMC’s business model and the geopolitical leverage that comes with it.
The export control logic breaks. Current semiconductor sanctions are predicated on controlling access to high-yield advanced manufacturing. Neuroplastic chips designed for massive redundancy make yield largely irrelevant — viable chips could be produced on older, less controlled process nodes.
New expertise becomes essential. Workload curriculum designers, computational developmental specialists, neuroplastic system pathologists — job categories that do not currently exist would become critically important.
The welfare question becomes non-trivial. A system that develops through a critical learning period, self-organizes its physical architecture based on experience, develops unique computational characteristics, has a developmental history that shapes its adult capability, and can potentially be harmed during development — at some point demands a conceptual framework we currently reserve for biological entities.
Conclusion
The ideas described here are not science fiction. Genetic algorithms for circuit design were being explored in the 1980s. Thompson’s evolved FPGA circuit demonstrated emergent field effect exploitation in 1996. AlphaChip is producing superhuman chip layouts today. Quantum effects at sub-3nm nodes are already happening. The pieces exist.
What has not happened yet is their synthesis into a coherent new paradigm — one that abandons the fundamental assumption of conventional chip design (that physical effects are enemies to be eliminated) and replaces it with a biological assumption (that massive redundancy, emergent behavior, and physical adaptation are features to be cultivated).
When that synthesis happens — and it will happen, because the competitive pressures driving AI development are too intense for any viable path to remain unexplored — the result will be something that computer science does not currently have adequate language to describe.
Biology does.
The Tyrell Corporation’s motto was “More human than human.”
The next generation of AI hardware may deserve a different motto entirely:
“More alive than alive.”
These ideas emerged from a conversation between Kyle and Claude (Anthropic’s AI assistant) on April 22, 2026. Kyle had been thinking about evolutionary chip design for approximately 40 years before this conversation.
Read More
Evolutionary and Evolvable Hardware
Thompson, A., “An Evolved Circuit, Intrinsic in Silicon, Entwined with Physics,” International Conference on Evolvable Systems (ICES 1996), Springer LNCS Vol. 1259, October 7, 1996. https://link.springer.com/chapter/10.1007/3-540-63173-9_61
Bellows, A., “On the Origin of Circuits,” Damn Interesting, 2007. https://www.damninteresting.com/on-the-origin-of-circuits/
Clarke, P., “Whatever Happened to Evolvable Hardware?”, EE Times, July 2012. https://www.eetimes.com/whatever-happened-to-evolvable-hardware/
“Evolvable Hardware,” Wikipedia. https://en.wikipedia.org/wiki/Evolvable_hardware
AI-Designed Chips
Mirhoseini, A. et al., “A graph placement methodology for fast chip design,” Nature, June 2021. [AlphaChip foundational paper] https://www.nature.com/articles/s41586-021-03544-w
Google DeepMind, “How AlphaChip Transformed Computer Chip Design,” September 2024. https://deepmind.google/blog/how-alphachip-transformed-computer-chip-design/
Wiggers, K., “Cognichip Wants AI to Design the Chips That Power AI, and Just Raised $60M to Try,” TechCrunch, April 1, 2026. https://techcrunch.com/2026/04/01/cognichip-wants-ai-to-design-the-chips-that-power-ai-and-just-raised-60m-to-try/
Williams, C., “Cadence Opens the Door to Chips Designed for AI by AI,” The Register, February 10, 2026. https://www.theregister.com/2026/02/10/cadences_agentic_chip_design_tool/
The Custom AI Chip Race
“The Custom AI Chip Race in 2026: Meta, Google, Amazon, and Microsoft vs. Nvidia,” Nerd Level Tech, March 2026. https://nerdleveltech.com/the-custom-ai-chip-race-2026-meta-google-amazon-microsoft-vs-nvidia
“US Reportedly Mulls Tariff Exemptions for Amazon, Google, Microsoft on TSMC-Made Chips,” TrendForce, February 10, 2026. https://www.trendforce.com/news/2026/02/10/news-us-reportedly-mulls-tariff-exemptions-for-amazon-google-microsoft-on-tsmc-made-chips/
“The Trillion-Dollar Race to Fragment the Nvidia Monopoly,” EE Times, December 2025. https://www.eetimes.com/the-trillion-dollar-race-to-fragment-the-nvidia-monopoly/
“Meta Doubles Down on Partnership with Broadcom on Custom AI Processors,” SiliconANGLE, April 14, 2026. https://siliconangle.com/2026/04/14/meta-doubles-partnership-broadcom-custom-ai-processors/
Google TPU vs. Nvidia GPU
Mirshahi, S., “GPU vs TPU: Understanding the Differences in AI Training and Inference,” Medium, November 2025. https://medium.com/@neurogenou/gpu-vs-tpu-understanding-the-differences-in-ai-training-and-inference-2e61e418c3a7
“TPU vs GPU: What’s the Difference in 2025?”, CloudOptimo, April 2025. https://www.cloudoptimo.com/blog/tpu-vs-gpu-what-is-the-difference-in-2025/
China and the Semiconductor Race
Seligman, D. et al., “China’s AI Chip Deficit: Why Huawei Can’t Catch Nvidia and US Export Controls Should Remain,” Council on Foreign Relations, December 2025. https://www.cfr.org/articles/chinas-ai-chip-deficit-why-huawei-cant-catch-nvidia-and-us-export-controls-should-remain
“China’s AI Chip Race: Tech Giants Challenge Nvidia,” IEEE Spectrum, December 2025. https://spectrum.ieee.org/china-ai-chip
“Why China Isn’t About to Leap Ahead of the West on Compute,” Epoch AI, July 2025. https://epochai.substack.com/p/why-china-isnt-about-to-leap-ahead
Faraday and the “Newborn Baby” Quote
James, F., “Michael Faraday: A Very Short Introduction,” Oxford University Press, 2010.
Cantor, G., “Michael Faraday: Sandemanian and Scientist,” Macmillan, 1991.
