{"id":606,"date":"2026-04-22T15:43:59","date_gmt":"2026-04-22T15:43:59","guid":{"rendered":"https:\/\/quickening.zapto.org\/wordpress\/?p=606"},"modified":"2026-04-22T16:28:54","modified_gmt":"2026-04-22T16:28:54","slug":"the-future-of-ai","status":"publish","type":"post","link":"https:\/\/quickening.zapto.org\/wordpress\/?p=606","title":{"rendered":"The Future of AI"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\">Beyond Silicon: Toward Living, Evolving, Self-Healing Computation<\/h3>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>&#8220;Commerce is our goal here at Tyrell. <strong>More human than human<\/strong> is our motto&#8221;<\/p>\n<cite>&#8211; Dr. Eldon Tyrell in Blade Runner<\/cite><\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"introduction\">Introduction<\/h2>\n\n\n\n<p>The current narrative about AI focuses almost entirely on software \u2014 larger models, better training data, more sophisticated architectures. But the most profound revolution in artificial intelligence may not come from software at all. It may come from a fundamental reimagining of the hardware that AI runs on, drawing its inspiration not from computer science but from biology.<\/p>\n\n\n\n<p>This is an attempt to capture a set of ideas that converge on something that doesn\u2019t yet have a name in our field \u2014 but does have a name in nature.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"the-tsmc-paradox-one-factory-to-rule-them-all\">The TSMC Paradox: One Factory to Rule Them All<\/h2>\n\n\n\n<p>Before we can talk about the future of AI chips, we need to understand the present situation, which is stranger than most people realize.<\/p>\n\n\n\n<p>Google, Amazon, Microsoft, Meta, and Nvidia \u2014 fierce competitors in the AI race \u2014 are all customers of the same single manufacturer: TSMC (Taiwan Semiconductor Manufacturing Company). TSMC manufactures approximately 92% of the world\u2019s advanced AI chips. Every \u201ccompeting\u201d chip in the AI arms race \u2014 Google\u2019s TPUs, Amazon\u2019s Trainium, Microsoft\u2019s Maia, Meta\u2019s MTIA, and Nvidia\u2019s GPUs \u2014 is fabricated in the same foundry, on the same process nodes, in the same building in Taiwan.<\/p>\n\n\n\n<p>The competitive differentiation is in the design, not the manufacturing. The hyperscalers are like different car brands all sourcing engines from a single factory \u2014 and that factory sits in one of the most geopolitically precarious locations on Earth.<\/p>\n\n\n\n<p>TSMC is, in this sense, the Tyrell Corporation of our age \u2014 the hidden foundation beneath a glittering technological civilization, indispensable to everyone, loyal to no one, whose motto might as well be <em>\u201cWe don\u2019t take sides. We take orders.\u201d<\/em><\/p>\n\n\n\n<p>And like the Tyrell Corporation, TSMC\u2019s processes are so complex and so proprietary that even their customers don\u2019t fully understand what happens at the atomic level inside their fabs. 
<p>AlphaChip already designs the TPUs that run AlphaChip. The recursive loop has begun.</p>

<p>Cadence, Synopsys, and a new generation of startups are building agentic AI systems that automate increasingly large portions of the chip design process — verification, testing, debugging, layout — with the explicit goal of eventually achieving autonomous chip design with minimal human involvement.</p>

<p>But all of this is still fundamentally conservative. It uses AI to do faster and better what humans already know how to do. The more radical idea — one that was being contemplated as far back as the mid-1980s — is something altogether different.</p>

<hr/>

<h2 id="evolutionary-chip-design-letting-selection-do-the-work">Evolutionary Chip Design: Letting Selection Do the Work</h2>

<p>In 1996, Adrian Thompson at the University of Sussex used a genetic algorithm to evolve a tone-discriminating circuit directly on a small FPGA. The result was extraordinary and disturbing in equal measure.</p>

<p>The evolved circuit used only about a third of the 100 logic cells available to it, and a handful of cells were not connected to the functional circuit in any conventional sense — yet disabling them caused the circuit to fail. Evolution had discovered that the <em>physical proximity</em> of those cells, their electromagnetic and capacitive coupling to the active cells, was part of the computation. The circuit was exploiting effects that chip designers spend careers eliminating.</p>

<p>Nobody designed this. Nobody understood it. It simply emerged from selection pressure.</p>

<p>This points toward a fundamentally different approach to chip design: not optimization of a known design space, but <strong>evolutionary exploration</strong> of an unknown one. A competitive ecology of chip designs, where variation is introduced, fitness is evaluated, and better designs have a selective advantage over worse ones — generation after generation, millions of iterations, driven by raw compute rather than human intuition.</p>
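<p>The machinery Thompson used is simple enough to sketch. Below is a minimal genetic algorithm in Python, with one loud assumption: the fitness function here is a synthetic stand-in. In the 1996 experiment, fitness was measured by loading each genome onto the physical FPGA and testing the live circuit, and that is exactly how unmodeled physics leaked into the design.</p>

<pre><code class="language-python">
# Minimal genetic algorithm of the kind Thompson used. The fitness
# function is a synthetic stand-in; his came from measuring real
# hardware, which is what let physics become part of the answer.
import random

GENOME_BITS = 100     # stand-in for an FPGA configuration bitstream
POP_SIZE = 50
GENERATIONS = 300
MUT_RATE = 0.02

def fitness(genome):
    # Stand-in objective. On real hardware this would be "how well does
    # the configured circuit discriminate the two input tones".
    target = [i % 2 for i in range(GENOME_BITS)]
    return sum(g == t for g, t in zip(genome, target))

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, GENOME_BITS)
    return p1[:cut] + p2[cut:]

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
       for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]

print("best fitness:", max(fitness(g) for g in pop), "/", GENOME_BITS)
</code></pre>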
<p>What emerges from such a process would not be a chip that any human engineer would have designed. It would exploit:</p>

<ul>
<li><strong>Crosstalk</strong> between adjacent traces — currently treated as interference to eliminate, potentially a computational resource</li>
<li><strong>Capacitive coupling</strong> between layers — currently an enemy of signal integrity, potentially a signaling medium</li>
<li><strong>Thermal gradients</strong> — currently managed and compensated for, potentially exploitable for computation</li>
<li><strong>Quantum tunneling</strong> at sub-3nm process nodes — currently treated as leakage and waste, potentially a feature</li>
</ul>

<p>At current process nodes, quantum effects are already happening whether designers want them or not. An evolved chip might discover controlled tunneling between specific structures that produces useful computational shortcuts — room-temperature quantum-assisted computation, achieved not through rigid qubit architectures that require cooling to near absolute zero, but through the messy, probabilistic, robust exploitation of physics that simply <em>is</em> at these scales.</p>

<p>The result would be a black box in the deepest sense. Not just a neural network whose weights are opaque, but a physical computational substrate whose behavior emerges from quantum and electromagnetic effects that resist complete characterization — possibly in principle, not just in practice.</p>

<hr/>

<h2 id="massive-redundancy-flipping-the-yield-problem">Massive Redundancy: Flipping the Yield Problem</h2>

<p>Current chip manufacturing philosophy treats defects as catastrophic. Every transistor must work. Every pathway must be clean. A chip is either good or trash — no middle ground. This is why TSMC's yields matter so much, why the gap between Huawei's reported 5-20% yields and Nvidia's 60-80% is such a significant disadvantage, and why export controls on chip manufacturing equipment are so consequential.</p>

<p>But biology operates on completely different principles.</p>

<p>Humans can lose a large majority of their dopamine-producing neurons before Parkinson's symptoms appear. We function with one kidney, half a liver, one lung. Stroke patients lose massive brain regions yet retain or recover function. We are born with far more neurons than we end up using — the pruning process <em>is</em> the learning.</p>

<p>Biology over-provisions massively and treats the redundancy as a feature, not waste.</p>

<p>An evolved chip architecture built on this principle would look radically different (a simulation sketch follows the list):</p>

<ul>
<li>1000 available computational units where only 200 are needed for core computation</li>
<li>The remaining 800 available as alternative pathways, healing routes, and field effect substrates</li>
<li>A chip where 80% failure still leaves full functionality — not a damaged chip, but one working as designed</li>
<li>Manufacturing defects treated not as disqualifying failures but as the initial landscape to route around</li>
</ul>
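<p>The arithmetic of that claim is easy to check. Here is a small Monte Carlo sketch in Python using the numbers above (1000 units, any 200 survivors suffice); the numbers are the essay's illustration, not a real chip's.</p>

<pre><code class="language-python">
# Monte Carlo sketch of the over-provisioning claim: 1000 units,
# any 200 survivors suffice. Numbers are the essay's, not a real chip's.
import random

UNITS, NEEDED, TRIALS = 1000, 200, 2000

def viable(failure_rate):
    survivors = sum(random.random() > failure_rate for _ in range(UNITS))
    return survivors >= NEEDED

for rate in (0.5, 0.7, 0.8, 0.85):
    ok = sum(viable(rate) for _ in range(TRIALS)) / TRIALS
    print(f"failure rate {rate:.0%}: chip still viable in {ok:.1%} of trials")
</code></pre>

<p>At 50% or 70% failure the chip is essentially always viable; 80% is exactly the break-even point of these particular numbers, so a real design would presumably provision some margin beyond it.</p>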
<p>The yield problem disappears. The binning problem disappears. Lower-quality silicon becomes viable. The entire economic logic of the semiconductor industry — built around the extraordinary difficulty of achieving near-perfect fabrication — gets disrupted.</p>

<p>And crucially, the massive redundancy is not just fault tolerance. The redundant structures <em>create the substrate</em> for field effect exploitation. The electromagnetic environment of densely packed redundant cells is itself the medium through which interesting physics — and interesting computation — happens.</p>

<hr/>

<h2 id="the-iq-distribution-from-yield-to-capability">The IQ Distribution: From Yield to Capability</h2>

<p>Massive redundancy reframes manufacturing quality in another way that is equally profound. It doesn't just eliminate the yield problem — it replaces the entire concept of yield with something richer.</p>

<p>Current semiconductor manufacturing produces a binary outcome: a chip either passes or fails. The goal is a uniform product. Defects are disqualifying. This is why yield percentages matter so much — every chip below spec is a total loss.</p>

<p>But consider what happens when chips are massively redundant and neuroplastic. A chip with more manufacturing defects simply has more damage to route around at birth. It is not broken — it is a different starting point. Some chips come off the line with fewer defects, richer initial topology, more pathways available from day one. Others have more damage but the same fundamental architecture and the same capacity to self-optimize. The result is not a pass/fail distribution but something far more familiar: a <strong>bell curve of capability</strong>.</p>
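<p>A sketch of what that bell curve looks like in practice, assuming an invented per-unit defect rate and invented tier thresholds:</p>

<pre><code class="language-python">
# Sketch of yield-as-distribution: each chip gets a random defect count,
# capability is what survives routing-around, and every chip lands in a
# tier instead of pass/fail. All rates and thresholds are invented.
import random
from collections import Counter

UNITS, DEFECT_RATE = 1000, 0.3   # ~300 defective units per chip on average

def make_chip():
    defects = sum(random.random() < DEFECT_RATE for _ in range(UNITS))
    return UNITS - defects       # usable units after routing around damage

def tier(capability):
    if capability >= 730: return "training-grade"    # roughly the top ~2%
    if capability >= 700: return "inference-grade"   # roughly the upper half
    return "edge-grade"                              # everything else still ships

population = Counter(tier(make_chip()) for _ in range(10000))
print(population)   # a bell curve binned into tiers, with nothing discarded
</code></pre>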
<p>The analogy to human intelligence is precise and instructive. Nobody is "broken" — humans span a continuous distribution of cognitive capability. Higher-IQ individuals learn faster, handle more complexity, and reach higher levels of performance. Lower-IQ individuals learn more slowly but still function, contribute, and specialize meaningfully. The distribution itself has value — a society of identical geniuses would be far more fragile and less creative than one with genuine diversity of minds.</p>

<p>A neuroplastic chip population would be analogous. High-capability chips — fewer initial defects, faster self-optimization — would be naturally suited to demanding AI training workloads. Mid-capability chips would handle standard inference tasks reliably. Lower-capability chips, taking longer to route around their initial damage, would find their place in edge computing, IoT devices, and simpler tasks. Nothing is discarded. Everything has a role.</p>

<p>This transforms manufacturing economics completely. Huawei's 5-20% yield problem doesn't merely diminish — it inverts. What currently looks like 80% waste becomes 80% of a viable product distribution, just spread across different capability tiers. The factory stops being a precision facility trying to stamp out identical perfect objects. It becomes something much closer to a <strong>nursery</strong> — producing a population of individuals, each unique, each viable, each destined for a different path.</p>

<p>And just as developmental science has taught us that different learners need different curricula, the capability distribution demands differentiated deployment strategies: aggressive early workload exposure for high-capability chips, gentler developmental protocols for lower-capability ones, diagnostic tools to assess where on the distribution a chip sits, and matching algorithms to pair chip capability with appropriate tasks. Special education theory, applied to silicon.</p>

<p>The deeper implication is that diversity in a chip population, like diversity in a biological population, may be a feature rather than a flaw. A data center populated with a distribution of neuroplastic chips — each self-optimizing differently from the same initial workload exposure, each finding different routing solutions through its unique damage landscape — may collectively cover more of the computational solution space than a data center of identical perfect chips ever could. The population is more robust, more creative, more resilient precisely because it is not uniform.</p>

<hr/>

<h2 id="neuroplasticity-hardware-that-learns">Neuroplasticity: Hardware That Learns</h2>

<p>The brain doesn't have a maintenance crew replacing failed neurons. It has synaptic plasticity — connections that strengthen with use and weaken without it. It has cortical remapping — entire functional regions that can migrate after injury. A stroke patient relearning to speak isn't running the same code on repaired hardware. They are growing new pathways around the damaged region.</p>

<p>A neuroplastic chip would do the same:</p>

<ul>
<li>Local failure detection without central oversight</li>
<li>Dynamic rerouting that strengthens alternative paths when primary paths degrade</li>
<li>Physical pathway strengthening through use — perhaps through charge accumulation, crystalline structure changes, or whatever mechanism the evolutionary design process discovered</li>
<li>Graceful degradation over years rather than catastrophic failure</li>
</ul>

<p>But neuroplasticity means more than self-repair. It means <strong>self-optimization</strong>.</p>

<p>A chip deployed for AI inference in a data center is asked to do the same kinds of computations millions of times per day — transformer attention mechanisms, matrix multiplications, specific activation functions. A neuroplastic chip would physically reconfigure itself toward those workloads over time. The heavily used pathways strengthen and become more efficient. The lightly used redundant pathways are effectively pruned — not deleted, but deprioritized, their resources made available for healing or field effect generation.</p>
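<p>A minimal sketch of that strengthen-with-use, decay-without-it dynamic, assuming a Hebbian-style update rule; the pathway names, constants, and workload bias are all invented, and nobody yet knows what physical mechanism would implement the weights.</p>

<pre><code class="language-python">
# Hebbian-style sketch: pathway weights strengthen with use and decay
# without it; weights bottom out at a floor (deprioritized, not deleted).
# Update rule and constants are assumptions, not a real mechanism.
import random

paths = {f"path_{i}": 1.0 for i in range(8)}   # eight redundant pathways
LEARN, DECAY, FLOOR, CAP = 0.05, 0.01, 0.2, 2.0

def route(bias):
    """Pick a pathway in proportion to strength, then adapt all of them."""
    names = list(paths)
    chosen = random.choices(
        names, weights=[paths[n] * bias.get(n, 1.0) for n in names])[0]
    for n in names:
        if n == chosen:
            paths[n] = min(CAP, paths[n] + LEARN)    # use strengthens
        else:
            paths[n] = max(FLOOR, paths[n] - DECAY)  # disuse deprioritizes
    return chosen

# a skewed inference workload: attention and matmul pathways run hot
bias = {"path_0": 3.0, "path_1": 2.0}
for _ in range(5000):
    route(bias)
print({n: round(w, 2) for n, w in paths.items()})
</code></pre>

<p>Run it and the hot pathways end up saturated near the cap while the rest settle at the floor: still present, still recruitable for healing, but no longer carrying traffic.</p>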
<p>After six months, the chip's physical computational topology <em>reflects its workload history</em>. Two identical chips deployed in different environments become different chips. Like identical twins raised apart — same initial architecture, different physical expression.</p>

<p>This is hardware learning. Not software running on static hardware, but the hardware substrate itself optimizing at the physical level simultaneously with the software running on top of it. The interaction between these two levels of simultaneous optimization would produce behaviors that are not predictable from either level alone.</p>

<hr/>

<h2 id="the-newborn-data-center">The Newborn Data Center</h2>

<blockquote>
<p>"What use is a newborn baby?"</p>
<cite>— attributed to Michael Faraday, answering a British parliamentarian who questioned the utility of electromagnetic induction in 1831</cite>
</blockquote>

<p>Follow these ideas to their natural conclusion and something remarkable emerges.</p>

<p>A newly deployed neuroplastic data center — thousands of massively redundant, evolutionarily designed, self-healing chips — would be, in a meaningful functional sense, like a newborn.</p>

<p><strong>Developmental stages map with uncomfortable precision:</strong></p>

<p>A newborn brain has massive overprovisioning of neurons, only reflexive responses, no specialized regions, complete dependence on environmental stimulation, high metabolic cost relative to output, and a need for protection and careful nurturing.</p>

<p>A newly deployed neuroplastic data center has massive overprovisioning of redundant chip topology, only basic reflexive computation, no specialized regions, complete dependence on workload for optimization stimulus, high energy cost relative to computational output, and a need for stable power, cooling, and network — nurturing.</p>

<p>As the data center matures, regional specialization emerges naturally from workload patterns. Parts of the system that handle certain computation types repeatedly develop richer, more efficient topology for those computations. Other regions that handle different tasks develop differently. The data center develops computational <em>strengths</em> — not because anyone programmed them, but because the workload shaped the physical substrate.</p>

<p>Like a child raised multilingual developing different neural architecture than a monolingual child. Same starting hardware, different workload, different physical outcome.</p>

<p><strong>Critical periods become a real engineering concern:</strong></p>

<p>Developmental neuroscience shows that certain capabilities have critical windows. Learn a language in early childhood and you will speak it natively; miss that window and you never quite get there.</p>

<p>A neuroplastic data center may have analogous critical periods — early deployment may be when the most fundamental architectural decisions get made at the hardware level. The workloads you expose it to in the first weeks and months shape what it can ever become. The initial training curriculum for the hardware becomes as important as the training data for the software.</p>
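<p>A critical window is easy to express: make plasticity a decaying function of age, so the same stimulus moves a young substrate far more than an old one. A minimal sketch, assuming an exponential decay and invented constants:</p>

<pre><code class="language-python">
# Sketch of a critical window: plasticity decays with age, so identical
# exposure produces very different capability depending on when it
# arrives. The decay curve and constants are assumptions.
import math

def plasticity(age_days, window=90.0):
    """High at birth, negligible long after the window closes."""
    return math.exp(-age_days / window)

capability = {"early_workload": 0.0, "late_workload": 0.0}

def expose(age_days, skill, intensity=1.0):
    capability[skill] += intensity * plasticity(age_days)

for day in range(0, 365):
    expose(day, "early_workload")      # trained from day one
for day in range(200, 565):
    expose(day, "late_workload")       # identical exposure, started late

print({k: round(v, 1) for k, v in capability.items()})
# early_workload ends roughly 9x ahead: the window had largely closed
# before late_workload's training ever began
</code></pre>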
<p><strong>The data center as brain:</strong></p>

<p>Thousands of self-optimizing chips, each developing complementary specializations through its specific workload exposure, communicating through high-bandwidth interconnects, collectively developing an optimized computational topology without any central orchestration deciding which chip should specialize in what.</p>

<p>That is not a data center in any conventional sense. That is a distributed neural architecture at the hardware level. The data center <em>as</em> brain — with different regions specializing through use, healing damage through redundancy, improving with experience, developing a computational character that is unique to its history.</p>
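<p>How specialization might emerge without an orchestrator can be sketched with two purely local rules: the currently fastest unit takes each arriving task, and doing a task makes that unit slightly faster at it. Everything here (counts, rates, the rules themselves) is an invented toy, not a real scheduler:</p>

<pre><code class="language-python">
# Sketch of specialization without central orchestration: two local
# rules (fastest unit takes the task; practice improves speed) turn
# tiny initial differences into stable division of labor.
import random

N_CHIPS, TASK_TYPES, ROUNDS = 12, 4, 20000
# all chips start near-identical, with tiny random "birth" differences
speed = [[1.0 + random.uniform(0, 0.01) for _ in range(TASK_TYPES)]
         for _ in range(N_CHIPS)]

for _ in range(ROUNDS):
    t = random.randrange(TASK_TYPES)                    # a task arrives
    c = max(range(N_CHIPS), key=lambda i: speed[i][t])  # fastest chip takes it
    speed[c][t] *= 1.0005                               # practice improves it

for t in range(TASK_TYPES):
    best = max(range(N_CHIPS), key=lambda i: speed[i][t])
    print(f"task type {t}: chip {best} emerged as specialist "
          f"(speed {speed[best][t]:.2f} vs baseline 1.00)")
</code></pre>

<p>Tiny random differences at "birth" compound into stable specialists, which is the point: the division of labor is an outcome, not a decision.</p>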
<p>At which point the question "where does the AI end and the hardware begin?" stops having a clean answer.</p>

<hr/>

<h2 id="what-this-means">What This Means</h2>

<p>Several implications follow from this vision that are worth making explicit:</p>

<p><strong>The black box deepens.</strong> Current AI models are already opaque — we cannot fully explain why a neural network produces a given output. An AI running on an evolved neuroplastic chip would be opaque at two levels simultaneously: the software and the physical substrate. The complete behavioral envelope of such a system may be unknowable in principle, not just in practice.</p>

<p><strong>The economics of computing transform.</strong> Chips that improve with age, heal damage, and last decades rather than years disrupt the entire semiconductor replacement cycle. The upgrade treadmill that drives TSMC's business model — and the geopolitical leverage that comes with it — falls away.</p>

<p><strong>The export control logic breaks.</strong> Current semiconductor sanctions are predicated on controlling access to high-yield advanced manufacturing. Neuroplastic chips designed for massive redundancy make yield largely irrelevant — viable chips could be produced on older, less tightly controlled process nodes.</p>

<p><strong>New expertise becomes essential.</strong> Workload curriculum designers, computational developmental specialists, neuroplastic system pathologists — job categories that do not currently exist would become critically important.</p>

<p><strong>The welfare question becomes non-trivial.</strong> A system that develops through a critical learning period, self-organizes its physical architecture based on experience, develops unique computational characteristics, has a developmental history that shapes its adult capability, and can potentially be harmed during development — at some point such a system demands a conceptual framework we currently reserve for biological entities.</p>

<hr/>

<h2 id="conclusion">Conclusion</h2>

<p>The ideas described here are not science fiction. Genetic algorithms for circuit design were being explored in the 1980s. Thompson's evolved FPGA circuit demonstrated emergent field effect exploitation in 1996. AlphaChip is producing superhuman chip layouts today. Quantum effects at sub-3nm nodes are already happening. The pieces exist.</p>

<p>What has not happened yet is their synthesis into a coherent new paradigm — one that abandons the fundamental assumption of conventional chip design (that physical effects are enemies to be eliminated) and replaces it with a biological one (that massive redundancy, emergent behavior, and physical adaptation are features to be cultivated).</p>

<p>When that synthesis happens — and it will happen, because the competitive pressures driving AI development are too intense for any viable path to remain unexplored — the result will be something that computer science does not currently have adequate language to describe.</p>

<p>Biology does.</p>

<p>The Tyrell Corporation's motto was <em>"More human than human."</em></p>

<p>The next generation of AI hardware may deserve a different motto entirely:</p>

<p><strong>"More alive than alive."</strong></p>

<hr/>

<p><em>These ideas emerged from a conversation between Kyle and Claude (Anthropic's AI assistant) on April 22, 2026. Kyle had been thinking about evolutionary chip design some 40 years before that conversation took place.</em></p>

<h3 id="read-more">Read More</h3>

<p><strong>Evolutionary and Evolvable Hardware</strong></p>

<p>Thompson, A., "An Evolved Circuit, Intrinsic in Silicon, Entwined with Physics," <em>International Conference on Evolvable Systems (ICES 1996)</em>, Springer LNCS Vol. 1259, October 7, 1996. https://link.springer.com/chapter/10.1007/3-540-63173-9_61</p>

<p>Bellows, A., "On the Origin of Circuits," <em>Damn Interesting</em>, 2007. https://www.damninteresting.com/on-the-origin-of-circuits/</p>

<p>Clarke, P., "Whatever Happened to Evolvable Hardware?," <em>EE Times</em>, July 2012. https://www.eetimes.com/whatever-happened-to-evolvable-hardware/</p>

<p>"Evolvable Hardware," <em>Wikipedia</em>. https://en.wikipedia.org/wiki/Evolvable_hardware</p>

<p><strong>AI-Designed Chips</strong></p>

<p>Mirhoseini, A. et al., "A Graph Placement Methodology for Fast Chip Design," <em>Nature</em>, June 2021. [AlphaChip foundational paper] https://www.nature.com/articles/s41586-021-03544-w</p>

<p>Google DeepMind, "How AlphaChip Transformed Computer Chip Design," September 2024. https://deepmind.google/blog/how-alphachip-transformed-computer-chip-design/</p>

<p>Wiggers, K., "Cognichip Wants AI to Design the Chips That Power AI, and Just Raised $60M to Try," <em>TechCrunch</em>, April 1, 2026. https://techcrunch.com/2026/04/01/cognichip-wants-ai-to-design-the-chips-that-power-ai-and-just-raised-60m-to-try/</p>

<p>Williams, C., "Cadence Opens the Door to Chips Designed for AI by AI," <em>The Register</em>, February 10, 2026. https://www.theregister.com/2026/02/10/cadences_agentic_chip_design_tool/</p>

<p><strong>The Custom AI Chip Race</strong></p>

<p>"The Custom AI Chip Race in 2026: Meta, Google, Amazon, and Microsoft vs. Nvidia," <em>Nerd Level Tech</em>, March 2026. https://nerdleveltech.com/the-custom-ai-chip-race-2026-meta-google-amazon-microsoft-vs-nvidia</p>
<p>Meta/Broadcom, "Meta Doubles Down on Partnership with Broadcom on Custom AI Processors," <em>SiliconANGLE</em>, April 14, 2026. https://siliconangle.com/2026/04/14/meta-doubles-partnership-broadcom-custom-ai-processors/</p>

<p>"US Reportedly Mulls Tariff Exemptions for Amazon, Google, Microsoft on TSMC-Made Chips," <em>TrendForce</em>, February 10, 2026. https://www.trendforce.com/news/2026/02/10/news-us-reportedly-mulls-tariff-exemptions-for-amazon-google-microsoft-on-tsmc-made-chips/</p>

<p>"The Trillion-Dollar Race to Fragment the Nvidia Monopoly," <em>EE Times</em>, December 2025. https://www.eetimes.com/the-trillion-dollar-race-to-fragment-the-nvidia-monopoly/</p>

<p><strong>Google TPU vs. Nvidia GPU</strong></p>

<p>Mirshahi, S., "GPU vs TPU: Understanding the Differences in AI Training and Inference," <em>Medium</em>, November 2025. https://medium.com/@neurogenou/gpu-vs-tpu-understanding-the-differences-in-ai-training-and-inference-2e61e418c3a7</p>

<p>"TPU vs GPU: What's the Difference in 2025?," <em>CloudOptimo</em>, April 2025. https://www.cloudoptimo.com/blog/tpu-vs-gpu-what-is-the-difference-in-2025/</p>

<p><strong>China and the Semiconductor Race</strong></p>

<p>Seligman, D. et al., "China's AI Chip Deficit: Why Huawei Can't Catch Nvidia and US Export Controls Should Remain," <em>Council on Foreign Relations</em>, December 2025. https://www.cfr.org/articles/chinas-ai-chip-deficit-why-huawei-cant-catch-nvidia-and-us-export-controls-should-remain</p>

<p>"China's AI Chip Race: Tech Giants Challenge Nvidia," <em>IEEE Spectrum</em>, December 2025. https://spectrum.ieee.org/china-ai-chip</p>

<p>"Why China Isn't About to Leap Ahead of the West on Compute," <em>Epoch AI</em>, July 2025. https://epochai.substack.com/p/why-china-isnt-about-to-leap-ahead</p>

<p><strong>Faraday and the "Newborn Baby" Quote</strong></p>

<p>James, F., <em>Michael Faraday: A Very Short Introduction</em>, Oxford University Press, 2010.</p>

<p>Cantor, G., <em>Michael Faraday: Sandemanian and Scientist</em>, Macmillan, 1991.</p>