Steelman analysis
Generated 2026-04-17T20:33:19.819995Z
Target intervention
Expand frontier-lab compute capacity (chips, datacenters, networking).
Operator tension
The uncomfortable case from inside your own frame is the consequentialist case against and the sovereignty case against, not the environmentalist one. You can dismiss water and land harms as consequentialist costs you're willing to pay for civilizational upside --- that is a coherent poker-brain move. What you cannot dismiss from inside BRAIN.md is this: you self-host Vaultwarden and Pi-hole because you believe substrate control is sovereignty, yet the target intervention pours capital into a substrate where root access belongs to TSMC, four hyperscalers, and Palantir. Your norm_operator_sovereignty and your IBKR position in the AI-disruption thesis point in opposite directions. The Palantir government revenue line (>$1B annualized, 40%+ YoY, Maven in production) is the specific fact to sit with: the marginal compute unit you are long on has a higher probability of being pointed at targeting than at protein folding, and 'dead money that either pays off or doesn't' is the frame you use to avoid noticing which branch you're actually funding.
Both sides cite
- AI capability is accelerating along compute, data, and algorithmic axes.
- Algorithmic progress roughly halves the compute required to reach a fixed language-model performance threshold every ~8 months, so algorithmic efficiency contributes comparably to raw hardware scaling in observed capability gains.
- Amortized hardware and energy cost of flagship training runs has grown ~2.4x annually; GPT-4-class runs cost on the order of $40M-$80M (2023) and the next generation crossed $100M.
- Over 90% of leading-edge (<10nm, effectively 100% of <5nm) logic fabrication capacity sits in Taiwan at TSMC; HBM memory for AI accelerators is ~95% produced by three Korean/US firms, with SK Hynix alone holding >50% share in 2024.
- Training compute for frontier AI models has grown roughly 4-5x per year from 2010 through 2024, corresponding to a doubling time of about 5-6 months.
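The growth figures cited above are worth cross-checking, since both sides lean on them; a minimal arithmetic sketch (Python; the function name is illustrative, not from any source):

```python
import math

def doubling_time_months(annual_factor: float) -> float:
    """Months for a quantity growing by `annual_factor` per year to double."""
    return 12 / math.log2(annual_factor)

# Hardware scaling: 4-5x/year training-compute growth.
# 5x/year -> ~5.2-month doubling; 4x/year -> 6.0-month doubling,
# consistent with the cited 5-6 month doubling time.
hw_fast = doubling_time_months(5.0)
hw_slow = doubling_time_months(4.0)

# Algorithmic progress: compute-to-threshold halves every ~8 months,
# i.e. an effective 2^(12/8) ~ 2.8x annual efficiency gain.
algo_annual = 2 ** (12 / 8)

# Effective capability input (hardware growth x algorithmic efficiency):
# roughly 11-14x per year.
effective_low = 4.0 * algo_annual
effective_high = 5.0 * algo_annual

# Cost scaling: ~2.4x/year from a $40M-$80M GPT-4-class baseline implies
# a next-generation run in the ~$96M-$192M range, consistent with ">$100M".
cost_low, cost_high = 40e6 * 2.4, 80e6 * 2.4
```

The compounding of the hardware and algorithmic curves is why both the FOR and AGAINST arguments can cite the same numbers: effective capability per dollar grows faster than either curve alone.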
Case FOR
Compute is the substrate. Frontier performance scales with it, training budgets are growing 2.4x a year, and the civilizational gains already on the board --- life expectancy doubling, child mortality halving, extreme poverty collapsing from 44% to 8.5% --- came from exactly this kind of substrate buildout compounding. Every quarter of unbuilt capacity is forgone capability. The burden of proof sits on the brake. Build the chips, build the datacenters, build the networking. Physics is the only legitimate stop.
- Frontier AI performance scales with compute and capex.
- Training compute for frontier AI models has grown roughly 4-5x per year from 2010 through 2024, corresponding to a doubling time of about 5-6 months.
- Amortized hardware and energy cost of flagship training runs has grown ~2.4x annually; GPT-4-class runs cost on the order of $40M-$80M (2023) and the next generation crossed $100M.
- AI capability is accelerating along compute, data, and algorithmic axes.
- Global life expectancy at birth rose from ~31 years in 1900 to ~73 years by the…
- Under-5 child mortality halved between 2000 and the early 2020s, from ~76 to ~3…
- The global extreme-poverty rate ($2.15/day 2017-PPP) fell from ~44% of world po…
The US lead is 6-18 months and compute is what preserves it. If responsible labs do not secure compute capacity, the marginal frontier run happens somewhere with worse safety culture and worse geopolitical alignment. Leading-edge fab is already a single-point-of-failure in Taiwan --- expanding compute capacity in jurisdictions under safety-conscious governance is the dominant move on expected outcomes. Ceding the substrate doesn't stop the race; it just picks the winner.
- The US currently leads China in frontier AI by roughly 6-18 months.
- Frontier AI performance scales with compute and capex.
- Training compute for frontier AI models has grown roughly 4-5x per year from 2010 through 2024, corresponding to a doubling time of about 5-6 months.
- Over 90% of leading-edge (<10nm, effectively 100% of <5nm) logic fabrication capacity sits in Taiwan at TSMC; HBM memory for AI accelerators is ~95% produced by three Korean/US firms, with SK Hynix alone holding >50% share in 2024.
- AI capability is accelerating along compute, data, and algorithmic axes.
Alignment research is compute-bound. Interpretability at frontier scale requires running, probing, and retraining frontier-scale models --- you cannot align what you cannot afford to instrument. Algorithmic progress halving compute requirements every ~8 months means unaligned actors will reach dangerous capability regardless; the relevant question is whether the labs doing alignment work have the compute headroom to stay ahead of the capability curve they are studying. Expanding frontier-lab compute is alignment infrastructure.
The capability set of the worst-off is bounded by what medicine, education, and institutional capacity can deliver to them. Mental and neurological disorders are 15-16% of global YLD. NCDs are 74% of deaths. DALY rates vary 3x across regions. None of these bend without tools that don't exist yet, and those tools are compute-gated. Expanding frontier compute is what makes protein-folding, diagnostic, and tutoring capability cheap enough to reach Sub-Saharan Africa rather than stopping at the OECD.
- Mental and neurological disorders are the leading cause of years-lived-with-dis…
- Non-communicable diseases (cardiovascular, cancer, chronic respiratory, diabete…
- Age-standardized DALY rates vary more than 3x across regions; the highest burde…
- Under-5 child mortality halved between 2000 and the early 2020s, from ~76 to ~3…
Stewardship of creation includes reducing the suffering of persons made in the image. A child dying of a preventable condition in a high-burden region is a failure of stewardship that more capability could address. Tool-use in service of human persons --- the Catholic and Islamic framing of technology as instrument --- is not only permitted but obligatory when the tool could reach the suffering. Compute that builds instruments serving persons is stewardship, not transgression.
A maxim of systematically under-provisioning the actors doing interpretability work, while algorithmic progress makes dangerous capability cheaper every 8 months for everyone else, cannot be universalized without contradiction. If every safety-conscious institution adopted 'do not expand compute,' the frontier moves to actors who reject the duty entirely. Willing the end (aligned frontier AI) requires willing the means (compute for the alignment work).
Leverage 0.85, enterprise-absorption friction 0.9 --- the market pays for this bet whether or not it reaches the suffering-reduction thesis. The downside is capex and public friction; the upside is the substrate on which every subsequent suffering-reduction intervention runs. Even weighted by the probability that compute gets pointed at tax software instead of protein folding, the optionality value dominates. It is dead money that either pays off civilizationally or doesn't, and the don't-build-it branch has no payoff at all.
Case AGAINST
Pouring capital into frontier compute while transmission grows 1% a year and 2,600 GW sits in interconnection queues is not acceleration --- it is pushing on a string. The binding constraint is electrons and permits, not chips. Compute capex without grid throughput produces stranded datacenters and political backlash that stalls the whole stack. The +EV e/acc move is intv_grid (friction_regulation 0.3), not intv_compute. Expanding compute first accelerates the bottleneck.
- Electricity generation and transmission are near-term bottlenecks for datacente…
- US high-voltage transmission buildout has slowed to ~1% annual circuit-mile gro…
- As of end-2023, roughly 2,600 GW of generation and storage capacity sat in US i…
- Over 90% of leading-edge (<10nm, effectively 100% of <5nm) logic fabrication capacity sits in Taiwan at TSMC; HBM memory for AI accelerators is ~95% produced by three Korean/US firms, with SK Hynix alone holding >50% share in 2024.
Compute expansion in the current allocation pattern flows through four hyperscalers into JWCC and Palantir's mission-software layer. Maven is in combatant-command production, and Palantir's government revenue is north of $1B annualized, growing 40%+ YoY. The marginal compute unit has a higher probability of being pointed at targeting than at suffering reduction. On expected outcomes, subsidizing the substrate without changing the allocation amplifies the current misallocation at scale.
- US intelligence and defense cloud workloads are concentrated across four hypers…
- No other pure-play US defense-AI software vendor has matched Palantir's contrac…
- Palantir's US Government segment revenue exceeded $1B annualized by end-2024, w…
- Project Maven (DoD computer-vision targeting) remains in production use with co…
- DoD obligated AI-related contract spending rose substantially 2022-2025, driven…
Training compute 4-5x per year, flagship cost 2.4x per year, algorithmic efficiency halving compute-to-threshold every 8 months. Expanding compute capacity pushes capability further ahead of interpretability, which is exactly the failure mode. Build-only-if-safe is not a slogan --- it is a gate, and the gate is currently open. The honest alignment-maximalist position is that compute expansion is the harm, and intv_alignment_research (leverage 0.6, no grid friction) is where marginal dollars belong.
- Training compute for frontier AI models has grown roughly 4-5x per year from 2010 through 2024, corresponding to a doubling time of about 5-6 months.
- Amortized hardware and energy cost of flagship training runs has grown ~2.4x annually; GPT-4-class runs cost on the order of $40M-$80M (2023) and the next generation crossed $100M.
- Algorithmic progress roughly halves the compute required to reach a fixed language-model performance threshold every ~8 months, so algorithmic efficiency contributes comparably to raw hardware scaling in observed capability gains.
- AI capability is accelerating along compute, data, and algorithmic axes.
The capability set of a community in a datacenter-siting basin is degraded by aquifer stress, embedded thermoelectric water draw, and 20% YoY hyperscaler water growth. The capability set of a Bayan Obo or DRC mine-site worker is degraded directly. These are not substitutable losses --- no amount of downstream model usefulness restores a watershed or a mine-site worker's lung. The harm is concentrated on people whose capability sets are already narrow, for the benefit of those whose sets are already wide.
- Hyperscale and AI-training datacenters withdraw millions of gallons per day per…
- Thermoelectric power generation (coal, gas, nuclear) remains the largest catego…
- Microsoft and Google's self-reported 2023 water consumption rose roughly 20% ye…
- Rare-earth extraction concentrates ecological and labor-welfare harm at mine si…
- China controls more than 80% of global rare-earth refining capacity and majorit…
Watersheds and intact land carry moral standing that does not resolve into downstream utility. Draining an aquifer to train a model is not made acceptable by the model. Hyperscaler water consumption is growing 20% YoY tied directly to AI workloads; thermoelectric embedded water compounds it; rare-earth extraction inflicts first-order harm on ecosystems and workers far from the datacenter. Stewardship is violated at the substrate regardless of what runs on top.
- Hyperscale and AI-training datacenters withdraw millions of gallons per day per…
- Microsoft and Google's self-reported 2023 water consumption rose roughly 20% ye…
- Thermoelectric power generation (coal, gas, nuclear) remains the largest catego…
- Rare-earth extraction concentrates ecological and labor-welfare harm at mine si…
Expanding compute under the current structure forecloses future optionality: TSMC-concentrated fab, four-hyperscaler cloud, Palantir-dominant mission layer, China-concentrated refining. Future persons inherit a stack they cannot renegotiate. The categorical duty to leave the option set non-foreclosed fails --- not because compute is wrong in itself, but because this compute, built through these vendors, locks in governance structures future generations cannot exit.
- Over 90% of leading-edge (<10nm, effectively 100% of <5nm) logic fabrication capacity sits in Taiwan at TSMC; HBM memory for AI accelerators is ~95% produced by three Korean/US firms, with SK Hynix alone holding >50% share in 2024.
- US intelligence and defense cloud workloads are concentrated across four hypers…
- No other pure-play US defense-AI software vendor has matched Palantir's contrac…
- China controls more than 80% of global rare-earth refining capacity and majorit…
Sovereignty requires substrate control. Expanding frontier compute at current concentration --- TSMC for silicon, four hyperscalers for cloud, Palantir for mission software --- is the opposite of expanding individual capacity. It centralizes the stack on which everything else runs. A sovereign-individual frame cannot endorse pouring capital into a substrate configured so that root access belongs to four companies and one foundry.
Contested claims
DoD obligated AI-related contract spending rose substantially 2022-2025, driven by JWCC, Project Maven, and CDAO-managed pilots; precise totals are hampered by inconsistent AI tagging on contract line items.
- Artificial Intelligence and National Security (CRS Report R45178) [modeled_projection, weight 0.80]
  locator: AI funding appendix; DoD budget rollups
- USASpending.gov federal contract awards [direct_measurement, weight 0.85]
  locator: DoD AI-tagged obligations 2022-2025
- The Intercept coverage of Palantir contracts and DoD AI programs [journalistic_report, weight 0.55]
  locator: Investigative pieces on DoD AI pilot failures and miscategorization
- Artificial Intelligence: DoD Needs Department-Wide Guidance to Inform Acquisitions (GAO-22-105834 and follow-ups) [direct_measurement, weight 0.75]
  locator: Summary findings on acquisition-pace gaps
No other pure-play US defense-AI software vendor has matched Palantir's contract backlog or combatant-command integration depth; cloud-provider primes (AWS, Microsoft, Google, Oracle via JWCC) supply infrastructure, not mission-software integration.
- [weight 0.75]
  locator: Vendor-landscape discussion
- Palantir Technologies Inc. Form 10-K Annual Report (FY 2024) [primary_testimony, weight 0.60]
  locator: Competition section, Item 1
- The Intercept coverage of Palantir contracts and DoD AI programs [journalistic_report, weight 0.50]
  locator: Coverage framing Palantir as over-sold relative to internal-tool alternatives
Credible 2030 forecasts for US datacenter share of electricity consumption diverge by roughly 2x --- from ~4.6% (IEA/EPRI conservative) to ~9% (Goldman Sachs, EPRI high scenario) --- reflecting genuine uncertainty, not measurement error.
- Powering Intelligence: Analyzing Artificial Intelligence and Data Center Energy Consumption [modeled_projection, weight 0.85]
  locator: Scenario table: 4.6%-9.1% by 2030
- 2025/2026 Base Residual Auction Results [direct_measurement, weight 0.75]
  locator: 2025/2026 BRA clearing results
- Generational growth: AI, data centers and the coming US power demand surge [modeled_projection, weight 0.70]
  locator: Executive summary; 160% growth figure
- Electricity 2024 --- Analysis and Forecast to 2026 [modeled_projection, weight 0.80]
  locator: Analysing Electricity Demand; data centres chapter
Frontier-lab and big-tech employees have episodically resisted DoD contracts (Google Maven 2018, Microsoft IVAS 2019, Microsoft/OpenAI IDF deployments 2024), producing temporary pauses but no sustained shift in vendor willingness.
- Google employee open letter opposing Project Maven [primary_testimony, weight 0.90]
  locator: Open letter and subsequent Google announcement
- Microsoft employee open letter opposing HoloLens/IVAS contract [primary_testimony, weight 0.85]
  locator: Employee open letter, February 2019
- Coverage of OpenAI and Microsoft AI use by Israeli military, 2024 [journalistic_report, weight 0.75]
  locator: OpenAI military-use policy-change coverage, 2024
- Alex Karp public interviews and op-eds, 2023-2024 [primary_testimony, weight 0.50]
  locator: Karp interviews dismissing employee resistance as inconsequential