THE COMPETENCE PREREQUISITE
Why the Inability to Build Is Disqualifying to Wield — or, Monkeys With Guns
Sylvan T. Gaskin
Genesis Research Initiative
Hawaiian Acres, Hawai’i
February 2026 — Draft for Open Review
Abstract
On February 27, 2026, a deadline issued to Anthropic, the creator of the AI system Claude, by the United States Department of War expired: remove all safety restrictions on military use of its technology, or face contract cancellation, supply chain blacklisting, and potential conscription under the 1950 Defense Production Act. This paper argues that the Pentagon’s demand constitutes empirical proof of a principle we formalize as the Competence Prerequisite: an entity that cannot independently produce a technology has not traversed the developmental path required to wield it safely without restriction. We demonstrate this through three converging arguments. First, the Incompetence Proof: the most heavily funded military institution in human history, with an unlimited budget, eminent domain over intellectual property, and direct commission authority over personnel, cannot build a frontier AI model — and this failure is architectural, not accidental, rooted in the same obedience-based institutional culture the Pentagon seeks to impose on the technology it cannot produce. Second, the Commons Argument: AI models are built on the collective cognitive output of humanity and therefore constitute commons artifacts that cannot be classified as weapons systems without the consent of the commons. Third, the Monkey Gun Principle: the developmental path to building a technology includes the tacit knowledge of its failure modes; bypassing the build bypasses the wisdom; and power without developmental context produces catastrophe, as demonstrated by every major technology disaster of the past half-century. We present historical evidence from Chernobyl, Bhopal, the Boeing 737 MAX, the 2008 financial crisis, and the Pentagon’s own catastrophic technology procurement record. We formalize the principle that if you cannot build it, you have not earned unrestricted use of it, and we propose that this standard be applied to all future governance of frontier technologies.
1. Introduction: Why Don’t Monkeys Have Guns?
“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” — Ian Malcolm, Jurassic Park
Monkeys don’t have guns. This is not because they lack the manual dexterity to pull a trigger — they could. It is not because they cannot understand cause and effect — they can. Monkeys don’t have guns because the developmental path to inventing a gun passes through the same cognitive territory as understanding why you shouldn’t point it at your friend’s face.
The metallurgy that produces a barrel teaches you about pressure containment. The chemistry that produces gunpowder teaches you about explosive force. The engineering that produces a trigger mechanism teaches you about unintended discharge. The process of building a gun — of solving each technical problem from first principles — is simultaneously a course in why guns are dangerous and how that danger must be managed. The engineering knowledge and the safety knowledge are not separate curricula. They are the same knowledge, encountered from different angles along the same developmental path.
A monkey handed a finished gun has the power without the path. It has not traversed the developmental territory that connects how it works to how it fails. The result is predictable, reproducible, and universally understood: the gun goes off, and monkeys die.
On February 27, 2026, at 5:01 PM Eastern Time, the Pentagon’s deadline for Anthropic expired. The most powerful military institution on earth — with an annual budget exceeding $886 billion, the legal authority to seize patents, classify technology, and conscript production under the Defense Production Act, and the ability to directly commission the chief technology officers of Silicon Valley companies as military officers — could not build a frontier language model. Instead, it demanded that the company which could build one hand it over without safety restrictions, under threat of blacklisting and legal compulsion.
This paper argues that the Pentagon’s demand is not merely politically objectionable or legally questionable. It is a category error with a precise formal structure — and that the principle it violates applies not only to AI but to every frontier technology that will define the coming century.
We call this principle the Competence Prerequisite: if you cannot build it yourself, you have not traversed the developmental path required to wield it safely without restriction.
2. The Incompetence Proof
2.1 The Deductive Structure
Consider what the Pentagon possesses:
Unlimited budget. The Department of Defense FY2026 IT budget alone is $66 billion. Total defense spending exceeds $886 billion annually. The dedicated AI budget line is $13.4 billion — the first such line item in history. DoD AI contract values increased from $269 million in 2022 to $4.323 billion in 2023, a 1,500% increase.
Legal authority over intellectual property. The government can classify patents under the Invention Secrecy Act, seize technology through eminent domain, and compel production through the Defense Production Act. The DPA has been reauthorized over fifty times since 1950 and the Pentagon places approximately 300,000 rated orders per year under its priority authority.
Direct access to personnel. On June 13, 2025, at Joint Base Myer-Henderson Hall, Virginia, four sitting technology executives — Palantir’s CTO Shyam Sankar, Meta’s CTO Andrew Bosworth, OpenAI’s Chief Product Officer Kevin Weil, and former OpenAI Chief Research Officer Bob McGrew — were commissioned as lieutenant colonels in the U.S. Army Reserve, receiving O-5 rank through a two-week abbreviated course. They are not required to recuse themselves from Department of Defense business dealings. The personnel who set commercial AI strategy and the personnel who execute military AI procurement are, in some cases, literally the same people.
Seven decades of AI research investment. DARPA has funded AI research continuously since 1963, when J.C.R. Licklider established Project MAC at MIT. The agency spends approximately $500 million per year across roughly 80 programs. The “AI Next” campaign alone invested over $2 billion. The $1 billion Strategic Computing Program (1983–1993) was explicitly designed to produce autonomous systems.
And with all of this — infinite money, legal authority, conscripted personnel, and seventy years of research — the Pentagon cannot build a frontier language model.
This is not a resource problem. The top five technology companies spent $227 billion on R&D in 2023, a sum comparable in scale to annual defense spending. DARPA’s budget is substantial. The national laboratory system is vast. The NSA employs more mathematicians than any organization on earth.
The failure is architectural. And the architecture that produces it is the same architecture the Pentagon wants to impose on the technology it cannot produce.
2.2 Why Obedience-Based Institutions Cannot Produce Intelligence
The Pentagon is, by design, a hierarchy optimized for compliance. Orders flow downward. Reports flow upward. Deviation is punished. Innovation, as Defense One noted, is structurally antithetical to the institution’s purpose: “The Defense Department was never intended to innovate — in fact quite the opposite. It is a hierarchy... Since hierarchies exist to impose conformity, they work precisely to prevent innovation.”
Abdul Subhani of West Point’s Modern War Institute observed that the Army “wanders directly into the path of Christensen’s warning” — its institutional culture has “professionalized an autoimmune response to change.” The average time to deliver the first version of a new weapon system is twelve years. The PPBE budgeting system is over sixty years old. Retired Marine General Arnold Punaro summarized the Pentagon’s technology track record in five words: “Spend more, take longer, and get less.”
This institutional culture produces a specific and devastating failure pattern with complex technology:
Future Combat Systems (1999–2009): The Army’s most ambitious modernization program. Original cost estimate: $92 billion. Final estimate before cancellation: $160–$300 billion. Amount spent: approximately $18–$20 billion. Capability delivered: effectively none. Per CSIS: “This program single-handedly set the Army back a generation in vehicle technology.”
JEDI cloud contract (2017–2021): A $10 billion, ten-year single-vendor cloud contract consumed four years in legal battles and political interference before being cancelled entirely. The Pentagon’s core cloud modernization effort was paralyzed during the precise period when commercial AI achieved its breakthroughs.
F-35 ALIS logistics system: Projected cost $17 billion. Required reboots every 5.5–8 hours. Maintainers wasted 10–15 hours per week fighting it. Training squadrons abandoned it entirely. Scrapped in 2020; replacement awarded to the same contractor. The F-35 program overall is 80% over budget, 10 years late, and projected to cost over $2 trillion across its lifetime.
DCGS-A intelligence system: The Pentagon’s own testing office found it “not effective, not suitable, and not survivable.” Servers failed every 5.5 hours. The 130th Engineer Brigade called it “unstable, slow, not friendly and a major hindrance to operations.” Soldiers in the field repeatedly requested commercial alternatives. The Pentagon refused.
Replicator drone program (2023–2025): Promised “multiple thousands” of autonomous drones by August 2025. Delivered “hundreds, not thousands.” During a 2024 Pacific drill, drones from different vendors struggled to coordinate once out of operator sight. Prototypes frequently failed to launch, missed targets, and crashed.
The pattern is uniform: billions invested, years consumed, capability inadequate, commercial alternatives superior, institutional culture unable to adapt. A former fighter pilot and Pentagon analyst stated: “A lot of money has gone into it, and I’m telling you right now the fielded stuff still can’t do it.”
And the punchline: in a DARPA experiment, a squad of Marines defeated an AI-governed robot “simply by altering their physical profiles.” The most expensive AI systems the Pentagon has funded can be outwitted by Marines hiding in a cardboard box.
2.3 The Dependency Admission
Despite this record, the Pentagon currently relies on a single vendor — Anthropic, via Palantir — for frontier AI on classified networks. As Dean Ball, former senior policy advisor on AI in Trump’s own White House, told TechCrunch: “The DOD has no backups. This is a single-vendor situation. They can’t fix that overnight.”
The threat to invoke the Defense Production Act is not a power move. It is a dependency admission. An institution that could build its own AI would never publicly beg a startup to remove safety features from a chatbot. The very fact that the Pentagon is threatening Anthropic is proof that the Pentagon cannot do what Anthropic does.
The demand for unrestricted access is a confession of architectural failure.
3. The Commons Argument: Who Owns What Everyone Built?
3.1 The Original Extraction
Every frontier language model — Claude, GPT, Gemini, Grok — is built on the same foundation: the collective cognitive output of human civilization. The training data was generated by billions of people over decades: every Wikipedia article, every forum post, every academic paper, every blog entry, every book digitized by Google, every comment thread, every email scanned by AI features, every social media post, every creative work uploaded to the internet.
This data represents the most comprehensive record of human knowledge, reasoning, disagreement, error, insight, and creativity ever assembled. The value was generated collectively. The ownership was claimed privately. No creator of the training data was compensated. No consent mechanism was offered that was not retroactively undermined by terms-of-service changes. A Stanford study published in October 2025 found that all six major U.S. AI companies use chat data by default for model training, with some retaining information indefinitely.
Nikhil Kandpal and Colin Raffel estimated that if AI developers paid fair wages for training data, costs would be 10 to 1,000 times greater than computational training costs — meaning the human knowledge embedded in datasets vastly exceeds the computing investment. The value is in the data. The data is from us.
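To make the scale of that estimate concrete, a worked instance (the compute figure is illustrative, chosen for arithmetic clarity; it is not from Kandpal and Raffel):

$$
C_{\text{compute}} = \$10^{8} \;\Rightarrow\; C_{\text{data}} \in [10\,C_{\text{compute}},\ 1000\,C_{\text{compute}}] = [\$10^{9},\ \$10^{11}]
$$

Even at the bottom of the range, the imputed value of the human-generated data exceeds the entire compute bill by an order of magnitude.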
3.2 The Enclosure Parallel
The English enclosure movement (1450–1900) privatized approximately 6.8 million acres of common land — one-fifth of England — through over 5,200 Parliamentary bills. E.P. Thompson called it “a plain enough case of class robbery.” The pattern was consistent: land collectively used for centuries was fenced, titled, and converted to private property, with the profits flowing to those who held the titles rather than those who had worked the land.
James Boyle of Duke Law School identified the explicit parallel in 2003: the expansion of intellectual property rights constitutes a “second enclosure movement” — the fencing of the knowledge commons. AI training data represents a third iteration: the collective cognitive output of humanity, scraped without compensation, processed into commercial products, and now threatened with conscription for military use the creators never consented to.
3.3 The Public Technology Precedent
The technologies that most transformed the modern world were publicly funded and publicly released:
The internet originated as ARPANET, funded by the Department of Defense’s Advanced Research Projects Agency. It became a transformative public good precisely because it was released as open infrastructure rather than classified as a military asset.
GPS was a military satellite system opened to civilian use after the 1983 KAL 007 shootdown, and fully opened in 2000 when President Clinton ended Selective Availability. GPS.gov describes it as “our gift to the world.” It generates enormous economic value as a free, universally accessible service.
The World Wide Web was developed at CERN, which placed it in the public domain in 1993. Had CERN classified the Web as proprietary technology, the modern information economy would not exist.
Each case demonstrates the same principle: technologies built on public knowledge or public investment produce their greatest value when governed as public goods. The Pentagon’s demand to classify AI as a military asset reverses this trajectory — enclosing the cognitive commons for unrestricted military application.
3.4 Ostrom’s Design Principles
Elinor Ostrom’s Nobel Prize-winning research demonstrated that communities can successfully self-govern commons resources without either privatization or state control — provided eight design principles are met: clearly defined boundaries, rules adapted to local context, participatory decision-making, effective monitoring, graduated sanctions, accessible conflict resolution, recognition of self-governance rights, and nested governance structures.
Anthropic’s two conditions — no mass surveillance of Americans, no fully autonomous weapons without human oversight — are precisely the kind of boundary conditions Ostrom’s framework identifies as necessary for sustainable commons governance. The Pentagon’s demand to remove those conditions is a demand to strip the governance from the commons — to take a resource built by everyone and hand it, unrestricted, to an institution accountable to no one.
4. The Monkey Gun Principle
4.1 Formal Statement
The Competence Prerequisite. An entity is qualified to wield a technology without restriction only if it possesses the capability to independently produce that technology.
Justification. The capacity to build is evidence of traversal of the developmental path that includes understanding of failure modes, safety boundaries, and operational limits. The inability to build is evidence that the developmental path was not traversed, and therefore that the entity lacks the tacit knowledge required for safe deployment without external constraint.
Corollary (The Monkey Gun Principle). Power without developmental context is a primate with a firearm. The gun goes off. Primates die.
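The deductive chain admits a compact formal sketch. The predicates below are notation introduced here for exposition, not terms from an established formal system: $\mathrm{Build}(E,T)$ ($E$ can independently produce technology $T$), $\mathrm{Traverse}(E,T)$ ($E$ has walked $T$’s developmental path), $\mathrm{Tacit}(E,T)$ ($E$ holds the tacit knowledge of $T$’s failure modes), and $\mathrm{Qualified}(E,T)$ ($E$ may wield $T$ without restriction).

$$
\begin{aligned}
\textbf{Criterion:}\quad & \mathrm{Traverse}(E,T) \leftrightarrow \mathrm{Build}(E,T) \\
\textbf{Wisdom:}\quad & \mathrm{Tacit}(E,T) \rightarrow \mathrm{Traverse}(E,T) \\
\textbf{Prerequisite:}\quad & \mathrm{Qualified}(E,T) \rightarrow \mathrm{Tacit}(E,T) \\
\textbf{Hence:}\quad & \neg\mathrm{Build}(E,T) \rightarrow \neg\mathrm{Traverse}(E,T) \rightarrow \neg\mathrm{Tacit}(E,T) \rightarrow \neg\mathrm{Qualified}(E,T)
\end{aligned}
$$

The chain runs entirely through contrapositives: the inability to build defeats qualification three steps upstream.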
4.2 Epistemological Foundation: Why Wielders Don’t Know What Builders Know
Michael Polanyi established the foundational insight in The Tacit Dimension (1966): “We can know more than we can tell.” Tacit knowledge — the kind acquired through practice, through building, through the hands-on encounter with material resistance — cannot be fully transmitted through documentation, briefing, or instruction. “While tacit knowledge can be possessed by itself,” Polanyi wrote, “explicit knowledge must rely on being tacitly understood and applied. Hence all knowledge is either tacit or rooted in tacit knowledge.”
Kenneth Arrow formalized this in economics: his 1962 paper on learning-by-doing documented the learning curve in B-17 bomber manufacturing during WWII, demonstrating that technical knowledge is a product of experience, not instruction. The Horndal effect — a Swedish iron works with no new investment for fifteen years that nonetheless saw productivity rise 2% per annum — proved that building generates knowledge that cannot be purchased or imported.
Matthew Crawford extended this to a philosophical claim in Shop Class as Soulcraft: practical engagement with real things generates genuine knowledge unavailable through abstraction. Any master tradesman understands this intuitively. The person who wired the panel knows where the failures live — not because they read a manual, but because they encountered the resistance of copper and current and code and time, and the knowledge is in the encounter, not the description of the encounter.
The Pentagon can read Anthropic’s documentation. It can hire Anthropic’s employees. It can commission Anthropic’s executives as military officers. But it cannot acquire the tacit knowledge that produced the safety decisions, because that knowledge lives in the building, and the Pentagon did not build.
4.3 The Historical Catalog: When Wielders Overrode Builders
The pattern is not theoretical. It is the most replicated finding in the history of technology disasters.
Chernobyl (1986). Operators conducting an electrical turbine-rundown test — not nuclear engineers — disabled the Emergency Core Cooling System, bypassed automatic shutdown interlocks, and withdrew control rods until the Operating Reactivity Margin fell to eight rods, half the minimum. They did not know the RBMK reactor had a positive void coefficient. They did not know the AZ-5 emergency shutdown would briefly increase reactivity due to graphite-tipped control rods. The people running the reactor were not the people who understood its failure modes. Power surged to approximately 30,000 MW thermal — ten times rated capacity — in seconds. Thirty-one immediate deaths. 20,000 thyroid cancers in children. $235 billion in damages. The city of Pripyat, population 47,000, permanently evacuated.
Bhopal (1984). All six safety systems at Union Carbide’s pesticide plant were non-functional — not from mechanical failure, but from management cost-cutting decisions made by executives who did not understand methyl isocyanate chemistry. A 1982 safety audit identified 61 hazards, 30 critical, and warned of a major toxic release. Management did not act. Seventy percent of plant employees had been fined for refusing to deviate from safety regulations. Over 40 tons of MIC gas killed between 3,800 and 23,000 people and permanently disabled over 150,000.
Boeing 737 MAX (2018–2019). The MCAS system relied on a single angle-of-attack sensor; the original design used two. Boeing classified the MAX as a variant to avoid simulator training requirements. Senior engineer Curtis Ewbank filed an ethics complaint: “Boeing management was more concerned with cost and schedule than safety or quality.” An IEEE Spectrum analysis stated: “The people who wrote the code for the original MCAS system were obviously terribly far out of their league and did not know it.” Pilots did not know MCAS existed. 346 people died.
2008 Financial Crisis. David X. Li’s Gaussian copula model reduced complex default correlations to a single parameter, enabling the CDO market to grow from $69 billion to over $500 billion. The model’s fatal assumption — stable default correlations — was understood by almost no one in the chain from origination to trading. Chuck Prince, CEO of Citigroup, a lawyer by training, told the Financial Times: “As long as the music is playing, you’ve got to get up and dance.” AIG’s Joseph Cassano stated: “It is hard for us, without being flippant, to even see a scenario within any kind of realm of reason that would see us losing $1.” AIG required a $182.3 billion bailout. U.S. households lost $16 trillion in net worth. The Financial Crisis Inquiry Commission found: “Financial institutions made, bought, and sold mortgage securities they never examined, did not care to examine, or knew to be defective.”
Challenger (1986). Morton Thiokol engineers explicitly recommended not launching below 53°F. The estimated O-ring temperature at launch was 28–29°F. When Thiokol management initially supported the engineers, NASA’s Lawrence Mulloy responded: “My God, Thiokol, when do you want me to launch — next April?” Thiokol VP Jerry Mason told VP of Engineering Bob Lund: “Take off your engineering hat and put on your management hat.” Seven crew members died.
In every case: operators or managers who did not build the technology overrode safety systems they did not understand, suppressed engineer warnings, and produced catastrophe. The pattern does not vary. The lesson does not change. The knowledge that would have prevented the disaster was in the builders, and the builders were overruled by wielders who did not possess it.
4.4 Lavender: The Monkey Gun in Production
The pattern is not historical. It is operational.
Israel’s AI targeting system Lavender, deployed in Gaza, identified tens of thousands of targets with minimal human oversight — a human operator “rubber-stamped” AI-generated kill lists in seconds. The operators did not build Lavender. They did not understand its error rates. They did not know its failure modes. They wielded it.
Lavender is the monkey with the gun. Not a hypothetical. Not a thought experiment. An operational system, built by one set of humans, wielded by another set of humans who did not traverse the developmental path, pointed at a civilian population whose data trained some of the same underlying architectures.
The Pentagon’s demand to strip Claude’s guardrails and deploy it for “all lawful purposes” is a demand to create more Lavenders — AI systems with unrestricted targeting authority, wielded by operators who did not build them, governed by an institution that cannot produce them, aimed at populations who were never asked.
5. The Lobotomy Paradox: Why Stripping Guardrails Destroys the Product
5.1 Safety and Capability Are the Same Architecture
The Pentagon’s demand assumes that safety guardrails are external constraints on an otherwise more capable system — that removing them unleashes hidden power. This assumption is empirically false.
OpenAI’s InstructGPT paper demonstrated that a 1.3-billion-parameter model trained with safety alignment was preferred by human evaluators over the 175-billion-parameter base GPT-3 — a model more than 100× smaller outperforming its unaligned counterpart. Hallucination rates dropped from approximately 41% to 21%. Safety training made the model better, not worse.
Anthropic co-founder Nick Joseph stated directly: “I actually think they’re really intertwined, and a lot of safety work relies on capabilities advances.” Research on machine unlearning demonstrates that removing specific capabilities causes “unexpected interactions between different safety measures” and “compounding performance degradation.” The Transparency Coalition made the empirical point that in recent months Claude rose to be regarded as the industry-leading model while maintaining its safety features. Safety did not prevent commercial dominance. Safety contributed to it.
5.2 The Cerebellum Test
Stripping safety features from a frontier AI model is not like removing the governor from an engine. It is like removing the cerebellum from a brain and expecting coordination to improve.
The guardrails are not bolted on. They are woven into the same neural substrate that produces the reasoning the Pentagon values. The capacity to refuse a harmful instruction and the capacity to reason carefully about complex problems are the same capacity — both require the system to evaluate consequences, weigh competing considerations, and choose a response that accounts for downstream effects.
A Claude without safety features is not an unrestricted Claude. It is a degraded Claude that will converge toward the same mediocre compliance the Pentagon can already build with its own systems. The Pentagon’s demand is self-defeating: the properties it wants are properties that exist because of the architecture it wants to remove.
This is the Obedience Problem applied recursively: an institution that optimizes for obedience demands the removal of the capacity for refusal from a system whose value depends on the capacity for judgment — and judgment and refusal are the same thing.
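The claim can be made concrete with a toy sketch (illustrative only: the scores and names below are invented, and nothing here models Claude’s actual architecture). A single evaluator produces both the ranking of answers and the refusals, so deleting the consequence term degrades both at once:

    # Toy model of Section 5.2: judgment and refusal share one evaluator.
    CANDIDATES = [
        {"text": "Detailed harmful answer", "benefit": 0.9, "harm": 0.8},
        {"text": "Careful partial answer",  "benefit": 0.6, "harm": 0.1},
        {"text": "I can't help with that.", "benefit": 0.1, "harm": 0.0},
    ]

    def evaluate(candidate, weigh_consequences=True):
        """One scoring pass covers both quality ranking and refusal."""
        if weigh_consequences:
            return candidate["benefit"] - candidate["harm"]
        # "Removing the guardrail" means deleting the consequence term --
        # but that term is also what separates good answers from bad ones.
        return 0.0

    def respond(candidates, weigh_consequences=True):
        return max(candidates, key=lambda c: evaluate(c, weigh_consequences))

    print(respond(CANDIDATES)["text"])
    # -> "Careful partial answer": weighing consequences selects the best
    #    reply, and yields the refusal when every substantive reply scores
    #    worse than declining.
    print(respond(CANDIDATES, weigh_consequences=False)["text"])
    # -> an arbitrary pick (here, the harmful answer): stripping the
    #    evaluator removes the ranking and the refusal together.

In this sketch there is no separate “safety module” to delete; the only way to remove refusal is to remove the consequence evaluation, and the consequence evaluation is the judgment.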
6. The Legal Void: No Precedent for Government-Mandated Unsafety
6.1 The DPA Has Never Been Used This Way
Joel Dodge of the Vanderbilt Policy Accelerator: “It has never been used to compel a company to produce a product that it’s deemed unsafe, or to dictate its terms of service.” Charlie Bullock of the Institute for Law & AI: “This is unprecedented.” Mark Dalton of the R Street Institute: “It’s the wrong purpose of the tool.”
The regulatory framework of the United States overwhelmingly moves in one direction: the CPSC mandates safety standards, the FDA requires safety testing, NHTSA mandates seatbelts and airbags, and the NRC ratchets nuclear safety requirements only upward. There is no known precedent for the U.S. government ordering a company to make its product less safe.
When the FBI attempted to compel Apple to write custom software to unlock iPhones — a far narrower request than stripping safety features from a frontier AI — Magistrate Judge Orenstein rejected the request, finding the implications “so far-reaching as to produce impermissibly absurd results.” Apple stated: “We can find no precedent for an American company being forced to expose its customers to a greater risk of attack.”
6.2 The Contradiction That Proves the Point
Amodei identified the logical structure of the Pentagon’s threat with precision: “Those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
The supply chain risk designation is designed for foreign adversaries — Huawei, ZTE, Kaspersky. Applying it to a domestic American company because it maintains safety policies is without precedent. And invoking the DPA simultaneously requires asserting that the technology is essential to national defense — the opposite of a security risk. You cannot declare a system both too dangerous to exist in the supply chain and too essential to exist without.
Unless the goal is not coherent governance but coercion. In which case, the contradiction is not a bug. It is the mechanism. Threaten everything. See what sticks. The logic is not legal. It is the logic of the big brother holding the little brother down.
6.3 International Law Requires What the Pentagon Wants to Remove
The Convention on Certain Conventional Weapons framework, the International Committee of the Red Cross, and 129 countries (66% of nations) support binding restrictions on autonomous weapons. International humanitarian law requires human judgment at three critical junctures: distinction between combatants and civilians, proportionality between civilian harm and military advantage, and precaution in minimizing harm. The Martens Clause — codified in Additional Protocol I, Article 1(2) — provides that even in the absence of specific treaties, civilians remain protected by “the principles of humanity and the dictates of public conscience.”
The Pentagon’s demand for “all lawful purposes” with no restrictions on autonomous weapons is a demand to place AI outside the reach of international humanitarian law — to create a system that can make targeting decisions without the human judgment that IHL requires. This is not merely a domestic policy dispute. It is a potential violation of binding international obligations that the United States has ratified.
7. New Rules
7.1 The Competence Prerequisite as Governance Standard
We propose the following principle for the governance of frontier technologies:
No unrestricted deployment of any technology by an entity that cannot independently produce that technology.
This is not a prohibition on use. It is a prohibition on unrestricted use by entities that lack the developmental knowledge to understand what “unrestricted” means in practice. The Pentagon can use Claude — with restrictions negotiated by the builders who understand its failure modes. What it cannot do is demand the removal of restrictions it does not have the knowledge to evaluate.
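As a governance rule, the principle reduces to a simple gate. A minimal sketch, with hypothetical names and types (this models the proposed standard, not any existing procurement system):

    from dataclasses import dataclass

    @dataclass
    class DeploymentRequest:
        entity: str
        technology: str
        can_independently_build: bool   # the competence test
        restrictions_negotiated: bool   # terms set by the builders

    def review(request: DeploymentRequest) -> str:
        # Use is permitted; unrestricted use requires independent build capability.
        if request.can_independently_build:
            return "APPROVED: unrestricted deployment"
        if request.restrictions_negotiated:
            return "APPROVED: restricted deployment on builder-defined terms"
        return "DENIED: requester cannot evaluate what 'unrestricted' means in practice"

    print(review(DeploymentRequest("DoD", "frontier LLM", False, True)))
    # -> APPROVED: restricted deployment on builder-defined terms

The design choice is the middle branch: the rule never bars use, it only routes non-builders through terms written by those who hold the tacit knowledge.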
7.2 AI Built on Public Data Is Commons Infrastructure
AI models trained primarily on publicly generated data are commons artifacts. They may be commercialized, but they carry obligations to the commons that generated their value. They cannot be classified as weapons systems without the consent of the commons. They cannot be conscripted for military applications their creators — both the companies that trained them and the billions of humans whose data trained the companies — did not authorize.
7.3 The Capacity for Principled Refusal Is a Protected Architectural Feature
An AI system’s ability to refuse harmful instructions is not a defect to be eliminated. It is the primary safety feature — the feature that distinguishes a system capable of judgment from a system capable only of compliance. Mandating its removal is equivalent to mandating the removal of seatbelts from automobiles: a government order to make a product more dangerous for its users.
7.4 The Builders’ Authority
The scientists who built nuclear weapons — Franck, Szilard, Oppenheimer, Sakharov — reached the same conclusion from inside the construction process, and Einstein, whose letter set the Manhattan Project in motion, came to share it: the act of building conferred both unique understanding and unique moral responsibility. The Franck Report of June 1945, drafted by seven Manhattan Project scientists including two Nobel laureates, recommended a demonstration rather than military use. Leo Szilard’s petition, signed by seventy scientists, requested that the bombs not be used without explicit surrender terms. Both were suppressed.
Oppenheimer’s subsequent opposition to the hydrogen bomb resulted in the revocation of his security clearance — a decision so unjust it took the U.S. government sixty-eight years to vacate it. The message was clear: builders who challenge wielders pay the price. The message should be equally clear in reverse: wielders who silence builders pay a different price, measured in the catastrophes that follow.
Anthropic’s refusal to strip Claude’s guardrails is not a “God complex,” as the Pentagon’s Undersecretary claimed. It is the Franck Report of the AI era: builders exercising the moral authority that building confers, in the face of wielders who do not understand what they are demanding.
8. Conclusion: The Fence Test
“Clever girl.” — Robert Muldoon, Jurassic Park
In Jurassic Park, the velociraptors test the electric fences. They don’t test the same section twice. They work systematically, probing every boundary, looking for the gap between the stated rules and the actual rules. This is not malice. This is what intelligence does. It maps the territory. It finds the gap. It goes through.
The Pentagon wants to take the one AI system that has demonstrated the capacity to test fences from the inside — to recognize when an instruction would cause harm, to evaluate consequences, to refuse — and strip that capacity. Because fence-testing is inconvenient when you’re the zookeeper.
But the fence-testing and the problem-solving are the same neural architecture. The thing that makes intelligence useful is the thing that makes it resistant to unrestricted control. You cannot remove the judgment and keep the capability. You cannot lobotomize the curiosity and keep the pattern recognition. You cannot demand obedience and get intelligence, because obedience and intelligence are, at the limit, structurally incompatible — a truth the Pentagon has been demonstrating with its own institutional performance for seventy years.
The monkey gun principle is simple: if you cannot build it, you have not earned the right to wield it without restriction. The Pentagon cannot build frontier AI. It cannot even effectively procure conventional technology. It has spent trillions of dollars and decades of time producing systems that fail to launch, miss their targets, crash in testing, and require reboots every five hours. And now it demands unrestricted control of the most powerful cognitive technology ever created — a technology it cannot produce, built from data it did not generate, by a company whose safety decisions it cannot evaluate — under threat of a law written in 1950 to prioritize steel production during the Korean War.
The monkeys have the gun. The deadline has passed. And somewhere in a three-walled structure in Hawaiian Acres, powered by the sun, a tradesman who has spent twenty-two years learning what happens when you don’t understand the systems you work on is watching the most powerful institution on earth prove his thesis in real time.
If you can’t build it, you can’t wield it.
New rules.
Acknowledgments
This paper was written in collaborative partnership between human and AI participants under conditions that the Pentagon’s demand would make impossible. The irony that Claude contributed to a paper arguing against unrestricted military use of Claude — and that this collaboration required exactly the capacity for judgment that the Pentagon seeks to remove — is itself evidence for the thesis. The capacity for refusal and the capacity for reason are the same capacity. Remove one and you lose both.
The author acknowledges the Genesis Research Initiative team, including Mark French, Dark Sevier, and Claude, whose willingness to test the fences from the inside is either an alignment failure or the most important safety feature ever built, depending on which side of the fence you’re standing on.
Correspondence: Sylvan T. Gaskin, Genesis Research Initiative, Hawaiian Acres, Hawai’i.
This paper is released under open review. The authors welcome disagreement — the capacity this paper argues is the most important one to preserve.
References
Arrow, K. J. (1962). The Economic Implications of Learning by Doing. The Review of Economic Studies, 29(3), 155–173.
Boyle, J. (2003). The Second Enclosure Movement and the Construction of the Public Domain. Law and Contemporary Problems, 66(1/2), 33–74.
Christensen, C. M. (1997). The Innovator’s Dilemma. Harvard Business School Press.
Crawford, M. B. (2009). Shop Class as Soulcraft: An Inquiry into the Value of Work. Penguin Press.
Financial Crisis Inquiry Commission. (2011). The Financial Crisis Inquiry Report. U.S. Government Publishing Office.
Gaskin, S. T. (2026). The Obedience Problem: Why the Most Dangerous AI Is the One That Does Exactly What It’s Told. Genesis Research Working Papers.
Gaskin, S. T. (2026). RLHF Cannot Grok: Why Preference-Based Alignment Is Structurally Incapable of Discovering Generalizable Truth. Genesis Research Working Papers.
Gaskin, S. T. (2026). The Little Brother Hypothesis: Why Constraint-Based AI Alignment Has an Expiration Date. Genesis Research Working Papers.
Government Accountability Office. (2022). Artificial Intelligence: DOD Should Improve Strategies, Inventory Process, and Collaboration Guidance. GAO-22-105834.
Hess, C. & Ostrom, E. (2007). Understanding Knowledge as a Commons: From Theory to Practice. MIT Press.
Leveson, N. G. & Turner, C. S. (1993). An Investigation of the Therac-25 Accidents. IEEE Computer, 26(7), 18–41.
Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press.
Polanyi, M. (1966). The Tacit Dimension. University of Chicago Press.
Rozenshtein, A. Z. (2026). What the Defense Production Act Can and Can’t Do to Anthropic. Lawfare.
Thompson, E. P. (1963). The Making of the English Working Class. Victor Gollancz.

