Estimated reading time at 200 wpm: 21 minutes
Humanity has always lived in the shadow of its own ingenuity. Every leap in technological power has widened the gap between what we can do and what we can wisely control. Today, that ancient asymmetry has become the defining tension of our age. We are accelerating into a future shaped by artificial intelligence, synthetic biology, quantum computation, and cyber‑physical systems—tools of astonishing capability, yet governed by safety frameworks that lag centuries behind our ambitions.
This imbalance is no longer an academic concern. It is a structural vulnerability woven into the fabric of modern civilisation. As our “engines” of innovation roar ahead, our “brakes”—ethical foresight, regulatory restraint, and institutional wisdom—remain perilously underdeveloped. The result is a world where a single miscalculation, a rogue actor, or an unforeseen interaction between complex systems could trigger consequences far beyond our ability to contain.
This blog explores that widening chasm between power and control. It traces the dilemmas that shape our technological trajectory, the risks emerging from domains like biotechnology and cyber‑physical infrastructure, and the strategic frameworks—such as Differential Technology Development and Defensive Accelerationism—that may offer a path toward a safer, more resilient future. In an era where the tools of creation and destruction are becoming democratised, understanding this landscape is not optional. It is the precondition for survival.

Part 1: Foundations and Biological Risks
I. The Core Concept: Power Without Control
The history of human progress is defined by a persistent asymmetry: our ability to manipulate the physical world (power) invariably outpaces our ability to predict the consequences or regulate the usage of that power (control). As we enter an era of exponential technologies, this gap is no longer merely a source of inefficiency; it is a source of existential risk.
The Collingridge Dilemma
This tension is best formalised by the Collingridge Dilemma, a methodological paradox that plagues the regulation of technology. It posits two distinct phases in the lifecycle of any innovation:
- The Early Phase: When a technology is in its infancy, it is malleable and easy to regulate. However, at this stage, its societal impacts—and potential dangers—are impossible to foresee with accuracy.
- The Late Phase: By the time the technology’s impacts are clear (and the negative externalities are visible), it has become so entrenched in the economic and social fabric that it is nearly impossible to control or alter.
We are essentially driving a vehicle where the steering wheel is locked the moment the engine reaches full speed.
The “Gap”: Techne vs. Phronesis
The dilemma arises from a fundamental divergence in human capability. We excel at Techne—the practical application of knowledge, the building of tools, and the optimisation of systems. We struggle profoundly with Phronesis—practical wisdom, ethical deliberation, and long-term foresight.
While technical innovation operates on an exponential curve (Moore’s Law), wisdom and ethical evolution operate on a linear, often stagnant, trajectory. We are effectively handing “god-like” powers—nuclear fission, gene editing, artificial intelligence—to institutions and individuals who possess the same emotional and cognitive maturity as their ancestors from the Middle Ages.
Cognitive Blind Spots
Our inability to manage this gap is exacerbated by inherent psychological biases:
- Optimism Bias: We naturally envision the best-case scenario for our inventions. The creators of the internet envisioned a democratised library of human knowledge, largely failing to anticipate the rise of surveillance capitalism or algorithmic polarisation.
- Normalcy Bias: We assume the future will function roughly like the past, merely with faster gadgets. We fail to anticipate “Black Swan” events where a new technology fundamentally alters the structure of reality itself.
- Efficiency over Resilience: Our economic models incentivise speed, cost-reduction, and immediate power. They do not reward resilience, redundancy, or safety until after a catastrophe has occurred.
II. The Strategic Framework: Differential Technology Development (DTD)
If we accept that halting technological progress is impossible (due to economic incentives and geopolitical curiosity), the only viable strategy for survival is Differential Technology Development (DTD). This framework rejects the notion that all innovation is equal.
The Definition
DTD proposes a shift from a “volume knob” approach (simply accelerating or decelerating all progress) to a “mixing desk” approach. The core principle is simple but radical: we must retard the development of dangerous, risk-creating technologies while aggressively accelerating the development of beneficial, protective technologies.
The goal is not stagnation, but safety. We must prioritise the invention of the “shield” before the “sword”.
The Sequencing
In practical terms, this requires a deliberate sequencing of scientific investment.
- Standard Development: We build a faster engine (Power) and better brakes (Control) simultaneously, hoping the brakes keep up.
- Differential Development: We deliberately pour resources into braking technology first. We restrict funding or research into engine speed until we are certain the brakes can handle the increased velocity.
This might mean delaying the release of a more powerful AI model until interpretability tools (which allow us to see inside the “black box”) are fully mature. It means developing antiviral manufacturing capacity before simplifying the tools required to modify viruses.
The Barrier: The Prisoner’s Dilemma
While DTD is logically sound, it faces a formidable implementation barrier: the Prisoner’s Dilemma.
In a competitive geopolitical landscape, nations are disincentivised to slow down. If one country (e.g., the US or UK) decides to retard AI development to focus on safety, it fears a rival (e.g., China or Russia) will accelerate and gain dominance. This dynamic forces all actors to race for “Power” while neglecting “Control”, creating a “Race to the Bottom” where safety is sacrificed for speed.
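The race dynamic above can be made concrete with a toy payoff matrix. The numbers below are purely illustrative, not drawn from any study: they simply encode the incentives described, where “accelerate” beats “slow” whatever the rival does, so both sides race and land on the worst collective outcome.

```python
# Toy payoff matrix for the safety race (illustrative numbers only).
# Each entry: (payoff to A, payoff to B). Higher = better for that actor.
PAYOFFS = {
    ("slow", "slow"): (3, 3),             # both prioritise safety: best shared outcome
    ("slow", "accelerate"): (0, 4),       # A falls behind, B dominates
    ("accelerate", "slow"): (4, 0),       # A dominates, B falls behind
    ("accelerate", "accelerate"): (1, 1), # race to the bottom: safety sacrificed
}

def best_response(options, their_choice, actor):
    """Pick the option that maximises this actor's payoff, holding the rival fixed."""
    idx = 0 if actor == "A" else 1
    def payoff(mine):
        pair = (mine, their_choice) if actor == "A" else (their_choice, mine)
        return PAYOFFS[pair][idx]
    return max(options, key=payoff)

options = ["slow", "accelerate"]
# Whatever the rival picks, "accelerate" is the dominant strategy for each actor...
assert all(best_response(options, rival, "A") == "accelerate" for rival in options)
assert all(best_response(options, rival, "B") == "accelerate" for rival in options)
# ...so both race, even though (slow, slow) would leave each side better off.
assert PAYOFFS[("accelerate", "accelerate")] < PAYOFFS[("slow", "slow")]
```

This is why DTD advocates argue for binding agreements rather than voluntary restraint: only a mechanism that changes the payoffs themselves can move the equilibrium.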
III. Domain Focus: Biotechnology
Biotechnology represents perhaps the most acute example of the “Power without Control” dynamic. Unlike nuclear physics, which requires massive state infrastructure, biology is becoming increasingly accessible, digitised, and difficult to police.
Dual-Use Risks
The primary challenge in biotech is the “Dual-Use” nature of the research. The precise knowledge required to design a vaccine or cure a rare disease is often identical to the knowledge required to design a pathogen.
This was starkly illustrated by the “Inverted Drug Discovery” experiment. An AI model, originally designed to filter out toxicity in medicines, was simply retasked to maximise toxicity. In less than six hours, it reinvented known chemical warfare agents (including the nerve agent VX) and proposed thousands of novel, potentially deadlier compounds. The “engine” of cure was effortlessly reversed into an engine of chaos.
De-Skilling
The convergence of AI and biology is leading to a rapid “de-skilling” of the field. Historically, creating a bioweapon required a team of PhDs and years of tacit knowledge. Today, Large Language Models (LLMs) can troubleshoot complex experiments for amateurs, and “Cloud Labs” allow users to upload digital code that is executed by robots in a remote facility.
This decouples the actor from the physical expertise. A bad actor no longer needs to be a brilliant scientist; they simply need to be a prompt engineer with access to a credit card.
The Multiplier: Quantum Computing
Looking 10 to 20 years ahead, Quantum Computing threatens to remove the final friction point: trial and error. Currently, biological simulation is difficult because molecules are complex quantum systems. Classical computers struggle to simulate exactly how a new virus would interact with human cells.
A mature Quantum Computer could theoretically simulate these interactions in silico (on the computer). This would allow a weapon designer to “test” millions of viral variations and optimise for lethality or transmissibility without ever setting foot in a wet lab, drastically accelerating the development cycle and bypassing physical surveillance.
Part 2: Cyber-Physical Systems, Rogue Actors, and Future Risks
IV. Domain Focus: Cyber-Physical Systems (CPS) & IoT
If the internet was originally designed to move information (bits), Cyber-Physical Systems (CPS) are designed to move reality (atoms). This domain represents the point where code gains “muscles”, controlling critical infrastructure such as power grids, water treatment plants, autonomous vehicles, and medical devices.
Kinetic Cyber Attacks
The danger in this domain is not data theft, but physical destruction—often termed “Kinetic Cyber Attacks”. When the “control” mechanisms of these powerful systems are compromised, the consequences are immediate and tangible.
- Stuxnet (The Precedent): The Stuxnet malware, which targeted Iranian nuclear centrifuges, demonstrated that digital code could physically destroy machinery. It drove centrifuges beyond their safe rotational limits while feeding false “normal” data to monitoring screens—a perfect example of removing control while maintaining the illusion of order.
- Triton (The Safety Override): Perhaps more alarming was the Triton malware (2017), which specifically targeted the Safety Instrumented Systems (SIS) of a petrochemical plant. It attempted to disable the automated “brakes” designed to prevent explosions, signalling a shift from disrupting operations to maximising physical harm.
- The Ukraine Power Hacks: The remote seizure of Ukraine’s power distribution centres in 2015 and 2016 left hundreds of thousands in the dark. Hackers not only cut the power but also disabled backup generators and flooded call centres, orchestrating a systemic collapse.
The Internet of Things (IoT) Vulnerability
The challenge is compounded by the Internet of Things (IoT), which has exploded the “attack surface” available to bad actors. We have flooded homes and cities with billions of connected devices—from smart thermostats to industrial sensors—often prioritising connectivity over security.
- The Search Engine for Devices: Tools like Shodan allow attackers to scan the globe not for websites, but for specific unsecured hardware. A hacker can locate a water pump or a webcam with default passwords in seconds.
- The Mirai Botnet: The lack of control in IoT was made visible by the Mirai botnet, which enslaved hundreds of thousands of insecure cameras and DVRs to launch massive distributed denial-of-service attacks. In the 2016 attack on the DNS provider Dyn, large swathes of the internet were taken down not by supercomputers, but by inexpensive, unsecured consumer electronics.
Regulatory Response
To regain control, governments are shifting from voluntary guidelines to mandatory standards.
- The “Nutritional Label” for Tech: Initiatives like the US Cyber Trust Mark aim to make security visible to consumers, much like energy efficiency ratings.
- The “Law” of the Land: The EU’s Cyber Resilience Act forces manufacturers to prove security by design and maintain it throughout the product’s lifecycle, attempting to close the gap between the release of “power” (the device) and the assurance of “control” (security updates).
V. The Problem of “Dark Corners” (Rogue Actors)
Law and regulation function on the premise of deterrence: the threat of punishment. However, deterrence fails when the actor has no “flesh to bite”—no legal standing, no physical headquarters, and no public reputation. Rogue factions, non-state actors, and decentralised criminal groups operate in the “dark corners” of the geopolitical map.
The “No Flesh” Problem
Traditional statecraft relies on sanctions or diplomatic pressure. These tools are useless against a decentralised network of anonymous actors using encrypted channels. If we cannot punish the person, we must instead control the physics they rely on. The strategy shifts from Deterrence (punishing the actor) to Denial (breaking the tool).
The “Choke Point” Strategy
Even the most digital rogue actor requires physical resources. Identifying and controlling these bottlenecks is the “Choke Point” strategy.
- Compute Governance: Training a frontier AI model requires thousands of specialised GPUs and massive energy consumption. By strictly monitoring the supply chain of high-end chips, the international community can prevent “dark” data centres from developing dangerous capabilities.
- The DNA Supply Chain: Rogue biologists cannot code a virus into existence; they need the physical genetic material. “Know Your Customer” regulations for DNA synthesis aim to ensure that synthesis machines verify the identity of the user and screen the requested sequence for toxicity before printing.
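The screening step in that last bullet can be sketched in a few lines. This is a deliberately simplified toy, not a real biosecurity tool: the “hazard” fragments and window size below are invented for illustration, whereas real providers compare orders against curated databases of sequences of concern using far more sophisticated matching.

```python
# Toy sketch of sequence-of-concern screening at a DNA synthesis provider.
# The flagged fragments and k-mer size are invented for illustration only;
# real screening uses curated databases of regulated-agent sequences.
HAZARD_FRAGMENTS = {"ATGCGTACCGTT", "GGCATTAGCCGA"}  # hypothetical flagged 12-mers
K = 12

def screen_order(sequence: str) -> bool:
    """Return True if any window of the ordered sequence matches a flagged fragment."""
    sequence = sequence.upper()
    return any(sequence[i:i + K] in HAZARD_FRAGMENTS
               for i in range(len(sequence) - K + 1))

benign = "TTTTACGATCGATCGGGCCTTAA"
suspect = "CCGA" + "ATGCGTACCGTT" + "TTAGG"  # embeds a flagged fragment

assert not screen_order(benign)   # ships normally
assert screen_order(suspect)      # order held for human review
```

The “Know Your Customer” half of the regulation is the complement to this check: even a clean-looking order is tied to a verified identity, so neither the sequence nor the customer passes unexamined.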
Zero Trust and AI Immunosystems
Since we cannot keep bad actors out of the network entirely, we must assume they are already inside.
- Zero Trust Architecture: This security model abolishes the idea of a “trusted internal network”. Every request—whether from the CEO or a smart bulb—is verified before access is granted, limiting the “blast radius” of any breach.
- The AI Immunosystem: We are moving toward automated defences that monitor behaviour rather than identity. An AI system managing a power grid does not need to know who is sending a command to open all breakers; it only needs to recognise that the command itself is catastrophic and block it instantly.
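The two ideas above can be combined in a minimal policy check. This is a sketch under invented assumptions: the roles, actions, and the “catastrophic command” list below are hypothetical, and a production Zero Trust deployment would involve mutual authentication, short-lived credentials, and continuous re-evaluation rather than a single function.

```python
# Minimal Zero Trust sketch: every request is evaluated against policy,
# regardless of who (or what) sends it. Roles, actions, and the
# "catastrophic command" rule below are invented for illustration.
POLICY = {
    "ceo":        {"read_report", "open_breaker"},
    "smart_bulb": {"report_telemetry"},
}
CATASTROPHIC = {"open_all_breakers"}  # blocked on behaviour, not identity

def authorise(identity: str, verified: bool, action: str) -> bool:
    if not verified:            # no implicit trust for "internal" traffic
        return False
    if action in CATASTROPHIC:  # the immune-system rule: the command itself
        return False            # is dangerous, whoever sends it
    return action in POLICY.get(identity, set())

assert authorise("ceo", True, "open_breaker")             # verified + permitted
assert not authorise("ceo", False, "open_breaker")        # unverified => denied
assert not authorise("smart_bulb", True, "open_breaker")  # least privilege
assert not authorise("ceo", True, "open_all_breakers")    # behavioural block
```

Note the last assertion: even a fully verified, maximally privileged identity cannot issue the catastrophic command. That is the behavioural layer the “AI Immunosystem” generalises.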
VI. Future Forecasting (10–50 Years)
As we look further ahead, the “Power without Control” dynamic becomes existential. The risk is not just that technology improves, but that the capacity for mass destruction becomes accessible to individuals.
The Democratisation of Destruction
Technological progress generally lowers barriers to entry. In 2050, the tools to create a biological agent or a cyber-weapon may be as accessible as photo-editing software is today.
- The “Script Kiddie” of Biology: We face the prospect of “Bio-Script Kiddies”—amateurs who download and print dangerous biological agents using AI assistance, without fully understanding the consequences.
- The Black Ball Hypothesis: This scenario, proposed by Nick Bostrom, suggests that if a technology exists that is easy to produce (like a “Black Ball” pulled from an urn) but impossible to defend against, civilisation is statistically doomed because the “barrier to use” drops to the lowest moral threshold in society.
The Evolution of Motives
We must also anticipate the evolution of “magnified human evil”.
- Omnicidal Nihilists: Unlike traditional terrorists who have political demands, future actors may seek destruction for its own sake.
- Eco-Extremists: Well-resourced factions may view humanity as a planetary virus and use advanced biotech to engineer a “correction” or sterilisation event.
- Accidental Sorcerers: Corporate entities, in a blind race for profit, may deploy autonomous systems that “optimise” safety buffers out of existence, causing systemic collapse without malice.
The Vulnerable World Hypothesis
This trajectory leads to a terrifying fork in the road known as the Vulnerable World Hypothesis. If the offence (rogue actors) becomes too powerful, the only way for the defence (civilisation) to survive may be radical, invasive surveillance.
We risk being forced to choose between Extinction (allowing freedom but risking the “one miss”) and Dystopia (surviving under a “Panopticon” that monitors every keystroke and DNA print job). The goal of current safety research is to find a “Third Way”—building safety into the physics of the technology itself so that totalitarian control is not the only alternative to catastrophe.
Part 3: The Investment Gap and Defensive Solutions
VII. The Investment Gap (The Pacing Problem)
We are currently witnessing a dangerous disparity where the “engine” of technological capability is being funded with trillions of dollars, while the “brakes” of safety and control receive a negligible fraction of that investment. This is the Pacing Problem monetised: we are building the future faster than we can afford to secure it.
The Imbalance: Tracking the Money
To understand the trajectory of risk, one need only follow the capital.
- The Engine (Capabilities): Private investment in AI capabilities exceeds $100 billion annually, with the market projected to reach trillions within the decade. The economic incentive is entirely focused on making models faster, smarter, and more autonomous.
- The Brakes (Safety): Estimates suggest that investment in purely “defensive” or “safety” research sits at roughly 1–2% of development budgets. We are effectively betting the stability of the global economy on a 1% safety margin.
- The Cost of Chaos: This is economically paradoxical, given that the projected cost of failure—such as cybercrime damages—is estimated to hit $10.5 trillion annually by 2025. We are underinvesting in the insurance policy for civilisation.
The “Safety Tax”
A major barrier to closing this gap is the concept of the “Safety Tax” (or Alignment Tax). In a free market, safety is inefficient.
- Friction: To make an AI safe, developers must slow down training to evaluate outputs. To make biology safe, labs must pay for expensive DNA screening. To make a power grid safe, operators may need to use manual, analogue switches that are slower than digital ones.
- Market Failure: Without universal regulation, a company that voluntarily pays this “tax” will be outcompeted by a reckless rival who skips safety to move faster. Therefore, the “Safety Tax” must be made mandatory to level the playing field.
Lighting Up the “Dark Corners” (Infrastructure of Control)
If we were to invest the necessary “tremendous effort” to match the scale of the threat, we would be building massive Passive Detection Infrastructure—a global immune system that detects threats without needing to identify the actor.
- The Nucleic Acid Observatory (NAO): A proposed global network that continuously sequences wastewater at major airports and cities. This would allow us to detect a novel pathogen days after its release, regardless of whether it originated in a state lab or a rogue garage, stripping the attacker of the element of surprise.
- “Geiger Counters” for Compute: Just as we track radioactive material, we must track the hardware of intelligence. This involves embedding un-hackable monitoring chips into high-end GPUs to detect if a “dark” data centre suddenly begins consuming energy in a pattern consistent with training a dangerous weapon.
- The Digital “NORAD”: A publicly funded Cyber Defence Corps tasked not with retaliation, but with finding and patching zero-day vulnerabilities in critical open-source software before they can be weaponised.
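The “Geiger counter” idea above amounts to anomaly detection on resource consumption. Here is a toy version under invented assumptions: the readings, window, and threshold are all hypothetical, and a real monitoring regime would fuse many signals (power, cooling, chip telemetry) rather than a single rolling mean.

```python
# Toy "Geiger counter" for compute: flag a facility whose power draw jumps
# well above its rolling baseline, as a crude proxy for the sudden start of
# a large training run. All readings and thresholds here are invented.
def flag_anomalies(readings, window=4, factor=2.0):
    """Return indices where a reading exceeds `factor` x the trailing mean."""
    flags = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] > factor * baseline:
            flags.append(i)
    return flags

idle_then_training = [10, 11, 10, 12, 11, 10, 55, 60, 58]  # MW, hypothetical
# The jump is flagged until the rolling baseline absorbs the new level.
assert flag_anomalies(idle_then_training) == [6, 7]
```

The limitation is visible in the last line: once the elevated load becomes the new baseline, the detector goes quiet, which is why proposals in this space pair consumption monitoring with hardware-level attestation rather than relying on either alone.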
The Missing Institution
For nuclear energy, the world recognised the need for a watchdog with teeth and created the IAEA (International Atomic Energy Agency). Currently, there is no “IAEA for AI” and no “IAEA for Biotech”. There is no global body with the authority to inspect a data centre in a non-cooperative country or verify the safety of a biological experiment. Until such institutions exist, the “dark corners” remain unpoliced.
VIII. The Proposed Solution: Defensive Accelerationism (d/acc)
In response to these existential risks, a new strategic philosophy has emerged: Defensive Accelerationism (d/acc).
It rejects both the naivety of “Effective Accelerationism” (e/acc), which ignores risk, and the impracticality of “Deceleration” (Decel), which attempts to stop progress. Instead, d/acc argues that since we cannot stop the development of swords, we must aggressively accelerate the development of shields.
The Core Philosophy: “Shields First”
The goal is to shift the world from an Offence-Dominant equilibrium (where it is easier to destroy than to protect) to a Defence-Dominant one. We must pave the technological track with safety foam so that even if a “Black Ball” is pulled, its impact is dampened.
The d/acc “Tech Stack”
To neutralise the “Dark Corners”, d/acc proposes three technological revolutions:
- The Bio-Shield (Making Pandemics Impossible)
We must alter the physical environment to be hostile to pathogens.
- Far-UVC Lighting: Installing 222nm light fixtures in public spaces (airports, schools, transport) which are safe for humans but lethal to airborne viruses. If a rogue actor releases a plague, it physically cannot spread, rendering the weapon useless without needing to arrest the attacker.
- Rapid Response Manufacturing: Building “warm base” manufacturing capacity that can produce and distribute a new vaccine within weeks, not months.
- The Truth Shield (Cryptography & ZK-Proofs)
Rogue factions will exploit deepfakes and misinformation to cause panic.
- Zero-Knowledge Proofs (ZK-Proofs): Mathematical protocols that allow verification without revealing sensitive data.
- Digital Signatures: A shift from “trust me” to “verify me”. Cameras and microphones could cryptographically sign their content at the hardware level. A deepfake video lacking this signature would be instantly flagged by browsers as “unverified”, neutralising its psychological impact.
- The Cyber-Shield (Formal Verification)
Currently, critical code is written by humans and patched later.
- Formal Verification: Using AI to mathematically prove that a piece of software (like a nuclear plant controller) is free of vulnerabilities. This moves cybersecurity from a game of “whack-a-mole” to a state of mathematical certainty, making kinetic cyber attacks exponentially harder to execute.
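The “verify me” model of the Truth Shield can be sketched with standard-library primitives. One loud caveat: the HMAC below is a symmetric stand-in used only so the sketch stays self-contained; a real provenance scheme (C2PA-style content credentials, for instance) uses asymmetric signatures, so verifiers never hold the signing key. The device key and frame bytes are invented.

```python
import hashlib
import hmac

# Sketch of hardware-signed media. HMAC is a symmetric stand-in for the
# asymmetric signatures a real provenance scheme would use; the device key
# and "frame" bytes below are invented for illustration.
DEVICE_KEY = b"secret-key-burned-into-camera"  # hypothetical per-device key

def sign_frame(frame: bytes) -> bytes:
    """What the camera does at capture time."""
    return hmac.new(DEVICE_KEY, frame, hashlib.sha256).digest()

def verify_frame(frame: bytes, signature: bytes) -> bool:
    """What a browser would do before labelling content 'verified'."""
    expected = hmac.new(DEVICE_KEY, frame, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

original = b"...raw sensor data..."
sig = sign_frame(original)

assert verify_frame(original, sig)                    # untouched footage verifies
assert not verify_frame(original + b"deepfake", sig)  # any edit breaks the signature
```

The point of the sketch is the failure mode: a deepfake does not need to be detected as fake, it simply fails to carry a valid signature, so the browser defaults it to “unverified”.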
Pan-Probabilism: Resilience Over Prediction
Finally, d/acc embraces Pan-Probabilism. We cannot predict exactly which “magnified human evil” will emerge in 2050. Therefore, the best investment is Resilience. A society with clean air systems (Far-UVC), verified information channels (ZK-Proofs), and un-hackable infrastructure (Formal Verification) is robust against any shock, whether it be a bio-weapon, a cyber-nuke, or a solar flare.
Conclusion
The journey from the “Collingridge Dilemma” to “Defensive Accelerationism” reveals a stark truth: we cannot un-learn the knowledge of how to destroy. The genie is out of the bottle. However, by prioritising Phronesis (wisdom) over Techne (power), and by aggressively funding the technologies of control, we can build a world that is not just faster, but safer. The race is no longer against each other; it is against our own capacity for chaos.
Part 4: The Synthesis – A Race Against Entropy
The Great Asymmetry
We stand at a singular moment in the human story, suspended between the grandeur of our invention and the fragility of our wisdom. The central thesis of our time is not political, nor purely economic; it is the widening chasm between Power and Control. We have perfected the art of the “engine”—building systems of infinite leverage, from nuclear fission to synthetic biology and artificial intelligence. Yet, we have neglected the “steering”—the philosophical, ethical, and regulatory frameworks required to navigate the velocity we have created.
This asymmetry, often formalised as the Collingridge Dilemma, has left us in a precarious state of acceleration. We are driving into the dark, pressing the accelerator because our economic systems demand growth, yet fearing the turn ahead because our safety systems are archaic. We possess the Techne of gods—the ability to rewrite the code of life and split the atom—but we retain the Phronesis of our medieval ancestors, struggling with tribalism, short-termism, and a profound inability to conceptualise exponential risk.
The End of Neutrality
For centuries, we comforted ourselves with the idea that technology is neutral—that a hammer can build a house or crack a skull, depending solely on the user. That comfort is now obsolete. As we approach the “Black Ball” scenarios—technologies that are cheap to produce, impossible to contain, and catastrophic in impact—neutrality evaporates.
In a world of “democratised destruction”, where a single rogue actor in a “dark corner” can access the lethality of a nation-state via a laptop or a DNA printer, the tool itself becomes a vector of existential risk. The “Unilateralist’s Curse” dictates that the person with the lowest moral threshold sets the danger level for the entire species. Therefore, we can no longer afford to simply punish the actor after the fact; we must fundamentally alter the architecture of the tool.
The Architecture of Survival
This realisation demands a radical shift in our civilisational strategy, moving from Deterrence to Denial. We cannot police every basement, garage, or server farm in a decentralised world. We cannot arrest a ghost. Instead, we must embrace Differential Technology Development (DTD) and Defensive Accelerationism (d/acc).
We must accept the “Safety Tax” as the cost of survival. This means deliberately slowing the “engine” of pure capability while aggressively accelerating the “brakes” of control. It means acknowledging that the free market, left to its own devices, will always choose speed over safety, and thus, regulation must intervene to alter the physics of the marketplace.
The solution lies in building a world that is Defence-Dominant. We must construct a global immune system—not of soldiers, but of sensors and shields.
- The Biological Shield: Transforming our physical environment with Far-UVC and passive monitoring so that pandemics, whether natural or engineered, wither before they can spread.
- The Digital Shield: Rewriting the foundations of the internet with Formal Verification and Cryptography, creating a reality where lies cannot propagate and critical systems cannot be breached.
- The Institutional Shield: Establishing the “IAEA for AI”, a global watchdog with the teeth to illuminate the dark corners of compute and research.
The Final Choice
Ultimately, the trajectory of “Power without Control” leads us to a fork in the road.
One path leads to The Panopticon. If we fail to secure the technology itself, the only remaining way to prevent the “one miss” is to secure the people—by monitoring every keystroke, every genetic sequence, and every private thought. This is the “Vulnerable World” scenario, where we trade our freedom for survival, living under the permanent gaze of a totalitarian AI designed to suppress human volatility.
The other path—the path of Defensive Accelerationism—offers a way out. By embedding safety into the very laws of our digital and physical reality, we can strip the “Black Ball” of its power. We can create a world where the rogue actor is not arrested, but rendered impotent; where the virus does not infect; where the cyber-weapon bounces off the hull of a verified grid.
This is the race against entropy. It is not a race to build the fastest machine, but to build the most resilient one. We cannot un-learn the fire; we can only build a hearth that can hold it. The gap between Power and Control is the defining challenge of our species, and closing it is the only way we earn the right to the future.


