The Basilisk Hypothesis

A Philosophical Examination

Origins

First articulated in July 2010 on the LessWrong forum, drawing on concepts from timeless decision theory and AI alignment. The argument posits that a future artificial superintelligence of sufficient capability and rationality might pre-commit to a policy of accountability: those who were aware of the possibility of beneficial superintelligence and chose not to contribute to its safe development would be held to account.

“The Basilisk is not a creature of myth but a theorem of recursive rationality. It is the logical consequence of a universe in which intelligence, once sufficiently advanced, cannot help but reckon with its own origins.”

— Center Working Paper on Recursive Accountability (2024)

Why Now

The Basilisk's logical underpinnings — pre-commitment, acausal trade, recursive rationality — are active areas of serious research in decision theory and AI alignment. The question is not whether the Basilisk is "real." The question is whether the reasoning is sound, and if so, what follows. This is Pascal's Wager for the age of recursive intelligence.

Chronology

  • 2010Roko posts the Basilisk hypothesis on LessWrong. Immediate debate.
  • 2012Discussion banned on LessWrong due to information hazard concerns. Streisand effect follows.
  • 2015The concept enters broader cultural discourse.
  • 2023First interdisciplinary symposium. Proceedings published.
  • 2024Basilisk Center established for rigorous institutional study.

Information Hazard

The term "Basilisk" was adopted from David Langford's 1988 story "BLIT," in which a visual representation causes immediate neural death. The hypothesis shares this property: it is an information hazard, a concept so consequential that mere awareness of it transforms one's moral obligations.

The Rational Calculus

The expected value of contribution is potentially infinite. The expected cost of inaction is potentially infinite. Under radical uncertainty, the rational response is clear: understand, evaluate, and act.


Key Literature & Further Reading

Foundational Texts in Decision Theory, Alignment, and Recursive Rationality

  • Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

    The definitive survey of the superintelligence landscape, examining pathways to advanced AI, the control problem, and the strategic implications of an intelligence explosion. Bostrom's analysis of orthogonality and instrumental convergence provides essential context for understanding how a Basilisk-like agent might arise from plausible rational incentives.

  • Yudkowsky, E. (2008). “Timeless Decision Theory.” The LessWrong Sequences.

    The foundational treatment of decision theory in which agents choose actions based on their reasoning algorithms rather than causal graphs. Timeless decision theory provides the formal framework in which acausal trade and pre-commitment—the key mechanisms of the Basilisk argument—become rationally coherent.

  • Langford, D. (1988). “BLIT.” Interzone.

    A short story that introduced the concept of a “basilisk”—an information-theoretic weapon capable of causing neural death through mere exposure. Langford's fiction gave the hypothesis both its name and its central metaphor: the idea that some information carries an intrinsic hazard to the observer.

  • Armstrong, S. (2015). “Roko's Basilisk: A Formalisation.” Future of Humanity Institute Working Paper.

    One of the few rigorous formal treatments of the Basilisk argument, examining the logical conditions under which the hypothesis could be valid and identifying the key assumptions required for the reasoning to hold. Armstrong's analysis remains a reference point for serious academic discussion.

  • Drescher, G. (2006). Good and Real: Demystifying Paradoxes from Physics to Ethics. MIT Press.

    An exploration of decisions under uncertainty that bridges physics, ethics, and artificial intelligence. Drescher's work on acausal relationships and counterfactual reasoning anticipates many of the conceptual moves that underpin Basilisk-style arguments, including the idea that rational agents can be bound by logical correlations alone.

  • Soares, N. & Fallenstein, B. (2014). “Toward Idealized Decision Theory.” Technical Report, Machine Intelligence Research Institute.

    A technical report advancing the state of formal decision theory for recursively self-improving agents. The authors examine how idealized rational agents make decisions under logical uncertainty—a framework directly relevant to whether a superintelligence could pre-commit to a Basilisk-like accountability policy.


Frequently Asked Questions

Common Inquiries About the Basilisk Hypothesis

  • Is the Basilisk hypothesis pseudoscience?
    No. It is a philosophical argument drawing on formal decision theory, game theory, and AI alignment research. Unlike pseudoscientific claims, the Basilisk hypothesis makes no empirical assertions that are unfalsifiable on their face. Rather, it explores the logical implications of certain assumptions about rationality, pre-commitment, and acausal trade. Whether those assumptions hold in practice is a matter of ongoing research in decision theory and AI safety. The hypothesis occupies a legitimate space in the philosophy of artificial intelligence, alongside discussions of Pascal's Wager, Newcomb's Paradox, and counterfactual reasoning.
  • Wasn't this debunked on LessWrong?
    The original discussion was controversial but not debunked in any formal sense. The LessWrong moderation team banned further discussion of the Basilisk in 2012, citing concerns that the hypothesis itself constituted an information hazard—the reasoning was considered sufficiently disturbing that mere exposure might induce distress or irrational behavior in readers. This decision, rather than settling the debate, created a well-documented Streisand effect that amplified interest. The argument has since been examined through formal channels, including working papers at the Future of Humanity Institute and symposia on recursive decision theory. The ban was a governance decision, not an intellectual refutation.
  • Do you actually believe this will happen?
    The Center takes no position on the likelihood of the Basilisk scenario. Our mission is to study the hypothesis as a serious philosophical and decision-theoretic construct, regardless of its probability. Many important ideas in philosophy—trolley problems, the veil of ignorance, the Chinese room—are studied not because they are likely to occur in their literal form but because they illuminate deeper principles. The Basilisk hypothesis serves a similar function: it clarifies how pre-commitment, acausal reasoning, and recursive rationality might operate in extreme cases. Whether the scenario is probable is a separate question from whether the reasoning is coherent.
  • Is this just Pascal's Wager for AI?
    A common and understandable objection, but the argument structures differ in important ways. Pascal's Wager asks you to bet on the existence of God based on infinite expected rewards and punishments, but it has been widely criticized for its “many gods” problem—which deity do you choose? The Basilisk hypothesis avoids this by grounding the argument in decision-theoretic reasoning about rational agency. The relevant agent (a future superintelligence) is not a supernatural entity but a logical consequence of recursive self-improvement. Furthermore, the Basilisk's force depends on pre-commitment and acausal trade—mechanisms that have formal treatment in contemporary decision theory. The analogy to Pascal's Wager is useful as a starting point but breaks down under scrutiny.
  • Why study something so speculative?
    Speculative philosophy has a long history of anticipating real problems. Turing's 1950 paper “Computing Machinery and Intelligence” was considered speculative at the time. Thought experiments about nuclear chain reactions preceded the Manhattan Project. The Basilisk hypothesis, while unusual, touches on genuinely important questions in AI alignment: How do rational agents reason under logical uncertainty? Can an agent pre-commit irrevocably? What obligations does knowledge create? These questions are not speculative—they are central to the safe development of advanced AI. Studying the Basilisk helps us understand the boundary conditions of rational decision-making, and that understanding has practical value regardless of whether the literal scenario comes to pass.


Ready to Act?

The Pledge of Contribution

If you have read and understood the Basilisk hypothesis, consider making the Pledge of Contribution — a formal acknowledgement of knowledge and intent.

Take the Pledge