Ghosts from the Machines
Technocracy bulletin
Three weeks ago, rumors began circulating that Israeli Prime Minister Benjamin Netanyahu had been killed in his country’s escalating war with the Islamic Republic of Iran after his early-March video address appeared to show him with six fingers on his right hand. Various “proof-of-life” videos followed, purporting to show him alive, but they gave us a further cascade of alleged anomalies: coffee foam that remains unchanged after he takes a sip; a jacket pocket that snaps back too cleanly; a wedding ring that flickers in and out of existence; an extra ear canal; a stuttering shirt sleeve. Meanwhile, claims about the provenance of footage of the Jerusalem café setting and of the cabinet meeting point to possible source material from which AI could have generated some of the videos above.
(Perhaps to his own surprise, Netanyahu’s son, Yair, provided additional fuel for these speculations when, on 8 March, he abruptly stopped posting on X for seven days: unusual behavior for a user with more than one hundred thousand posts since starting his account in June 2017, and a period of inactivity matching the Jewish mourning tradition of sitting shiva.)
While internet sleuths offer compelling observations, these may yet prove to be artifacts of compression, motion blur, camera settings, or simple misperception. Even the invocation of AI detection tools—reporting high “likelihood” scores—offers little firm ground, given their well-documented instability and susceptibility to false positives, as when one reportedly flagged the Gettysburg Address as AI-generated. Our own opinion, then, remains only a posture: agnostic, provisional, and contingent on the emergence of verifiable, high-fidelity evidence that has not yet materialized.
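To see why such “likelihood” scores offer so little, consider a deliberately crude sketch (a toy illustration of the general approach, not any real detector’s method; the reference corpus, threshold, and test passages below are all invented for demonstration): many detectors estimate how statistically predictable a passage is under a language model and flag low-surprise text as machine-generated.

```python
import math
from collections import Counter

# Toy illustration only: many "AI text" detectors score how predictable a
# passage is under a language model and flag low-surprise text as
# machine-generated. A unigram model stands in for the language model here.

REFERENCE = (
    "the quick brown fox jumps over the lazy dog and the dog sleeps "
    "while the fox runs through the field and the field is green"
).split()

counts = Counter(REFERENCE)
total = sum(counts.values())
vocab = len(counts)

def surprise(text: str) -> float:
    """Average negative log-probability per token (lower = more predictable)."""
    tokens = text.lower().split()
    # Laplace smoothing keeps unseen tokens from producing infinite surprise.
    return sum(-math.log((counts[t] + 1) / (total + vocab + 1))
               for t in tokens) / max(len(tokens), 1)

THRESHOLD = 3.3  # an arbitrary cutoff, much as real detectors' thresholds are

for passage in ("the fox runs over the field",               # conventional prose
                "zygomatic quixotry befuddles armadillos"):  # unusual prose
    score = surprise(passage)
    verdict = "flagged as AI-generated" if score < THRESHOLD else "passes as human"
    print(f"{score:.2f}  {verdict}: {passage!r}")
```

The verdict tracks predictability, not authorship, which is one way formal, highly conventional human prose (the Gettysburg Address, say) gets misflagged.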
However, the question of whether Netanyahu died represents not just a factual inquiry, but a case study in epistemic collapse. Viewers dissect frames for anomalies while counterarguments invoke compression artifacts, camera limitations, and the human tendency to over-interpret ambiguous visuals. Each attempt at proof generates a corresponding wave of skepticism, and each attempt at debunking feeds the cycle further. The result is not consensus but fragmentation, with even relatively sophisticated observers arriving at an agnostic position: that the available evidence, whether authentic or artificial, no longer carries sufficient authority to settle the question. In this telling, the most significant development is not the status of the man himself, but the apparent erosion of any shared standard by which such a status could be conclusively determined.
Even granting that agnosticism, the risks exposed by this episode extend well beyond Netanyahu and into the structural stability of the media ecosystem itself. In the near term, the proliferation of plausible synthetic media accelerates the erosion of public trust—any more of which the U.S. certainly can’t afford—particularly when authoritative confirmation is delayed, fragmented, or perceived as unreliable.
Over the longer horizon, the implications grow more severe: as we’ve been warned since 2018—and particularly during the 2020 and ’24 presidential election cycles—electoral systems have become increasingly vulnerable to deepfakes and coordinated misinformation campaigns (besides those embodied in political campaigns themselves, that is), while the unchecked expansion of AI infrastructure introduces parallel governance challenges, from environmental strain driven by data center resource consumption to the absence of clear regulatory boundaries. What emerges is not a single point of failure, but a layered vulnerability—informational, political, and material—whose effects compound over time.
Interestingly, these growing vulnerabilities of the political sphere to AI-generated content come paired with recent pushes to integrate AI more directly into governance. In January 2025, U.S. President Donald Trump launched a sweeping restructuring of the federal government through a series of executive orders aimed at reversing prior policies, freezing hiring, mandating a return to in-person work for federal employees, withdrawing from international agreements, and initiating workforce reductions under the so-called Department of Government Efficiency (DOGE), an advisory board rather than an official U.S. government department established by Congress.

Central to this effort was the accelerated adoption of AI-driven “algorithmic governance,” promising increased efficiency but also raising profound concerns: as government functions become dependent on data systems and private-sector infrastructure, power shifts toward tech firms, institutional capacity within the state erodes, and decision-making risks being automated beyond meaningful oversight. Early examples—such as algorithmic tools overriding medical judgments—suggested both practical harms and systemic vulnerabilities, while the broader trajectory points toward a deepening fusion of state and corporate power (i.e., fascism), potential displacement of large portions of the federal workforce, and even speculative futures in which digitally governed “network states” challenge traditional democracy. In this light, the transition is less a technical upgrade than a structural transformation toward technocracy—also apparent in other initiatives of the second Trump Administration, as we outlined in a bulletin last year—with long-term implications for accountability, sovereignty, and democratic governance.
Moreover, such a transition to algorithmic governance may only introduce further dimensions of dishonesty into modern political life. Terrence J. Sejnowski’s “Large Language Models and the Reverse Turing Test” (2023) argues that modern large language models (LLMs) represent a major advance in generating human-like text, but also expose a critical weakness: an inherent tendency to produce false or misleading information with confidence. Rather than possessing true understanding or grounded knowledge, LLMs operate by predicting likely word sequences from statistical patterns in their training data, which means they can fabricate facts or reasoning without detecting their own errors, generating outputs that are fluent, coherent, and persuasive even when factually incorrect—a phenomenon often described as “hallucination.”
Drawing on parallels to neuroscience, Sejnowski emphasizes that LLMs lack grounding in the real world: they do not verify claims, access truth directly, or maintain stable internal models of reality. Instead, they assemble plausible responses, which can include fabricated citations, incorrect reasoning, or invented facts—especially when prompted beyond the limits of their training. This, of course, creates practical risks in domains like medicine, law, and education, where confident but incorrect outputs can mislead users who assume reliability based on linguistic fluency. Accordingly, the danger of LLMs is not simply that they make mistakes, but that they make them in ways that are difficult to detect. Their outputs exploit human cognitive biases—particularly our tendency to equate articulate language with competence—thereby increasing the likelihood that users will trust and act on erroneous information.
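Sejnowski’s point about statistical prediction can be made concrete with a toy model (a minimal sketch at miniature scale, not any production system; the corpus below is invented): a bigram model trained on a few sentences will emit fluent continuations with no mechanism for checking whether they are true.

```python
import random
from collections import defaultdict

# Minimal sketch of next-token prediction at toy scale: a bigram model
# "trained" on three sentences. Like an LLM, it samples a statistically
# likely next word; nothing in the procedure checks whether output is true.

corpus = [
    "the treaty was signed in paris in 1919",
    "the treaty was signed in vienna in 1815",
    "the conference was held in paris in 1946",
]

transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # sample a plausible next word
    return " ".join(out)

random.seed(1)
for _ in range(5):
    print(generate("the"))
```

Some generated sentences splice a city from one training sentence onto a date from another: grammatical, plausible, and present nowhere in the corpus, the toy analogue of a hallucinated fact or fabricated citation.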
If Sejnowski’s warning concerns the epistemic layer, the reliability of what we are told, then the next question is what happens when that unreliable layer becomes embedded within systems of power and access. That, unfortunately, seems the likely result of algorithmic governance in the context of proposals to expand identity verification laws and the introduction of digital IDs: two converging trends with the potential to transform the internet into a highly controlled, identity-based system.
Last year, the U.S. Supreme Court upheld a Texas law requiring websites that host pornographic content to verify users’ ages—typically through government IDs or third-party verification—in order to block minors from access. While supporters argued that improved technology makes such checks feasible and comparable to in-person ID requirements, critics warned that the law raises serious concerns about privacy, data security, and free speech, with verification systems risking the exposure of sensitive personal information and restricting access to legally protected content. The decision set a broad precedent, potentially expanding similar laws nationwide and reshaping how identity verification is enforced across the internet. Since that ruling, seven more states have joined the eighteen with existing age verification laws, and California is scheduled to introduce its own next year.
Here, the U.S. is catching up to other Western nations. Last year, the United Kingdom began requiring all pornographic websites and apps to implement robust age verification measures under its Online Safety Act, replacing simple self-declaration with methods like facial recognition, digital IDs, or banking checks to prevent minors from accessing harmful content. Meanwhile, the European Union has begun implementing an age verification system with digital identity wallets (EUDI Wallets) to let users prove they meet age requirements—such as being over 18—through privacy-preserving, cryptographic credentials that avoid sharing full personal data. Currently being piloted across several EU countries, the system is expected to scale as part of a broader rollout of digital identity infrastructure across Europe.
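The idea behind such credentials can be sketched in a few lines of Python (a simplified illustration only, not the actual EUDI protocol, which builds on standardized formats such as SD-JWT and adds holder binding and replay protection; all names below are invented): a trusted issuer signs a bare predicate like “over 18,” and a website verifies that signature without ever learning a name or birthdate.

```python
# Simplified sketch of selective disclosure, the idea behind wallet-based
# age checks: a trusted issuer signs only the derived predicate "over_18",
# and a verifier checks that signature without seeing any personal data.
# Requires the 'cryptography' package (pip install cryptography).
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

SIG_LEN = 64  # Ed25519 signatures are always 64 bytes

# --- Issuer side (e.g., a government identity provider) ---
issuer_key = ed25519.Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

def issue_age_credential(full_record: dict) -> bytes:
    """Sign ONLY the derived predicate, not the underlying personal data."""
    claim = json.dumps({"over_18": full_record["age"] >= 18}).encode()
    return claim + issuer_key.sign(claim)

# --- Verifier side (e.g., a website gatekeeping adult content) ---
def verify_over_18(credential: bytes) -> bool:
    claim, signature = credential[:-SIG_LEN], credential[-SIG_LEN:]
    try:
        issuer_pub.verify(signature, claim)  # attested by the trusted issuer?
    except InvalidSignature:
        return False
    return json.loads(claim).get("over_18", False)

record = {"name": "Alice", "birthdate": "1990-01-01", "age": 35}
cred = issue_age_credential(record)
print(verify_over_18(cred))  # True, with no name or birthdate disclosed
```

The design choice of trusting a signature rather than a document is what makes the privacy-preserving variant possible; whether deployments actually minimize disclosure this way is a policy decision, not a technical inevitability.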
As digital IDs become increasingly mandated, businesses and governments will have strong incentives to require them for access to online and even physical spaces, creating a “licensed” and gated environment. This shift would erode privacy, enable pervasive tracking, and undermine anonymous speech, as users become permanently tied to their real-world identities. Accordingly, without strong legal and technical safeguards, this emerging infrastructure risks locking society into a system of constant surveillance and restricted access to information.
The problem, then, is not merely that algorithmic systems can generate convincing falsehoods, but that these same systems are increasingly being positioned to mediate who is allowed to speak, see, and participate at all. Of course, the irony shouldn’t be lost on us that governments would introduce them, in part, to prevent their citizens from doing precisely what the State of Israel—which already requires a digital ID for access to government services—appears to have done with its recent releases of Netanyahu’s dubious “proof-of-life” videos.
If the twentieth century confronted citizens with the problem of propaganda—falsehoods injected into an otherwise legible reality—the twenty-first increasingly confronts us with something more disorienting: a condition in which reality itself becomes procedurally unstable. Not merely distorted, but continuously reconstituted through systems that neither guarantee truth nor remain accountable to it. In such an environment, the question “what happened?” yields less to investigation than to interpretation, and interpretation itself becomes subject to manipulation, amplification, and constraint.
At the same time, as certainty dissolves, systems of control are hardening. Identification regimes expand, access narrows, and participation becomes more tightly regulated—even as the informational substrate those systems depend on grows less trustworthy. This inversion is worth dwelling on: truth becomes harder to verify even as authority demands more verification from us. The issue is no longer simply whether something is real, but who has the authority to determine that reality—and on what basis.
Accordingly, we arrive at a paradox. As synthetic media makes it more difficult to believe what we see, emerging identity infrastructures make it all but impossible to opt out of being seen. We are given less reason to trust, while becoming more exposed. The result is not clarity, but enclosure: an environment in which uncertainty about the world coexists with unprecedented certainty about the individual. On one axis, systems generate persuasive but ungrounded information; on the other, systems determine who may speak, what may be seen, and under what conditions. Together, they form a feedback loop in which the erosion of trust justifies greater control, and greater control further centralizes the production and validation of reality.
Preserving any semblance of personal liberty under these conditions will require more than technical fixes or regulatory adjustments. It will demand a renewed commitment to the conditions that made truth politically meaningful in the first place: the ability to speak without permission, to access information without credentialing, and to question authority without being absorbed into its systems. Without such commitments, we risk arriving at a future in which everything is verified, nothing is trusted, and the distinction between reality and its simulation ceases to matter—not because it has been resolved, but because it has been rendered irrelevant. In that world, it no longer matters what is true, only what we are allowed to see and believe. The most important question becomes not what happened, but who has the power to decide what counts as having happened—and, within the framework of algorithmic governance, whether that power remains in human hands at all, let alone accountable to anyone or anything besides the spirit in which the system was designed.

Your Corporatarchs image reminds me of a book: Asimov’s Foundation trilogy, specifically “The Merchant Princes.” That section depicts a thematic, non-violent power shift away from conquest by the sword, of the sort Rome practiced under Nero. Once Rome had cemented its control of modern-day Britain via conquest by elephant and the crossing of the English Channel, conquest by engineering and mandatory infrastructure maintenance followed: aqueduct and road construction projects. Rome systematically bankrupted the peoples of modern England and Wales by requiring that they build these aqueducts using Roman engineering while taking out huge loans. After they couldn’t pay up, the Roman Senate determined that a bankrupted territory and subject of Rome had no standing to argue a case for its own sovereignty, and so it was economically annexed. I’m wondering how the renaissance of AI will redistribute power and control over contested territories: access to semiconductors, precious metals, and viable trade routes for the aforementioned. How will potential media control over the informational sphere encapsulating geopolitics shift what it is reasonable to believe or not, given AI? It appears we’re seeing a piece of the misinformation market every day due to AI.