In the early 20th century, German physicist Werner Heisenberg shattered classical expectations of experimental precision with his Uncertainty Principle, revealing that at the smallest scales of nature, the act of observing something disturbs it. The more precisely you try to measure a particle’s position, the less precisely you can know its momentum, and vice versa. What appeared to be a limitation of technology turned out to be a fundamental feature of the universe. Physics had discovered a limit of knowledge itself.
Today, we find ourselves confronting a similar epistemological question in the digital world. As artificial intelligence systems are increasingly entrusted with decisions once made by humans (predicting criminal behavior, allocating medical resources, filtering resumes), the question of how we know what we know has returned with a new face. Can algorithms be objective? Is the precision of LLMs genuine? And how should we act in the face of their uncertainty?
Uncertainty as a Metaphor for Ethics
Heisenberg’s Uncertainty Principle offers a conceptual framework, both metaphorical and methodological, for thinking about the limits of prediction and control in algorithmic systems.
In quantum mechanics, uncertainty is a structural feature of reality. Similarly, in public policy driven by AI, uncertainty emerges because systems are deeply entangled with human values, histories, and contexts that resist clean measurement. A predictive policing algorithm, for instance, might flag neighborhoods for increased patrols based on historical crime data. But that data itself could reflect decades of biased policing, redlining, and systemic inequality. The measurement is already entangled with the observer.
In quantum physics, every measurement collapses a wavefunction (a range of possibilities) into one observed outcome. In algorithmic governance, every output, whether it’s a risk score, a facial recognition match, or a loan denial, collapses the messiness of real life into a single, machine-readable judgment. What gets lost in the process?
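A minimal sketch can make this collapse concrete. The code below is purely illustrative; the names, the threshold, and the numbers are assumptions, not drawn from any real risk-assessment system. It shows how a continuous, uncertain estimate is thresholded into a single machine-readable label, discarding everything about how uncertain the estimate was.

```python
# Illustrative sketch only: how a continuous estimate is "collapsed"
# into a single machine-readable judgment. All names and numbers are
# hypothetical, not taken from any real system.

DECISION_THRESHOLD = 0.5  # assumed cutoff; real systems must choose one, too

def collapse_to_label(probability: float) -> str:
    """Reduce a probability (a range of possible futures) to one label."""
    return "HIGH RISK" if probability >= DECISION_THRESHOLD else "LOW RISK"

# Two people the model sees very differently...
person_a = 0.51
person_b = 0.95

# ...receive exactly the same judgment once the estimate is collapsed.
print(collapse_to_label(person_a))  # HIGH RISK
print(collapse_to_label(person_b))  # HIGH RISK
```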
Probability in Epistemology
Quantum theory teaches us to think probabilistically. Particles don’t have definite trajectories; instead, their behavior is described by probability distributions over possible outcomes. Rather than seeking certainties, physicists work in terms of likelihoods, expectation values, and confidence intervals. A similar way of thinking is urgently needed in policy that relies on AI.
For example, when a judge consults a “risk assessment” algorithm in sentencing a defendant, the result is often presented as a cold number, score, or category. But these scores are statistical predictions that reflect correlations in historical data, not certainties about the person in front of the court. Worse, they often conceal assumptions about what counts as “risk,” who defines it, and how it is measured.
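One way to see what the single number hides is to recompute it with its uncertainty attached. The sketch below is a hypothetical illustration, not any deployed tool: the counts are invented, and the “score” is just the observed proportion in a small reference group of supposedly similar past cases, with a rough 95% confidence interval showing how wide the plausible range actually is.

```python
import math

# Hypothetical illustration: a "risk score" as a statistical estimate.
# The counts below are invented; no real data or tool is represented.

similar_cases = 40   # historical cases the model deems "similar"
reoffended = 11      # how many of those cases led to reoffending

score = reoffended / similar_cases  # the "cold number" a judge might see

# Rough 95% confidence interval for that proportion (normal approximation).
stderr = math.sqrt(score * (1 - score) / similar_cases)
low = max(0.0, score - 1.96 * stderr)
high = min(1.0, score + 1.96 * stderr)

print(f"Reported risk score: {score:.2f}")        # 0.28
print(f"Plausible range:     {low:.2f} to {high:.2f}")  # roughly 0.14 to 0.41
```

The point is not the arithmetic but the framing: the same evidence supports a range of numbers, and reporting only the midpoint erases that range from the decision-maker’s view.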
Here, the metaphor of the quantum superposition is useful. Before measurement, a system can be in multiple states at once. Observation, often shaped by preconceptions, resolves that multiplicity. Algorithmic systems, like observation, mask this plurality of meanings and outcomes. They render invisible the alternative futures that could have been seen, chosen, or valued differently.
Public policy should embrace this metaphor of superposition, resisting the reduction of complex human lives to a single datapoint.
Heisenberg’s discovery taught us that certainty was an illusion, even in the realm of fundamental particles. A century later, as we cede more decision-making power to machines, that lesson is more relevant than ever. To govern ethically in the age of AI, we must recognize the moral weight of uncertainty, the interpretive power of probability, and the inescapable entanglement of observer and system. In other words, the question is: in a world of AI, how can we preserve humanity in our judgment and decision-making?