How do you update your quantum beliefs?

Bayes’ rule is often introduced in textbooks through examples involving urns filled with colored balls. It is striking, however, that the same formula works just as well in situations totally unrelated to urns or counting. In fact, Bayes’ rule also applies to how we update beliefs, learn from data, and draw inferences about the world. Its success far beyond probability puzzles suggests that Bayes’ rule captures something fundamental about rational reasoning itself.
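
For reference, the rule itself is simple to state: for a hypothesis $H$ and observed evidence $E$,

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)},
\]

that is, the updated (posterior) belief in $H$ is the prior belief reweighted by how well $H$ accounts for the evidence.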

Indeed, there are several ways to justify Bayes’ rule as the only possible rule of consistent updating. De Finetti, Cox, Jeffreys, Jeffrey, and many others argued that, if an agent deviates from Bayes’ rule, they open themselves to “attacks” that would make them “lose” (money, time, resources, etc.) with probability one in the long run. However, all such approaches rely on axioms, and while some axioms may seem natural and acceptable to one researcher, they may not seem as convincing to another.

Interestingly, however, there is another way to justify Bayes’ rule (similar in spirit to Jaynes’ maximum entropy principle): the principle of minimum change. When new data arrive, we should revise our prior beliefs only as much as needed to remain consistent with the new evidence. This is a conservative stance toward knowledge: it avoids bias by preferring the smallest possible adjustment compatible with the facts.
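
One standard way to make this precise in the classical setting (a textbook fact, not specific to our paper) is to measure the “size” of an update by the relative entropy between the new and old distributions. Among all distributions $q$ consistent with the evidence $E$, i.e. with $q(E)=1$, the one closest to the prior $p$,

\[
q^\star \;=\; \arg\min_{q:\, q(E)=1} D(q \,\|\, p),
\]

is exactly the conditional distribution $p(\cdot \mid E)$. In this sense, Bayesian conditioning is the minimum-change update.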

In a work recently published in Physical Review Letters, together with Ge Bai and Valerio Scarani, we asked how this principle might extend to the quantum world. Quantum systems do not possess definite properties before measurement, and probability distributions are replaced by density matrices that describe our partial knowledge of outcomes. Updating such knowledge is therefore not straightforward. What does it mean to change a quantum state “as little as possible” while incorporating new information?

To address this, we reformulated the minimum change principle directly at the level of quantum processes rather than individual states. We used quantum fidelity to measure how similar two processes are and searched for the update that maximizes this similarity. The result turned out to coincide, in many important cases, with a well-known mathematical transformation in quantum information theory called the Petz recovery map.
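
For readers who want to see the map explicitly: in one common convention, given a reference (prior) state $\sigma$ and a channel $\mathcal{E}$, the Petz recovery map acts on a state $Y$ as

\[
\hat{\mathcal{E}}_{\sigma}(Y) \;=\; \sqrt{\sigma}\;\mathcal{E}^{\dagger}\!\Big(\mathcal{E}(\sigma)^{-1/2}\, Y\, \mathcal{E}(\sigma)^{-1/2}\Big)\,\sqrt{\sigma},
\]

where $\mathcal{E}^{\dagger}$ is the adjoint (Heisenberg-picture) map. As a minimal numerical sketch, the map can be evaluated as follows; the depolarizing channel and the uniform prior used below are purely illustrative choices, not the examples analysed in the paper.

```python
import numpy as np

def channel(kraus, rho):
    # Apply a CPTP map, given by its Kraus operators, to a density matrix.
    return sum(K @ rho @ K.conj().T for K in kraus)

def adjoint_channel(kraus, X):
    # Apply the adjoint (Heisenberg-picture) map.
    return sum(K.conj().T @ X @ K for K in kraus)

def psd_power(A, p, tol=1e-12):
    # Matrix power of a positive semidefinite matrix (pseudo-inverse on the kernel).
    w, V = np.linalg.eigh(A)
    wp = np.where(w > tol, w, 1.0) ** p * (w > tol)
    return (V * wp) @ V.conj().T

def petz_recovery(kraus, sigma, Y):
    # Petz map with reference prior sigma:
    #   sigma^{1/2} E^dag( E(sigma)^{-1/2} Y E(sigma)^{-1/2} ) sigma^{1/2}
    Es = channel(kraus, sigma)
    mid = psd_power(Es, -0.5) @ Y @ psd_power(Es, -0.5)
    s = psd_power(sigma, 0.5)
    return s @ adjoint_channel(kraus, mid) @ s

# Illustrative example: a qubit depolarizing channel with a maximally mixed prior.
p = 0.3
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]
sigma = I2 / 2                                            # uniform prior
rho = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)   # some input state
print(petz_recovery(kraus, sigma, channel(kraus, rho)))   # retrodicted state
```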

This finding provides a new interpretation of the Petz map: it is not merely a technical tool, but rather the natural quantum analogue of Bayes’ rule. Of all the possible updates, it is the one that changes our quantum description in the smallest way consistent with the new information. This explains why, despite often being described as merely “pretty good,” the Petz map keeps reappearing in many different contexts. It has been independently rediscovered in quantum error correction, statistical mechanics, and even quantum gravity because it expresses the same logic that makes Bayes’ rule universal. It is not an approximation, but rather the quantum form of rational inference itself.

The quantum version of Bayes’ rule does more than provide a new mathematical identity. It offers a systematic way to reason about quantum systems, to retrodict past states from observed data, and to quantify how much information about the past is lost over time. This connects directly to our previous studies on observational entropy and retrodictability, where the second law of thermodynamics emerges as a statement about the progressive loss of our ability to reconstruct the past.

Seen from this perspective, learning, inference, and even entropy growth are aspects of a single story about how information evolves. The quantum Bayes’ rule sits at the centre of that story, providing a bridge between classical reasoning and the probabilistic structure of quantum theory. A rule that is often illustrated with urns and balls finds a new form in quantum theory, where neither urns nor balls exist. This perhaps suggests that rational updating is not tied to any particular physical model (classical, quantum, etc.) but expresses a deeper logic of information itself.

Maxwell’s demon, quantum theory, and the second law: who’s really in charge?

It is often said that the paradox of “Maxwell’s demon” is resolved by including the thermodynamic costs of measurement and memory erasure. The common claim is that these costs offset the demon’s apparent violation of the second law of thermodynamics, thereby restoring its validity. The truth, however, is quite the opposite: the fact that the costs of measurement and erasure offset the demon’s violation is a consequence of assuming that the second law is valid in the first place. In other words, it is the assumption of the validity of the second law that forces this equilibrium, not the other way around!
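To see what “offset” means concretely, recall the standard bookkeeping, in the form usually attributed to Sagawa and Ueda (quoted here only to fix ideas, with all the caveats discussed below): the work extractable from a feedback protocol that acquires mutual information $I$ about the system is bounded by

\[
W_{\text{ext}} \;\le\; -\Delta F + k_B T\, I ,
\]

while the measurement-plus-erasure cost of running the memory obeys $W_{\text{meas}} + W_{\text{eras}} \ge k_B T\, I$. The two information terms cancel in the total balance, and it is precisely this cancellation that the folklore presents as the exorcism of the demon; the point here is that both bounds presuppose the second law rather than prove it.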

This realization, which echoes the thesis of Earman and Norton, came while studying some not-so-recent-anymore but still quite influential papers on quantum feedback protocols in quantum thermodynamics. While the narrative there adhered to the usual folklore on the subject (i.e., that quantum feedback protocols obey the second law whenever the thermodynamic costs of measurement and erasure are properly accounted for), what we found instead was quite different: quantum theory is in fact completely independent of the second law of thermodynamics! And it couldn’t be otherwise, simply because quantum theory has no built-in thermodynamics. There is no “quantum bullet” to exorcise Maxwell’s demon. In fact, in our work we explicitly constructed a model of a measurement and feedback process that violates the bounds required by the second law, even after including all costs (i.e., measurement and erasure costs) in the thermodynamic balance.

However, we also found that although quantum theory can violate the second law, it does not need to: any quantum process can be realized in a way that does not violate the second law of thermodynamics, simply by adding more systems (bath, battery, and so on) until the thermodynamic balance is restored.

There are two main takeaways from this story. First, even in quantum theory, it is the second law that guarantees the balance between gains and costs in feedback protocols, not the other way around. Second, although quantum theory and the second law of thermodynamics are logically independent, they can peacefully coexist. The second law of thermodynamics does not impose any hard constraints on what can be achieved in quantum theory: there is always a way to ensure compliance with the second law.

In conclusion, by no means can we claim that quantum theory is “demon-proof” by design. However, we now have a much clearer understanding of how quantum feedback protocols work and what they can and cannot do. The paper appeared today in npj Quantum Information, but I like the look of the arXiv version better.

Quantum theory can exorcise Laplace’s demon. But Maxwell’s demon is still lurking…

Generalizing Observational Entropy for Complex Systems

Work in collaboration with Ge Bai and Valerio Scarani (NUS, Singapore), Dom Šafránek (IBS, Korea) and Joe Schindler (UAB, Spain), published today in Quantum.

In his 1932 book, von Neumann not only introduced the now familiar von Neumann entropy, but also discussed another entropic quantity that he called “macroscopic”. He argued that this macroscopic entropy, rather than the von Neumann entropy, is the key measure for understanding thermodynamic systems. Here we revisit and extend a concept derived from von Neumann’s macroscopic entropy, observational entropy (OE) — an information-theoretic quantity that measures both the intrinsic uncertainty of a system and the additional uncertainty introduced by the measurement we use to observe it.
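
In its standard form, for a measurement described by projectors $\Pi_i$, with outcome probabilities $p_i = \mathrm{Tr}(\Pi_i \rho)$ and “Boltzmann volumes” $V_i = \mathrm{Tr}(\Pi_i)$, observational entropy reads

\[
S_O(\rho) \;=\; -\sum_i p_i \ln \frac{p_i}{V_i} \;=\; -\sum_i p_i \ln p_i \;+\; \sum_i p_i \ln V_i ,
\]

a Shannon-like term for the measurement outcomes plus the residual uncertainty within each macrostate.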

Typically, the definition of OE assumes a “uniform prior,” i.e., it starts from an assumption of maximum uncertainty about the state of the system. However, this assumption is not always tenable, especially for more complex systems, such as those subject to energy constraints or those that are infinite-dimensional, where other priors, such as the Gibbs distribution, are preferable both physically and mathematically.

Measurement is our window to the microscopic world, but it’s like a stained glass window: what’s on the other side looks distorted and coarse-grained. That’s where the second law comes in.

To fill this gap and extend OE to arbitrary priors, we first show how OE can be interpreted in two ways: as a measure of how much a measurement scrambles the true state of a system (statistical deficiency), and as the difficulty of inferring the original state from the measurement results (irretrodictability). These two aspects provide complementary insights into how much we lose or gain in our knowledge of the original state of a system when we make observations on it.
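
Assuming the standard definition above, a short rearrangement makes the implicit uniform prior visible. Writing $q_i = \mathrm{Tr}(\Pi_i \,\mathbb{1}/d) = V_i/d$ for the outcome statistics of the maximally mixed state of a $d$-dimensional system, one finds

\[
S_O(\rho) \;=\; \ln d \;-\; D\big(p \,\|\, q\big),
\]

so that, up to the constant $\ln d$, observational entropy measures how poorly the measurement statistics distinguish the actual state from a uniform prior. Replacing the maximally mixed state with a general prior is exactly the kind of generalization pursued here.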

This conceptual insight leads us to introduce three generalized versions of OE: two that capture either statistical deficiency or irretrodictability, but are inherently incompatible; and a third, based on the Belavkin-Staszewski relative entropy, which combines both perspectives and provides a unified view of commuting and non-commuting priors alike. We expect that our results will pave the way for a consistent treatment of the second law of thermodynamics and fluctuation relations in fully quantum scenarios. From the abstract:

Observational entropy captures both the intrinsic uncertainty of a thermodynamic state and the lack of knowledge due to coarse-graining. We demonstrate two interpretations of observational entropy, one as the statistical deficiency resulting from a measurement, the other as the difficulty of inferring the input state from the measurement statistics by quantum Bayesian retrodiction. These interpretations show that the observational entropy implicitly includes a uniform reference prior. Since the uniform prior cannot be used when the system is infinite-dimensional or otherwise energy-constrained, we propose generalizations by replacing the uniform prior with arbitrary quantum states that may not even commute with the state of the system. We propose three candidates for this generalization, discuss their properties, and show that one of them gives a unified expression that relates both interpretations.