Incompatibility of quantum measurements lies at the core of nearly all quantum phenomena, from Heisenberg’s Uncertainty Principle, to the violation of Bell inequalities, and all the way up to quantum computational speed-ups. Historically, quantum incompatibility has been considered only in a qualitative sense. However, recently various resource-theoretic approaches have been proposed that aim to capture incompatibility in an operational and quantitative manner. Previous results in this direction have focused on particular subsets of quantum measurements, leaving large parts of the total picture uncharted.
A work that I wrote together with Eric Chitambar and Wenbin Zhou, published yesterday in Physical Review Letters, proposes the first complete solution to this problem by formulating a resource theory of measurement incompatibility that allows free convertibility among all compatible measurements. As a result, we are now able to explain quantum incompatibility in terms of quantum programmability; namely, the ability to switch on the fly between incompatible measurements is seen as a resource. From this perspective, quantum measurement incompatibility is intrinsically a dynamical phenomenon that reveals itself in time as we try to control the system.
Francesco Buscemi is Associate Professor at the Department of Mathematical Informatics of Nagoya University, Japan. His results solved some long-standing open problems in the foundations of quantum physics, using ideas from mathematical statistics and information theory. He established, in a series of single-authored papers, the theory of quantum statistical morphisms and quantum statistical comparison, generalizing to the noncommutative setting some fundamental results in mathematical statistics dating back to works of David Blackwell and Lucien Le Cam. In particular, Prof. Buscemi successfully applied his theory to construct the framework of “semiquantum nonlocal games,” which extend Bell tests and are now widely used in theory and experiments to certify, in a measurement device-independent way, the presence of non-classical correlations in space and time.
On such an occasion, it is impossible not to remember Professor Paul Busch, gentleman scientist, President of IQSA until his sudden death, of which I learned almost simultaneously with my award.
An important consequence of special relativity, in particular, of the constant and finite speed of light, is that space-like separated regions in spacetime cannot communicate. This fact is often referred to as the “no-signaling principle” in physics.
However, even when signaling is in fact possible, there still are obvious constraints on how it can occur: for example, by sending one physical bit, no more than one bit of information can be communicated; by sending two physical bits, no more than two bits of information can be communicated; and so on. Such extra constraints, which by analogy we call “no-hypersignaling,” are not dictated by special relativity, but by the physical theory describing the system being transmitted. If the physical bit is described by classical theory, then the no-hypersignaling principle is true by definition. This is not so in quantum theory, where the validity of the no-hypersignaling principle becomes a non-trivial mathematical theorem, relying on a recent result by Péter E. Frenkel and Mihály Weiner (whose proof, using the “supply-demand theorem” for bipartite graphs, is very interesting in itself).
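This is not the Frenkel-Weiner argument, but a related information-theoretic fact can at least be checked numerically: by the Holevo bound, the classical information extractable from a single transmitted qubit never exceeds one bit, whatever ensemble of qubit states is used for the encoding. Here is a minimal sketch (the ensemble is randomly generated for the example):

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits, ignoring numerically zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def random_qubit_state(rng):
    """Random pure qubit density matrix."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rng = np.random.default_rng(0)
n = 4                                  # number of codewords (arbitrary choice)
priors = rng.dirichlet(np.ones(n))     # random prior over the codewords
states = [random_qubit_state(rng) for _ in range(n)]

rho_avg = sum(p * s for p, s in zip(priors, states))
holevo = von_neumann_entropy(rho_avg) - sum(
    p * von_neumann_entropy(s) for p, s in zip(priors, states)
)
print(f"Holevo quantity: {holevo:.4f} bits (never exceeds 1 for a single qubit)")
```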
As one may suspect, the no-hypersignaling principle does not hold in general: it is possible to construct artificial worlds in which it is violated. Such worlds are close relatives of the “box world,” a toy-model theory used to describe conceptual devices called Popescu-Rohrlich boxes. Exploring such alternative box worlds, one further discovers that the no-hypersignaling principle is logically independent of both the conventional no-signaling principle and the information causality principle, however closely related these two may seem to be to no-hypersignaling.
This means that the no-hypersignaling principle needs to be either assumed from the start, or derived from presently unknown physical principles analogous to the finite and constant speed of light behind Einstein’s no-signaling principle.
Next Wednesday, I will be giving an invited lecture at the National Cheng Kung University in Tainan, Taiwan, about all that I’ve learnt concerning the information-disturbance tradeoff in quantum theory. Keeping a unified viewpoint, I will cover many aspects of the problem: from the difference between physical and stochastic reversibility, to qualitative “no information without disturbance” statements and quantitative balance equations, up to the two-observable approach à la Heisenberg.
I recently gave a colloquium at the Department of Applied Mathematics of Hanyang University in Ansan, Korea, in which I tried to introduce the idea of incompatibility of quantum measurements to students who were not all perfectly fluent in quantum theory.
Incompatibility, in the form of uncertainty relations, is available in many flavours: statistical and dynamical, variance-based and entropy-based, state-dependent and state-independent… As I was asked to share the slides, I’m now making them publicly available (click on the cover below):
In 1905, the American economist Max Lorenz introduced a way to graphically represent the concentration of wealth in a country, what is now known as the country’s Lorenz curve. Since then, Lorenz curves have found countless applications in a wide range of quantitative sciences, ranging from mathematical statistics to biology and finance. Whenever discrete distributions (including not only probability distributions, but also asset portfolios or biodiversity indicators) appear in the modeling of a problem, Lorenz curves and related ideas such as majorization are likely to play a crucial role.
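As a purely classical warm-up, here is a minimal sketch of how one computes a Lorenz curve and checks majorization between two discrete distributions. I use the majorization convention (partial sums of the distribution sorted in decreasing order), which is the mirror image of the economist’s convention; the two “wealth” distributions are made up for the example.

```python
import numpy as np

def lorenz_curve(p):
    """Lorenz curve in the majorization convention:
    sort in decreasing order and take partial sums."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    return np.concatenate(([0.0], np.cumsum(p)))

def majorizes(p, q, tol=1e-12):
    """True if p majorizes q, i.e. the Lorenz curve of p lies above that of q."""
    return bool(np.all(lorenz_curve(p) >= lorenz_curve(q) - tol))

# two made-up wealth distributions over four agents
p = [0.7, 0.2, 0.05, 0.05]   # very concentrated
q = [0.3, 0.3, 0.2, 0.2]     # more egalitarian

print("Lorenz curve of p:", lorenz_curve(p))
print("Lorenz curve of q:", lorenz_curve(q))
print("p majorizes q:", majorizes(p, q))   # True: p is 'more concentrated'
```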
If wealth were quantum, how would you measure wealth concentration?
Quantum theory deals with objects, quantum states, that from many viewpoints resemble discrete distributions, but with the crucial difference of being non-commutative. In a paper published a few days ago in Physical Review A, Gilad Gour and I generalize the definition of Lorenz curves to arbitrary pairs of quantum states, recovering the classical theory of majorization in the case of commuting states, and discussing applications of this new tool in the emerging fields of quantum thermodynamics and quantum resource theories.
A tool that we introduce (and that may potentially be of general interest) is the family of divergences $H_\alpha$ (for $\alpha$ varying between $1$ and $\infty$) that we name “Hilbert α-divergences” due to their close kinship with Hilbert’s metric, and which interpolate between the trace distance and the max-relative entropy $D_{\max}$.
F. Buscemi and G. Gour, Quantum Relative Lorenz Curves. Physical Review A, vol. 95, 012110 (2017). The paper is available in its journal version (paywall) and on the arXiv (free).
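For the two endpoints of that interpolation, here is a minimal numerical sketch: it computes the trace distance and the max-relative entropy for a made-up pair of qubit states. The intermediate Hilbert α-divergences themselves are defined in the paper and are not computed here.

```python
import numpy as np

def sqrtm_psd(m):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def trace_distance(rho, sigma):
    """(1/2) * trace norm of rho - sigma."""
    evals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(evals)))

def d_max(rho, sigma):
    """Max-relative entropy: log2 of the smallest lambda with rho <= lambda * sigma
    (sigma assumed full rank)."""
    s_inv_sqrt = np.linalg.inv(sqrtm_psd(sigma))
    evals = np.linalg.eigvalsh(s_inv_sqrt @ rho @ s_inv_sqrt)
    return float(np.log2(np.max(evals)))

# a made-up pair of qubit states
rho   = np.array([[0.8, 0.1], [0.1, 0.2]], dtype=complex)
sigma = np.array([[0.5, 0.0], [0.0, 0.5]], dtype=complex)

print("trace distance   :", trace_distance(rho, sigma))
print("max-rel. entropy :", d_max(rho, sigma), "bits")
```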
The information-theoretic formulation of Heisenberg’s Uncertainty Principle that Michael J.W. Hall (Griffith U, Brisbane), Masanao Ozawa (Nagoya U), Mark M. Wilde (Louisiana State U, Baton Rouge), and I formulated a while ago has been experimentally tested and verified by Georg Sulyok, Stephan Sponar, Bülent Demirel, and Yuji Hasegawa (all at the Atominstitut in Vienna), using very precise measurements on neutrons from their research nuclear reactor (a TRIGA Mark II). The results have been published in Physical Review Letters as an Editors’ Suggestion (the paper is also freely available on the arXiv).
Heisenberg (left), the father of the uncertainty principle, meets Shannon (right), the father of information theory, above the core of a TRIGA research reactor. The blue glow is caused by Cherenkov radiation. (Source: Wikipedia)
Heisenberg’s Uncertainty Principles
Heisenberg’s Uncertainty Principle (HUP) is often summarized as the statement that any act of measurement inevitably causes uncontrollable disturbance on the measured system. Put in a more spectacular way, HUP would dictate that we can learn about the present, but at the cost of being unable to fully predict the future. In fact, Heisenberg, in his original paper, never claimed such a generally valid, all-encompassing statement. Instead, his intention was to construct a physically plausible (for the scientific community of that time, 1927) scenario, in which the mathematical property of non-commutativity of quantum observables would have measurable consequences.
We can learn about the present, but at the cost of being unable to fully predict the future.
I think it is fair to put Heisenberg’s original work into perspective: though rigorous (at least for the standards of that time), it without doubt relies on over-idealized measurement models, like the famous and much-debated gamma-ray microscope thought experiment for the measurement of the position of an electron by photon scattering. This, of course, can hardly lead to any statement of general validity, and I believe that neither Heisenberg nor his contemporaries would have thought otherwise.
How to tame the general case then? Starting from the axioms of quantum theory (those about states, observables, and the ‘Born rule’) and proceeding in a purely geometric way, Robertson derived, in 1929, a relation that is usually presented as the mathematical formulation of HUP, namely,

$$\sigma_\psi(A)\,\sigma_\psi(B)\ \ge\ \tfrac{1}{2}\,\bigl|\langle\psi|[A,B]|\psi\rangle\bigr|,$$

where $\sigma_\psi(A)$ and $\sigma_\psi(B)$ are the mean-square deviations of the two observables in state $\psi$.
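A quick numerical check of Robertson’s relation for a qubit (a sketch with an arbitrarily chosen state and the observables A = σ_x, B = σ_y):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# an arbitrarily chosen pure state |psi>
psi = np.array([np.cos(0.3), np.exp(1j * 0.7) * np.sin(0.3)], dtype=complex)

def expval(op, psi):
    return (psi.conj() @ op @ psi).real

def stdev(op, psi):
    """Mean-square deviation of an observable in state psi."""
    return np.sqrt(expval(op @ op, psi) - expval(op, psi) ** 2)

lhs = stdev(sx, psi) * stdev(sy, psi)
comm = sx @ sy - sy @ sx
rhs = 0.5 * abs(psi.conj() @ comm @ psi)

print(f"sigma(A)*sigma(B) = {lhs:.4f} >= {rhs:.4f} = |<[A,B]>|/2")
```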
At this point, however, the orthodox textbook (like, for example, the still excellent Nielsen & Chuang) will rightly notice that Robertson’s relation has nothing to do with a noise-disturbance relation: $\sigma_\psi(A)$ and $\sigma_\psi(B)$ cannot be interpreted as measures of ‘accuracy with which A is measured’ and ‘disturbance caused on the value of observable B’ without soon running into some sort of nonsense. The correct interpretation is the following: suppose that we have a very large number of particles, all in the same state $\psi$, and that we measure A on half of them and B on the remaining half; we would then observe that the statistical data of the measurement outcomes obey Robertson’s inequality. Since no mention is made of the state of the particles after the measurements, it is clear that Robertson’s relation is not about the disturbance caused by the act of measurement, but rather about the limitations that quantum theory poses on the preparation of quantum states, which cannot be simultaneously sharp with respect to two incompatible observables.
It hence seems clear to me that we are dealing with two uncertainty principles:
a static uncertainty principle, namely, Robertson’s inequality, and
a dynamical uncertainty principle, namely, a statement that should establish a tradeoff between the accuracy with which an observable (A) is measured and the disturbance consequently caused on another non-commuting observable (B).
Should one then give up the search for a noise-disturbance relation à la Heisenberg, i.e., one involving mean-square deviations and commutators? The answer is no: as Masanao Ozawa showed some time ago, with careful definitions of a ‘noise operator’ and a ‘disturbance operator,’ it is indeed possible to generalize Robertson’s relation, turning it into a tradeoff relation between accuracy (with which one observable, A, is measured) and disturbance (that said measurement introduces in the other observable, B). There has been some (hot) debate on this particular approach, but that would take us too far.
The Information-Theoretic Formulation
Another static uncertainty principle is the one discovered by Hans Maassen and Jos Uffink (in 1988), generalizing a proposal first made by David Deutsch (in 1983). Their relation looks like this:

$$H(A) + H(B)\ \ge\ c_{AB},$$

where $H(A)$ and $H(B)$ denote the entropies of the statistical distributions of the outcomes of the measurements of A and B, and $c_{AB}$ is a number that is strictly positive whenever A and B are incompatible. Whenever this is the case, the Deutsch-Maassen-Uffink relation prevents $H(A)$ and $H(B)$ from being both null at the same time.
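A quick numerical check for a qubit (a sketch with A = σ_z, B = σ_x and an arbitrarily chosen state; for nondegenerate observables the Maassen-Uffink bound can be taken as $c_{AB} = -2\log_2\max_{i,j}|\langle a_i|b_j\rangle|$, which equals 1 bit here):

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# eigenbases of A = sigma_z and B = sigma_x
basis_A = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
basis_B = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]

# an arbitrarily chosen pure state
psi = np.array([np.cos(0.4), np.exp(1j * 1.1) * np.sin(0.4)], dtype=complex)

HA = shannon([abs(a.conj() @ psi) ** 2 for a in basis_A])
HB = shannon([abs(b.conj() @ psi) ** 2 for b in basis_B])
c_AB = -2 * np.log2(max(abs(a.conj() @ b) for a in basis_A for b in basis_B))

print(f"H(A) + H(B) = {HA + HB:.4f} >= {c_AB:.4f} = c_AB")
```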
The entropic formulation of the uncertainty principle has some features making it preferable, in some situations, to the usual formulation in terms of mean-square deviations. The two main reasons are the following:
the lower bound $c_{AB}$ does not depend on the state of the system being measured, while the lower bound in Robertson’s inequality becomes trivial whenever $\psi$ is, for example, an eigenstate of either A or B;
the entropies $H(A)$ and $H(B)$ do not depend on the numerical values of the possible outcomes (i.e., the eigenvalues of A and B) but only on their statistical distribution; on the contrary, the mean-square deviations $\sigma_\psi(A)$ and $\sigma_\psi(B)$ do depend on the numerical values of the eigenvalues of the two observables (for example, a simple relabeling of outcomes can lead to very different values for the mean-square deviations, as the short numerical illustration below shows).
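To make the second point concrete, here is a tiny illustration with made-up numbers: relabeling the two outcomes of an observable from {−1, +1} to {−1, +100} leaves the Shannon entropy of the outcome distribution untouched, while the mean-square deviation changes wildly.

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log2(p)))

def stdev(values, p):
    values, p = np.asarray(values, dtype=float), np.asarray(p, dtype=float)
    mean = np.sum(p * values)
    return float(np.sqrt(np.sum(p * (values - mean) ** 2)))

p = [0.5, 0.5]                       # outcome statistics stay the same
print(shannon(p))                    # 1.0 bit, with either labeling
print(stdev([-1, +1], p))            # 1.0
print(stdev([-1, +100], p))          # 50.5 -- same statistics, very different spread
```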
Even though the entropic formulation of the uncertainty principle is quite different from the traditional one given in terms of mean-square deviations, it falls in the same category, in the sense that it only captures the ‘static’ part of Heisenberg’s principle. Indeed, the Deutsch-Maassen-Uffink relation refers to the outcome statistics collected from many, independent measurements of observables A and B on a very large number of particles all prepared in the same state, but the states of the particles after the measurements never enter the analysis.
Again, one may wonder whether it is possible to prove an entropic tradeoff relation that captures the dynamical uncertainty principle. Indeed it is possible to do so, and we did that in a recent collaboration. Our formula looks as follows:

$$N(\mathcal{M},A) + D(\mathcal{M},B)\ \ge\ c_{AB},$$

where

$N(\mathcal{M},A)$ measures the noise with which the measuring apparatus $\mathcal{M}$ measures the observable A,

$D(\mathcal{M},B)$ measures the disturbance that the measuring apparatus causes on the value of the observable B, and

$c_{AB}$ is the same number appearing in the Maassen-Uffink relation (it is hence strictly positive whenever A and B are incompatible).
In information-theoretic terms (“as Shannon would say”), the above relation essentially describes the tradeoff between knowledge about A and predictability of B. It thus proves the statement that I wrote at the beginning, namely:
we can learn about the present (the value of A), but at the cost of being unable to fully predict the future (the value of B).
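To see the tradeoff at work in the most extreme case, here is a minimal numerical sketch (simplified with respect to the general definitions in our paper): the apparatus performs an ideal projective measurement of A = σ_z, so A is measured with zero noise; feeding it the two eigenstates of B = σ_x with equal probability, the pair (outcome, post-measurement state) turns out to be statistically identical for the two inputs, so nothing about B can be predicted afterwards and the disturbance saturates at H(B) = 1 bit = c_AB.

```python
import numpy as np

# projectors of the apparatus: ideal (Lueders) measurement of A = sigma_z
P = [np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)]

# eigenstates of B = sigma_x, fed with probability 1/2 each
plus  = np.array([1,  1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

def joint_output(psi):
    """Classical-quantum output: block-diagonal matrix over (outcome, post-state)."""
    blocks = []
    for Pk in P:
        v = Pk @ psi
        prob = float(np.vdot(v, v).real)
        post = np.outer(v, v.conj()) / prob if prob > 1e-12 else np.zeros((2, 2))
        blocks.append(prob * post)
    out = np.zeros((4, 4), dtype=complex)   # embed outcome k as the k-th diagonal block
    out[:2, :2], out[2:, 2:] = blocks[0], blocks[1]
    return out

diff = joint_output(plus) - joint_output(minus)
dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(diff)))   # trace distance
print(f"trace distance between the two joint outputs: {dist:.2e}")
# distance 0  =>  no processing of (outcome, post-state) can tell |+> from |->,
# so the remaining uncertainty about B is the full H(B) = 1 bit = c_AB,
# while the noise on A is zero (sigma_z is measured perfectly).
```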
As everyone knows, when two objects at different temperatures come into contact, heat flows from the hotter to the colder object until their temperatures equilibrate. This fact constitutes the second law of thermodynamics. The same happens for information: it can only go from the ‘informed’ party (i.e., where the information is stored) to the ‘uninformed’ one. This intuition can be formalized as a data-processing principle.
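For readers who have not met it before, here is a minimal numerical sketch of the data-processing principle in its standard, initially uncorrelated form (not taken from the paper): the quantum relative entropy between two states can only decrease under the action of a channel, here a depolarizing channel with made-up states and noise parameter.

```python
import numpy as np
from scipy.linalg import logm

def rel_entropy(rho, sigma):
    """Quantum relative entropy D(rho || sigma) in bits (sigma full rank)."""
    return float(np.trace(rho @ (logm(rho) - logm(sigma))).real / np.log(2))

def depolarize(rho, p):
    """Depolarizing channel: keep rho with prob. 1-p, replace by I/2 with prob. p."""
    return (1 - p) * rho + p * np.eye(2) / 2

rho   = np.array([[0.9, 0.2], [0.2, 0.1]], dtype=complex)
sigma = np.array([[0.4, 0.0], [0.0, 0.6]], dtype=complex)

before = rel_entropy(rho, sigma)
after  = rel_entropy(depolarize(rho, 0.3), depolarize(sigma, 0.3))
print(f"D(rho||sigma) = {before:.4f} bits  ->  {after:.4f} bits after the channel")
```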
The above arguments hide, however, an implicit assumption — that the two objects (or information carriers) never met in the past, i.e., are uncorrelated. Indeed, in the presence of initial correlations, anomalous backward flows of heat/information have been predicted and observed, in violation of the data-processing principle.
However, not all correlations enable such anomalous flows. For example, purely classical correlations do not. Hence the question naturally arises: which correlations make it possible to break the data-processing principle?
In this paper I present a general characterization of such correlations from an information-theoretic viewpoint. The main discovery is that the situation is much richer than previously thought: not only the quality but also the quantity of correlations matters — the delicate tradeoff between them being given by the condition of complete positivity, a central concept in quantum mechanics.
The hope is that the approach I propose here, unifying a number of previous works and thus simplifying the global picture, will contribute to the understanding of the deep (though, in my opinion, not as straightforward as sometimes claimed) connections between information theory, quantum theory, and thermodynamics.