Next Wednesday, I will be giving an invited lecture at the National Cheng Kung University in Tainan, Taiwan, about all that I’ve learnt concerning the information-disturbance tradeoff in quantum theory. Keeping a unified viewpoint, I will cover many aspects of the problem: from the difference between physical and stochastic reversibility, to qualitative “no information without disturbance” statements and quantitative balance equations, up to the two-observable approach à la Heisenberg.

I recently gave a colloquium at the Department of Applied Mathematics of Hanyang University in Ansan, Korea, in which I tried to introduce the idea of incompatibility of quantum measurements to students who were not all perfectly fluent in quantum theory.

Incompatibility, in the form of uncertainty relations, is available in many flavours: statistical and dynamical, variance-based and entropy-based, state-dependent and state-independent… As I was asked to share the slides, I’m now making them publicly available (click on the cover below):

I was invited to give a seminar to the quantum information group there (thank you, Joonwoo!) and a departmental colloquium with the students of applied mathematics. I thoroughly enjoyed every single moment of my visit (even the very hot floor in my apartment…): good friends, good discussions and (evviva!) good food.

In 1905, the American economist Max Lorenz introduced a way to graphically represent the concentration of wealth distribution in a country, what is now known as the country’s Lorenz curve. Since then, Lorenz curves have found countless applications in quantitative sciences ranging from mathematical statistics to biology and finance. Whenever discrete distributions (including not only probability distributions, but also asset portfolios or biodiversity indicators) appear in the modeling of a problem, Lorenz curves and related ideas such as majorization are likely to play a crucial role.
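To make the classical idea concrete, here is a minimal sketch (my own illustration, not tied to any particular reference) of how one computes a Lorenz curve and checks majorization for discrete distributions:

```python
import numpy as np

def lorenz_curve(w):
    """Cumulative-share curve of a discrete distribution w: sort shares in
    ascending order, then accumulate the fraction of total wealth owned by
    the poorest k individuals."""
    w = np.sort(np.asarray(w, dtype=float))
    cum = np.cumsum(w) / w.sum()
    return np.concatenate(([0.0], cum))  # the curve starts at the origin

def majorizes(p, q):
    """True if p majorizes q, i.e. p is at least as concentrated as q:
    every partial sum of p's largest entries dominates that of q's."""
    p = np.sort(np.asarray(p, float))[::-1]
    q = np.sort(np.asarray(q, float))[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - 1e-12))

# A perfectly equal economy vs. a concentrated one
equal = [0.25, 0.25, 0.25, 0.25]
skewed = [0.70, 0.15, 0.10, 0.05]
print(lorenz_curve(skewed))      # → [0.   0.05 0.15 0.3  1.  ]
print(majorizes(skewed, equal))  # → True: skewed is more concentrated
```

The Lorenz curve of the equal economy is the diagonal; any departure from the diagonal signals concentration, and majorization orders distributions by exactly this criterion.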

If wealth were quantum, how would you measure wealth concentration?

Quantum theory deals with objects, quantum states, that from many viewpoints resemble discrete distributions, but with the crucial difference of being non-commutative. In a paper published a few days ago in Physical Review A, Gilad Gour and I generalize the definition of Lorenz curves to arbitrary pairs of quantum states, recovering the classical theory of majorization in the case of commuting states, and discussing applications of this new tool in the emerging fields of quantum thermodynamics and quantum resource theories.
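In the commuting (i.e., classical) case, a relative Lorenz curve of a distribution p with respect to a reference q can be built by sorting the outcomes by decreasing likelihood ratio and accumulating. The sketch below is my own illustration of this classical construction only; it is not code from the paper, whose definition covers genuinely non-commuting pairs of states:

```python
import numpy as np

def relative_lorenz_curve(p, q):
    """Classical (commuting-case) relative Lorenz curve of p w.r.t. q:
    sort outcomes by decreasing likelihood ratio p_i/q_i, then return the
    cumulative points (sum of q, sum of p).  For q uniform this reduces
    to the usual Lorenz-type curve of p."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    order = np.argsort(-(p / q))            # descending likelihood ratio
    x = np.concatenate(([0.0], np.cumsum(q[order])))
    y = np.concatenate(([0.0], np.cumsum(p[order])))
    return x, y

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
x, y = relative_lorenz_curve(p, q)
# the resulting curve is concave and lies on or above the diagonal y = x
print(np.all(y >= x - 1e-12))  # → True
```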

A tool that we introduce (and that may potentially be of general interest) is the family of divergences $H_\alpha$ (for $\alpha$ varying between $1$ and $\infty$) that we name “Hilbert α-divergences” due to their close kinship with Hilbert’s metric, and which interpolate between the trace-distance and the max-relative entropy $D_{\max}$.

F. Buscemi and G. Gour, Quantum Relative Lorenz Curves. Physical Review A, vol. 95, 012110 (2017). The paper is available in its journal version (paywall) and on the arXiv (free).

The information-theoretic formulation of Heisenberg’s Uncertainty Principle that Michael J.W. Hall (Griffith U, Brisbane), Masanao Ozawa (Nagoya U), Mark M. Wilde (Louisiana State U, Baton Rouge), and I formulated a while ago, has been experimentally tested and verified by Georg Sulyok, Stephan Sponar, Bülent Demirel, and Yuji Hasegawa (all at the Atominstitut in Vienna) using a very precise measurement on the neutrons emitted by their research nuclear reactor (a TRIGA Mark II). The results have been published in Physical Review Letters, as an Editors’ Suggestion (the paper is also freely available on the arXiv).

Heisenberg’s Uncertainty Principles

Heisenberg’s Uncertainty Principle (HUP) is often summarized as the statement that any act of measurement inevitably causes uncontrollable disturbance on the measured system. Put in a more spectacular way, HUP would dictate that we can learn about the present, but at the cost of being unable to fully predict the future. In fact, Heisenberg, in his original paper, never claimed such a generally valid, all-encompassing statement. Instead, his intention was to construct a physically plausible (for the scientific community of that time, 1927) scenario, in which the mathematical property of non-commutativity of quantum observables would have measurable consequences.

We can learn about the present, but at the cost of being unable to fully predict the future.

I think it is fair to put Heisenberg’s original work into perspective: though rigorous (at least by the standards of that time), it undoubtedly relies on over-idealized measurement models, like the famous and much-debated gamma-ray microscope thought experiment for the measurement of the position of an electron by photon scattering. This, of course, can hardly lead to any statement of general validity, and I believe that neither Heisenberg nor his contemporaries would have thought otherwise.

How to tame the general case then? Starting from the axioms of quantum theory (those about states, observables, and the ‘Born rule’) and proceeding in a purely geometric way, Robertson derived, in 1929, a relation that is usually presented as the mathematical formulation of HUP, namely,

$$\Delta_\psi A\,\Delta_\psi B \;\ge\; \frac{1}{2}\bigl|\langle\psi|[A,B]|\psi\rangle\bigr|,$$

where $\Delta_\psi A$ and $\Delta_\psi B$ are the mean-square deviations of the two observables in the state $\psi$.
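As a quick numerical sanity check (my own sketch, not part of any of the cited works), one can verify Robertson’s inequality on random qubit states, taking the Pauli observables X and Z as the incompatible pair:

```python
import numpy as np

# Pauli observables on a qubit
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def msd(A, psi):
    """Mean-square deviation of observable A in the pure state psi."""
    mean = np.vdot(psi, A @ psi).real
    mean_sq = np.vdot(psi, A @ A @ psi).real
    return np.sqrt(max(mean_sq - mean**2, 0.0))  # clip tiny negatives

rng = np.random.default_rng(0)
for _ in range(1000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)              # random pure qubit state
    lhs = msd(X, psi) * msd(Z, psi)
    comm = X @ Z - Z @ X
    rhs = 0.5 * abs(np.vdot(psi, comm @ psi))
    assert lhs >= rhs - 1e-9                 # Robertson's inequality
print("Robertson's inequality verified on 1000 random qubit states")
```

Note that for an eigenstate of Z the right-hand side vanishes, which is precisely the triviality discussed below.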

At this point, however, the orthodox textbook (like, for example, the still excellent Nielsen & Chuang) will rightly notice that Robertson’s relation has nothing to do with a noise-disturbance relation: $\Delta_\psi A$ and $\Delta_\psi B$ cannot be interpreted as measures of the ‘accuracy with which A is measured’ and the ‘disturbance caused on the value of observable B’ without soon running into some sort of nonsense. The correct interpretation is the following: suppose that we have a very large number of particles, all in the same state $\psi$, and that we measure A on half of them and B on the remaining half; we would then observe that the statistical data of the measurement outcomes obey Robertson’s inequality. Since no mention is made of the state of the particles after the measurements, it is clear that Robertson’s relation is surely not about the disturbance caused by the act of measurement, but rather about the limitations that quantum theory poses on the preparation of quantum states, which cannot be simultaneously sharp with respect to two incompatible observables.

It hence seems clear to me that we are dealing with two uncertainty principles:

a static uncertainty principle, namely, Robertson’s inequality, and

a dynamical uncertainty principle, namely, a statement that should establish a tradeoff between the accuracy with which an observable (A) is measured and the disturbance consequently caused on another non-commuting observable (B).

Should one then give up the search for a noise-disturbance relation à la Heisenberg, i.e., involving mean-square deviations and commutators? The answer is no: as Masanao Ozawa showed some time ago, with careful definitions of a ‘noise operator’ and a ‘disturbance operator,’ it is indeed possible to generalize Robertson’s relation, turning it into a tradeoff relation between accuracy (with which one observable, A, is measured) and disturbance (that said measurement introduces in the other observable, B). There has been some (hot) debate on this particular approach, but this would take us too far.

The Information-Theoretic Formulation

Another static uncertainty principle is that discovered by Hans Maassen and Jos Uffink (in 1988), generalizing a proposal first made by David Deutsch (in 1983). Their relation looks like this:

$$H(A) + H(B) \;\ge\; c(A,B), \qquad c(A,B) := -\log_2 \max_{i,j} |\langle a_i|b_j\rangle|^2,$$

where $H(A)$ and $H(B)$ denote the entropies of the statistical distributions of the outcomes of the measurements of A and B (with eigenvectors $|a_i\rangle$ and $|b_j\rangle$, respectively), and $c(A,B)$ is a number that is strictly positive whenever A and B are incompatible. Whenever this is the case, the Deutsch-Maassen-Uffink relation prevents $H(A)$ and $H(B)$ from being both null at the same time.
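Again as a sanity check of my own (not from the cited papers), one can verify the entropic bound numerically for the maximally incompatible pair Z and X on a qubit, for which the bound equals exactly 1 bit:

```python
import numpy as np

def shannon(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = np.asarray(p, float)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# Eigenbases of Z (computational) and X (Hadamard) on a qubit
a_basis = np.eye(2, dtype=complex)                                 # |0>, |1>
b_basis = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # |+>, |->

# c(A,B) = -log2 max_{i,j} |<a_i|b_j>|^2 = 1 bit for Z and X
c = -np.log2(max(abs(np.vdot(a_basis[:, i], b_basis[:, j]))**2
                 for i in range(2) for j in range(2)))

rng = np.random.default_rng(1)
for _ in range(1000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)
    HA = shannon(np.abs(a_basis.conj().T @ psi)**2)  # outcome entropy of A
    HB = shannon(np.abs(b_basis.conj().T @ psi)**2)  # outcome entropy of B
    assert HA + HB >= c - 1e-9                       # Maassen-Uffink bound
print(f"H(A) + H(B) >= {c:.3f} bit verified on 1000 random states")
```

No state makes both entropies vanish: an eigenstate of Z gives H(A) = 0 but forces H(B) = 1.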

The entropic formulation of the uncertainty principle has some features making it preferable, in some situations, to the usual formulation in terms of mean-square deviations. The two main reasons are the following:

the lower bound $c(A,B)$ does not depend on the state of the system being measured, while the lower bound in Robertson’s inequality becomes trivial whenever $\psi$ is, for example, an eigenstate of either A or B;

the entropies $H(A)$ and $H(B)$ do not depend on the numerical value of the possible outcomes (i.e., the eigenvalues of A and B) but only on their statistical distribution; on the contrary, the mean-square deviations $\Delta_\psi A$ and $\Delta_\psi B$ do depend on the numerical value of the eigenvalues of the two observables (for example, a simple relabeling of outcomes can lead to very different values for the mean-square deviations).
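The second point is easy to see in a toy computation (my own illustration): renaming one outcome leaves the entropy untouched but can change the mean-square deviation by orders of magnitude.

```python
import numpy as np

def shannon(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mean_square_deviation(values, probs):
    """Standard deviation of an observable with given eigenvalues
    and outcome probabilities."""
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    mean = (values * probs).sum()
    return float(np.sqrt(((values - mean)**2 * probs).sum()))

probs = [0.5, 0.3, 0.2]
eigenvalues = [1.0, 2.0, 3.0]
relabeled = [1.0, 2.0, 300.0]   # same statistics, outcome '3' renamed '300'

print(shannon(probs))                            # unchanged by relabeling
print(mean_square_deviation(eigenvalues, probs))
print(mean_square_deviation(relabeled, probs))   # explodes after relabeling
```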

Even though the entropic formulation of the uncertainty principle is quite different from the traditional one given in terms of mean-square deviations, it falls in the same category, in the sense that it only captures the ‘static’ part of Heisenberg’s principle. Indeed, the Deutsch-Maassen-Uffink relation refers to the outcome statistics collected from many, independent measurements of observables A and B on a very large number of particles all prepared in the same state, but the states of the particles after the measurements never enter the analysis.

Again, one may wonder whether it is possible to prove an entropic tradeoff relation that captures the dynamical uncertainty principle. Indeed it is possible to do so, and we did that in a recent collaboration. Our formula looks as follows:

$$N(\mathcal{M}, A) + D(\mathcal{M}, B) \;\ge\; c(A,B),$$

where

$N(\mathcal{M}, A)$ measures the noise with which the measuring apparatus $\mathcal{M}$ measures the observable A,

$D(\mathcal{M}, B)$ measures the disturbance that the measuring apparatus causes on the value of the observable B, and

$c(A,B)$ is the same number appearing also in the Maassen-Uffink relation (it is hence strictly positive whenever A and B are incompatible).

In information-theoretic terms (“as Shannon would say”), the above relation essentially describes the tradeoff between knowledge about A and predictability of B. It thus proves the statement that I wrote at the beginning, namely:

we can learn about the present (the value of A), but at the cost of being unable to fully predict the future (the value of B).