Conservation of Quantum Information

In summary: conservation of quantum information is not derived from Noether's theorem. The conserved quantum quantity is the von Neumann entropy, whose classical analog, the Gibbs entropy, is conserved as a consequence of the Liouville theorem. The symmetry associated with an arbitrary Hamiltonian phase space is the symplectic symmetry, which suggests the conjecture that conservation of entropy is related to this symmetry. However, entropy is not an observable in the same sense as other observables, and its behavior under time evolution is different.
  • #1
nateHI
TL;DR Summary
An effort to understand conservation of quantum information by understanding the associated (Noether) symmetry.
I can't seem to wrap my head around the notion of conservation of quantum information. One thing that might help is if someone could tell me what the associated symmetry is. For example, phase symmetry leads to conservation of electric charge according to Noether's theorem, a fact that helped me build some intuition. I would like to do the same for conservation of quantum information.
 
  • #2
I think it could be somehow related to the symplectic symmetry.

Let me explain. The Noether theorem is really a theorem for classical physics, not quantum physics. The conservation of charge, energy, momentum, etc. is derived from a symmetry of the classical Lagrangian, and then one hopes that the corresponding quantum operator will be conserved too (which sometimes isn't the case, due to anomalies). So to understand conservation of quantum information, one has to look for the corresponding conserved classical quantity. The quantum quantity conserved here is the von Neumann entropy, and its classical analog is the Gibbs entropy. The Gibbs entropy is given by the formula
$$S = -\int dp\, dq\, f(p,q) \ln f(p,q)$$
where the integral is taken over all of phase space and ##f(p,q)## is a probability density in a classical statistical ensemble. The conservation of this entropy is a consequence of the Liouville theorem, which in turn is a consequence of the classical Hamiltonian equations of motion. The symmetry associated with an arbitrary Hamiltonian is the symmetry of an arbitrary phase space, which is the symplectic symmetry. This leads me to the conjecture that conservation of entropy (either Gibbs or von Neumann) could somehow be related to the symplectic symmetry. I don't know how to make it precise (maybe someone else does?), but in the quantum case perhaps it could be done somehow with the aid of Weyl quantization (quantization in phase space).
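A minimal numerical sketch of the Liouville side, assuming a harmonic-oscillator Hamiltonian and a Gaussian ensemble chosen purely for illustration: the flow is a rotation in ##(q,p)## (a symplectic map with unit Jacobian), and the Gibbs entropy of the ensemble, which for a Gaussian is fixed by the determinant of its covariance, stays put:

```python
import numpy as np

# Sketch: harmonic-oscillator flow (a rotation in phase space) applied to
# a Gaussian ensemble. For a 2D Gaussian with covariance C, the Gibbs
# entropy is S = 1 + ln(2*pi) + 0.5*ln(det C), so it suffices to track C.
rng = np.random.default_rng(0)
pts = rng.multivariate_normal([0, 0], [[2.0, 0.3], [0.3, 0.5]], size=200_000)

def gibbs_entropy(samples):
    C = np.cov(samples.T)                 # empirical covariance
    return 1 + np.log(2 * np.pi) + 0.5 * np.log(np.linalg.det(C))

t = 0.7                                    # arbitrary evolution time
flow = np.array([[np.cos(t), np.sin(t)],   # symplectic: det = 1
                 [-np.sin(t), np.cos(t)]])
print(gibbs_entropy(pts))                  # before the flow
print(gibbs_entropy(pts @ flow.T))         # after: same up to sampling noise
```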
 
  • #3
But entropy is not conserved - it tends to grow.

If we reverse time, then entropy tends to decrease.

I do not think we can derive the behavior of entropy from a Noether-type symmetry argument. Why would it behave differently when we reverse time?
 
  • #4
Heikki Tuuri said:
But entropy is not conserved - it tends to grow.
Coarse-grained entropy grows. Fine-grained entropy doesn't.
 
  • #5
nateHI said:
For example, phase symmetry leads to conservation of electric charge according to Noether's theorem, a fact that helped me build some intuition. I would like to do the same for conservation of quantum information.
"Quantum information" is an ill-defined term, a very loose notion.

Maybe you mean that von Neumann entropy is conserved under unitary evolution of the state. But this has nothing to do with Noether's theorem for generators of finite-dimensional symmetry groups. The symmetry is the infinite-dimensional unitary invariance of the defining expression.

Demystifier said:
This leads me to the conjecture that conservation of entropy (either Gibbs or von Neumann) could somehow be related to the symplectic symmetry.
No.

For any function ##f##, ##{\rm Tr}\, f(\rho)## is unitarily invariant. Von Neumann entropy has this form, hence is invariant too. But ##{\rm Tr}\, \rho^2## is also unitarily invariant, and many other choices of ##f## are possible...
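A quick numerical illustration of this invariance (the state and the generator below are random placeholders, not anything canonical):

```python
import numpy as np
from scipy.linalg import expm

# Sketch: Tr f(rho) depends only on the eigenvalues of rho, which a
# unitary conjugation rho -> U rho U^dagger leaves untouched. Check it
# for the von Neumann entropy and for the purity Tr rho^2.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho = rho / np.trace(rho).real            # random full-rank mixed state

G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
G = (G + G.conj().T) / 2                  # random Hermitian generator
U = expm(-1j * G)                         # a unitary "time evolution"
rho_t = U @ rho @ U.conj().T

def vn_entropy(r):
    p = np.linalg.eigvalsh(r)
    p = p[p > 1e-12]                      # convention: 0 ln 0 = 0
    return float(-np.sum(p * np.log(p)))

print(vn_entropy(rho), vn_entropy(rho_t))                      # equal
print(np.trace(rho @ rho).real, np.trace(rho_t @ rho_t).real)  # equal
```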
 
  • #6
Also note that the entropy of a system is not an observable. In the standard formulation, Noether's theorem deals with symmetries and their associated conserved currents, which are built from observables.
 
  • #7
A. Neumaier said:
"Quantum information" is an ill-defined term, a very loose notion.

Maybe you mean that von Neumann entropy is conserved under unitary evolution of the state. But this has nothing to do with Noether's theorem for generators of finite-dimensional symmetry groups. The symmetry is the infinite-dimensional unitary invariance of the defining expression.
So "quantum information" is a bit of a pop science buzz word it sounds like. It's unfortunate that I've been trying to puzzle it out then.

However, some good came from this discussion, and I thank you all for your time. Specifically, the mention of Weyl quantization (by others) and symmetry groups reminded me a lot of topological quantum field theory, which links representations of quantum groups to the topology of 3-manifolds. Anyway, I'll stop writing before I embarrass myself and maybe just attempt to puzzle this out some more.
 
  • #8
Prathyush said:
Also note that entropy of a system is not an observable.
Why is it not an observable? Entropy is the average value of the hermitian operator ##A=-\ln\rho##, so in this sense it is an observable. In what sense is it not an observable?
 
  • #9
Demystifier said:
Why is it not an observable? Entropy is the average value of the hermitian operator ##A=-\ln\rho##, so in this sense it is an observable. In what sense is it not an observable?
##A## is a good observable if and only if ##\rho## is positive definite (i.e., not always, for example not for a pure state). Moreover, compared to other observables, it transforms differently under time evolution.
 
  • #10
A. Neumaier said:
##A## is a good observable if and only if ##\rho## is positive definite (i.e., not always, for example not for a pure state).
I don't understand, are you saying that ##\rho## is not positive definite for a pure state?
 
  • #11
https://en.wikipedia.org/wiki/Von_Neumann_entropy

What would it mean for a density matrix to be an observable? A density matrix contains classical probabilities.

If we have a 0.3 classical probability that a specific coin on a table is heads and 0.7 for tails, how would we measure that?

Or how would we measure that the von Neumann entropy of that single coin is, say 0.1?
 
  • #12
Heikki Tuuri said:
If we have a 0.3 classical probability that a specific coin on a table is heads and 0.7 for tails, how would we measure that?
If we have not one coin but, say, 1,000,000 coins, then we will have approximately 300,000 heads and 700,000 tails. That's measurable.
 
  • #13
Hmm... if I measure the spin of an electron and get "up", then I know that the spin is up.

But if I measure 1,000,000 coins on a table and find 300,123 tails, then I only know that the classical probability of tails is probably close to 0.3.

Clearly, these things are not "observable" in the same sense.
 
  • #14
Demystifier said:
I don't understand, are you saying that ##\rho## is not positive definite for a pure state?
Yes! For a pure state, it is only positive semidefinite. ##A## is undefined in this case since the logarithm has a singularity at zero.
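Concretely, in a two-level toy example (the state is arbitrary):

```python
import numpy as np

# Sketch: a pure state |psi><psi| has eigenvalues (1, 0), so it is only
# positive semidefinite and -ln(rho) is undefined (ln 0 diverges). The
# entropy itself is still 0 under the convention 0*ln(0) = 0.
psi = np.array([1.0, 0.0])
rho = np.outer(psi, psi)                     # pure-state density matrix
p = np.linalg.eigvalsh(rho)
print(p)                                     # [0. 1.]: semidefinite only
print(-np.sum(p[p > 0] * np.log(p[p > 0])))  # entropy 0.0
```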
 
  • #15
Heikki Tuuri said:
if I measure 1,000,000 coins on a table and find 300,123 tails, then I know that probably the classical probability is close to 0.3.
If you measure the length of a table and get 79 cm, then you know that probably the length is close to 79 cm.
(You could have made a systematic error, hence there is no certainty.)
 
  • #16
If I use an automatic machine to make a new table, then I can measure that single table and find that its length is 79.0 cm ± 0.5 cm.

But I cannot measure the probability distribution of the tables made by the machine without a large sample of tables.

Alternatively, I might measure the machine and calculate from the measurements what is the probability distribution of the tables it will output.

The length is an observable of a single table, and the distribution is an observable of the preparing apparatus which produces tables.
 
  • #17
Demystifier said:
Why is it not an observable? Entropy is the average value of the hermitian operator ##A=-\ln\rho##, so in this sense it is an observable. In what sense is it not an observable?

Observables are Hermitian operators on the Hilbert space.

For example, ##\hat A = \sum_i a_i | a_i \rangle \langle a_i |##.

It would not make sense for ##\hat A## to depend on the state of the system.

Sure, it is possible to construct an observable ##\hat A = -\ln \hat \rho## given knowledge of the density matrix; however, it would be a different observable if ##\hat \rho## were different.
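A toy illustration of that state dependence (the two density matrices are arbitrary examples):

```python
import numpy as np
from scipy.linalg import logm

# Sketch: A = -ln(rho) is Hermitian, but it changes when the state
# changes, unlike a fixed observable such as a spin component.
rho1 = np.diag([0.7, 0.3])
rho2 = np.diag([0.5, 0.5])
A1 = -logm(rho1)
A2 = -logm(rho2)
print(np.allclose(A1, A2))       # False: the operator moved with rho
print(np.trace(rho1 @ A1).real)  # S(rho1): entropy as <A> in state rho1
```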
 
  • #18
Prathyush said:
Observables are Hermitian operators on the Hilbert space.

For example, ##\hat A = \sum_i a_i | a_i \rangle \langle a_i |##.

It would not make sense for ##\hat A## to depend on the state of the system.

Sure, it is possible to construct an observable ##\hat A = -\ln \hat \rho## given knowledge of the density matrix; however, it would be a different observable if ##\hat \rho## were different.
OK, that argument makes sense. But then again, consider thermal entropy. In this case
$$\rho=\frac{e^{-\beta H}}{{\rm Tr}\,e^{-\beta H}}$$
where ##H## is the Hamiltonian, which is a perfectly legitimate observable. From that point of view, thermal entropy looks like an observable.
 
  • #19
Demystifier said:
OK, that argument makes sense. But then again, consider thermal entropy. In this case
$$\rho=\frac{e^{-\beta H}}{{\rm Tr}\,e^{-\beta H}}$$
where ##H## is the Hamiltonian, which is a perfectly legitimate observable. From that point of view, thermal entropy looks like an observable.

No, entropy is not an observable. It is certainly possible to do experiments where you can measure S if you have a large collection of identically prepared systems.

But it is also possible to determine the matrix elements of ##\hat \rho##. One might be tempted to say the matrix elements are also observables, but that would be incorrect.

For a thermal system, the fact that you can measure S by measuring E is just a reflection of ##F = E - TS##. Still, it does not motivate defining entropy as an observable, for the reasons I mentioned in #17.
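A sketch of that thermal bookkeeping, with an arbitrary two-level Hamiltonian and inverse temperature (units with ##k_B = 1##, so ##T = 1/\beta##):

```python
import numpy as np
from scipy.linalg import expm

# Sketch: for rho = exp(-beta H)/Z one has ln(rho) = -beta*H - ln(Z),
# so S = beta*<H> + ln(Z) = beta*(E - F) with F = -ln(Z)/beta,
# i.e. exactly F = E - TS.
beta = 1.3
H = np.diag([0.0, 1.0])                  # two-level system (arbitrary)
w = expm(-beta * H)
Z = np.trace(w)                          # partition function
rho = w / Z

p = np.linalg.eigvalsh(rho)
S = -np.sum(p * np.log(p))               # von Neumann entropy
E = np.trace(rho @ H)                    # mean energy <H>
F = -np.log(Z) / beta                    # free energy
print(S, beta * (E - F))                 # equal
```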
 
  • #20
Prathyush said:
No, entropy is not an observable. It is certainly possible to do experiments where you can measure S if you have a large collection of identically prepared systems.

But it is also possible to determine the matrix elements of ##\hat \rho##. One might be tempted to say the matrix elements are also observables, but that would be incorrect.

For a thermal system, the fact that you can measure S by measuring E is just a reflection of ##F = E - TS##. Still, it does not motivate defining entropy as an observable, for the reasons I mentioned in #17.
OK, now I agree that entropy is not an observable, even though it can be measured in a large ensemble of identically prepared systems. But is there a general name for all such things that can be measured? Is entropy a POVM, or do we need a more general concept to embrace all quantities that can be measured?
 
  • #21
Demystifier said:
OK, now I agree that entropy is not an observable, even though it can be measured in a large ensemble of identically prepared systems. But is there a general name for all such things that can be measured? Is entropy a POVM, or do we need a more general concept to embrace all quantities that can be measured?
There are lots of things that are observable in the experimental sense though they are not (or are only loosely related to) "observables" in the sense of Born's rule or its generalization to POVMs. For example, thermodynamic observables such as temperature or chemical potential, lifetimes in nuclear physics, cross sections in scattering experiments, ... There is always some connection through theory, but of quite different kinds.
 
  • #22
Is Prathyush referring to the well-known fact of probability theory that if we do not know anything about possible probabilities BEFORE an experiment, then we cannot calculate a new probability estimate with Bayesian reasoning AFTER the experiment?

An example: we pick a ball at random from a large sack of balls, and 1,000 times we get a black ball. What is the probability that black balls are less than 50% of all balls in the sack?

If we do not know anything about the probabilities before the test, we cannot calculate any estimate for the proportion of black balls.

Similarly, if we do not know anything about how electron spins were prepared, and measure the first 1,000 spins up, we cannot say anything about the spin density matrix.

We must know something about the preparing apparatus. Then we can refine our estimate of the density matrix, based on individual prepared samples that we measure. In this sense, the density matrix really is an observable of the preparing apparatus.
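For what it's worth, here is how the ball example looks with an explicit (and deliberately arbitrary) uniform prior; without some prior, no posterior number can be computed at all:

```python
# Sketch: Beta-Binomial update for the sack-of-balls example. With a
# uniform Beta(1, 1) prior on the black fraction p, observing 1000 black
# draws out of 1000 gives the posterior Beta(1001, 1).
a, b = 1 + 1000, 1 + 0        # posterior parameters after the data
print(a / (a + b))            # posterior mean ~0.999
# For Beta(a, 1) the cdf is x**a, so P(p < 0.5) = 0.5**1001:
print(0.5 ** a)               # ~5e-302: "less than 50% black" is ruled out
```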
 
  • #23
Heikki Tuuri said:
if we do not know anything about how electron spins were prepared, and measure the first 1,000 spins up, we cannot say anything about the spin density matrix.
This is incorrect under any sensible notion of knowledge. Indeed, we can say that the state approximately equals the spin up state. Note that whatever we do in physics or elsewhere, we always generalize from finitely many observations to approximations of the terms figuring in the theory.
Heikki Tuuri said:
We must know something about the preparing apparatus. Then we can refine our estimate of the density matrix, based on individual prepared samples that we measure.
This wouldn't help your earlier claim, since the preparing apparatus and the means of constructing it are also known only through finitely many observations.
Heikki Tuuri said:
In this sense, the density matrix really is an observable of the preparing apparatus.
This is indeed the view taken by the thermal interpretation. More precisely, it is a property of the beam produced by the preparing apparatus.
 
  • #24
Heikki Tuuri said:
We must know something about the preparing apparatus. Then we can refine our estimate of the density matrix, based on individual prepared samples that we measure. In this sense, the density matrix really is an observable of the preparing apparatus.
Chapter 9 of Peres's Quantum Theory: Concepts and Methods might be useful to read, especially 9-7, where he echoes what you say:
Peres said:
The notions of truth and falsehood acquire new meanings in the logic of quantum phenomena. It is in principle impossible to establish by a single test the veracity of a statement about a quantum preparation. You can increase the confidence level by testing more than one neutron, but this, in turn, depends on your willingness to rely on the uniformity of the neutron preparations. This issue itself is amenable to a test, but only if other suitable assumptions are made. In general, the residual entropy (i.e., the uncertainty) left after a quantum test depends on the amount and type of information that was available before the test.
 
  • #25
Heikki Tuuri said:
We must know something about the preparing apparatus. Then we can refine our estimate of the density matrix, based on individual prepared samples that we measure. In this sense, the density matrix really is an observable of the preparing apparatus.
DarMM said:
Chapter 9 of Peres's Quantum Theory: Concepts and Methods might be useful to read, especially 9-7, where he echoes what you say:
But this is the case also for classical probabilistic phenomena. We can never determine with certainty the distribution of any stochastic source, no matter whether the latter is modeled by classical mechanics or by quantum mechanics!

On the other hand, like with any quantity that physicists measure, we can obtain with very high confidence highly accurate estimates of probabilities or probability distributions if we are willing to spend enough effort. In this respect there is no difference between the distance of a galaxy to us, say, and a probability.
 
  • #26
A. Neumaier said:
But this is the case also for classical probabilistic phenomena. We can never determine with certainty the distribution of any stochastic source, no matter whether the latter is modeled by classical mechanics or by quantum mechanics!
Peres mentions this as well:
Peres said:
This is also true in classical information theory (since ##I_{av}## depends on the a priori probabilities ##p_i##), but the effect is more striking for quantum information, which can be supplied in subtler ways.
The "subtler ways" are the concern of the rest of Chapter 9 which @Heikki Tuuri may find interesting. If you do @Heikki Tuuri I'd recommend giving Barnett's textbook in the OUP Master's series "Quantum Information" a read as well.
 
  • #27
Demystifier said:
I don't understand, are you saying that ##\rho## is not positive definite for a pure state?
Of course it isn't. For a pure state, ##\hat{\rho}=|\psi \rangle \langle \psi|## has one eigenvector with a positive eigenvalue 1, namely ##|\psi \rangle##. All other eigenvalues are 0!
 

What is conservation of quantum information?

The conservation of quantum information is a principle in quantum mechanics that states that the total amount of quantum information in a closed quantum system remains constant over time. This means that the information encoded in the quantum states of a system cannot be created or destroyed, but can only be transformed or transferred to other systems.

Why is conservation of quantum information important?

Conservation of quantum information is important because it is a fundamental principle that governs the behavior of quantum systems. It helps us understand and predict the behavior of quantum systems and is essential for the development of quantum technologies such as quantum computing and quantum communication.

How is conservation of quantum information related to the laws of thermodynamics?

The conservation of quantum information is compatible with the second law of thermodynamics, which states that the total entropy (or disorder) of a closed system tends to increase over time. Unitary evolution conserves the fine-grained (von Neumann) entropy of a closed system, while the second law concerns coarse-grained entropy, which can grow as information becomes inaccessible in practice.

What are some practical applications of conservation of quantum information?

Conservation of quantum information has many practical applications, particularly in the field of quantum information processing. It is essential for the development of quantum computers, which use the principles of quantum mechanics to perform calculations that are impossible for classical computers. It is also crucial for quantum communication, which allows for secure transmission of information using quantum states.

Are there any exceptions to the conservation of quantum information?

There are apparent exceptions when quantum systems are not completely isolated. For example, when a quantum system interacts with its environment, information can leak into the environment through a process called decoherence, so it is no longer conserved within the system alone. In closed quantum systems, however, the conservation of quantum information holds true.
