States & Observables: Are They Really Different?

Summary:
In quantum theory, states and observables are typically regarded as distinct entities, even though both are represented by Hermitian operators. The density matrix, which represents a quantum state, should not be considered an observable, because it lacks the operational role of physical observables: observables are measurable quantities that emerge from interactions with measurement devices, while states describe preparation procedures. The physical meanings of the two therefore differ, with states linked to initial conditions and observables to measurable outcomes. The thread explores this nuanced relationship and reinforces the fundamental difference between states and observables in quantum mechanics.
  • #31
Morbert said:
What would the spectral decomposition of this matrix be? I.e. could you write it as ##\hat{S}=\sum_S S\,|S\rangle\langle S|##?
Suppose entropy were an observable in QM. Then, in general, for an ensemble of identically prepared states the observed value would differ from run to run, so
$$\sigma_S^2=\langle \hat{S}^2\rangle-\langle \hat{S}\rangle^2 \neq 0.$$
What would the eigenstates of entropy be? I do not know whether, e.g., a superposition of systems at ##T=0##, ##T_1##, ##T_2## is even a valid statement.
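To make the variance point concrete, here is a minimal numpy sketch. The diagonal "entropy operator" and the superposition state below are made up purely for illustration; nothing here claims such an operator actually exists.

```python
import numpy as np

# Hypothetical illustration: if entropy were an observable S_hat with
# eigenvalues s_i, its variance in a generic state would be nonzero.
S_hat = np.diag([0.0, 0.5, 1.0])          # stand-in "entropy operator" (its eigenbasis)
psi = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)  # superposition of its eigenstates
rho = np.outer(psi, psi.conj())           # pure-state density matrix

exp_S = np.trace(rho @ S_hat).real            # <S>
exp_S2 = np.trace(rho @ S_hat @ S_hat).real   # <S^2>
var_S = exp_S2 - exp_S**2
print(f"<S> = {exp_S:.4f}, Var(S) = {var_S:.4f}")  # Var(S) != 0
```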
 
  • #32
Here's my crude attempt at framing Shannon entropy as an observable to be measured, motivated by this paper. Consider first an experiment to measure the observable ##A## of a system with state space ##\mathcal{H}## prepared in the state ##\rho##. What we actually want to measure is not ##A##, but the Shannon entropy of this experiment.

First we model multiple experimental runs by creating a new state space that is a product of ##N## copies of the original state space: $$\mathcal{H}' = \underbrace{\mathcal{H}\otimes\mathcal{H}\otimes\dots\otimes\mathcal{H}}_{N\ \text{copies}}$$ This lets us frame the relative frequencies over multiple experimental runs as a single observable, such that a single outcome is an observed set of relative frequencies ##\{f_i\}## with probability $$p(\{f_i\}) = \mathrm{tr}\,\rho'\,\Pi_{\{f_i\}},$$ where ##\rho' = \rho^{\otimes N}## and ##\Pi_{\{f_i\}}## projects onto the outcome sequences with frequencies ##\{f_i\}##. For a standard experiment and a large enough ##N##, the probability of a particular set of relative frequencies, namely the one approximating the probabilities ##\{p_i\}## determined by ##\rho## and ##A##, should be close to 1. You can then compute the Shannon entropy the usual way from these frequencies.
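Here is a minimal numerical sketch of this idea (not the paper's construction itself), treating the ##N##-fold product measurement as ##N## independent Born-rule samples. The qubit state ##\rho## and the eigenprojectors of ##A## below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative qubit example: measure A in the basis {|0>, |1>}
# on N copies of an assumed density matrix rho.
rho = np.array([[0.7, 0.3],
                [0.3, 0.3]])                        # valid density matrix (tr = 1, positive)
proj = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # eigenprojectors of A

p = np.array([np.trace(rho @ P).real for P in proj])  # Born probabilities

N = 10_000                                          # number of copies / runs
outcomes = rng.choice(len(p), size=N, p=p)
f = np.bincount(outcomes, minlength=len(p)) / N     # observed relative frequencies

# Shannon entropy of the observed frequencies (in bits)
H = -np.sum(f[f > 0] * np.log2(f[f > 0]))
print(f"frequencies = {f}, H = {H:.4f} bits")       # f approximates p for large N
```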

To emphasise: Shannon entropy seems to be more an objective property of the experiment than a property of the microscopic system to be measured.
 
  • #33
Formally, entropy is not an observable. This is clear from its definition à la von Neumann,
$$S=-\mathrm{Tr}(\hat{\rho} \ln \hat{\rho}),$$
i.e., it is a property of the state rather than a state-independent observable: since ##S## is nonlinear in ##\hat{\rho}##, it cannot be written as the expectation value ##\mathrm{Tr}(\hat{\rho}\hat{S})## of any fixed Hermitian operator ##\hat{S}##.
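A small numerical sketch of this formula, computed from the eigenvalues of ##\hat{\rho}## (the density matrices are illustrative assumptions): the pure state gives ##S=0## while the maximally mixed qubit gives ##S=\ln 2##, which also exhibits the nonlinearity in ##\hat{\rho}##.

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """S = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)   # rho is Hermitian
    evals = evals[evals > 1e-12]      # 0 * ln 0 -> 0 by convention
    return float(-np.sum(evals * np.log(evals)))

# Pure state: S = 0 (entropy is a property of the state as a whole)
psi = np.array([1.0, 1.0]) / np.sqrt(2)
print(von_neumann_entropy(np.outer(psi, psi)))  # ~0.0

# Maximally mixed qubit: S = ln 2
print(von_neumann_entropy(np.eye(2) / 2))       # ~0.6931
```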

Also, the above assertions concerning thermodynamic entropy don't show that you can "measure" entropy directly. What's quoted from Wikipedia above, i.e., determining entropy differences via ##\mathrm{d} S=\delta Q/T##, where ##\delta Q## is the heat reversibly transferred to the system, is indeed only indirect and only valid for the special case of equilibrium.
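For concreteness, here is a small sketch of that indirect calorimetric route, numerically integrating ##\delta Q/T## along an assumed quasi-static path. All numbers are illustrative assumptions (1 kg of water with constant specific heat, heated reversibly from 300 K to 350 K).

```python
import numpy as np

# Indirect route: Delta S = integral of dQ/T along a quasi-static path.
# Assumed example: m = 1 kg of water, c = 4186 J/(kg K), heated 300 K -> 350 K,
# so dQ = m * c * dT at each step.
m, c = 1.0, 4186.0
T1, T2 = 300.0, 350.0

T = np.linspace(T1, T2, 100_000)
dS_numeric = np.trapz(m * c / T, T)   # integral of dQ/T = m c dT / T
dS_exact = m * c * np.log(T2 / T1)    # closed form: m c ln(T2/T1)
print(f"numeric: {dS_numeric:.2f} J/K, exact: {dS_exact:.2f} J/K")
```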
 
