Collapse and unitary evolution

Summary:
Susskind argues that information cannot be lost in quantum mechanics, emphasizing the importance of unitarity, which he believes is a fundamental law. He suggests that while the collapse interpretation of quantum mechanics implies non-unitary evolution at the moment of measurement, the evolution remains unitary prior to measurement. The discussion also touches on black hole evaporation, which raises questions about non-unitary evolution before measurement occurs. Critics point out that the AdS/CFT correspondence, while promising, remains unproven and may not apply to our universe. Ultimately, the debate centers around the implications of unitarity and the nature of measurement in quantum mechanics.
  • #31
PaleMoon said:
Take an electron whose spin you want to observe along some direction. You send it through a Stern-Gerlach apparatus and it interacts with it. At this level you have measured nothing. You need a screen in front of the possible paths; you need something to encode the output.
So not every interaction is an observation. It has something to do with reversibility and unitarity.

I would say the whole process of "preparation" indeed constitutes an observation history in the generalized sense. Except of course, the formalism to really cast it this way is still being sought.

If you see my abstract hint at the BTSM link above, you see that I argue that any interaction of a composite system, as observed by O5, can be abstracted as O5 observing other subsystems observing each other.

Now, if O5 is dominant, and effectively is the classical lab frame, the situation is so asymmetric that we are allowed to make the split that current QM and QFT build on. But this split is problematic as we ponder unification and QG. There is also a parallel to this thread: https://www.physicsforums.com/threads/why-higher-category-theory-in-physics-comments.899167/page-2

My point was really that once you think about this, the old information-paradox discussions become moot, because they mix frameworks that don't belong together and extrapolate things in ways that are doubtful.

/Fredrik
 
  • #32
PeroK said:
if citizenship is a defining property of a human being, then a human being is not a quantum object

This is not correct.

Consider the parallel argument: we can't "read off" from a particular configuration of chemical elements, that some particular piece of matter is a US citizen. Therefore, US citizens are not made of chemical elements.

Since we know US citizens are humans, and humans are made of chemical elements, there is obviously something wrong with this argument. Your argument for humans not being quantum objects because they are US citizens has the same problem.

PeroK said:
the question is whether a complex system can assume properties not inherent in the underlying atomic configuration. The reductionist position would be that it cannot.

No, that's not the reductionist position. The reductionist position is that, no matter what set of properties a complex system has, the system is still made of the same small set of fundamental objects. Or, to put it another way, you don't need any new laws of physics or any new fundamental constituents of matter to make, say, US citizens, as opposed to, say, rocks. You just need to put together the same fundamental constituents, using the same laws of physics, in different ways.

PeroK said:
does that mean that what someone has written or achieved in life is either not a defining part of them or is inherent in their current atomic structure?

Now you're confusing a system's state at one instant of time with its entire history. Obviously these aren't the same thing, so for clarity you should explicitly say which one you are interested in (do you care that a person is a US citizen at this instant, or are you interested in how they became one?). But that's irrelevant to the question of what the system is made of.
 
  • #33
stevendaryl said:
Well, the standard formalism assumes that when someone performs a measurement, he knows what it is that he is measuring, and furthermore that he can recognize a distinct outcome for the measurement. It doesn't really explain these two things, but just assumes them. The founders of the Copenhagen interpretation, Bohr and Heisenberg and those guys, made the distinction between microscopic systems, which are described by quantum mechanics, and measurements/observations, which are (approximately) described by classical mechanics.
Yes, we agree here.

But this idealization/split, IMO, is no longer sound when you start to think about TOE unification, QG, and the information paradoxes.

/Fredrik
 
  • #34
PeterDonis said:
No, that's not the reductionist position. The reductionist position is that, no matter what set of properties a complex system has, the system is still made of the same small set of fundamental objects. Or, to put it another way, you don't need any new laws of physics or any new fundamental constituents of matter to make, say, US citizens, as opposed to, say, rocks. You just need to put together the same fundamental constituents, using the same laws of physics, in different ways.

There is a connection between the critique of the reductionist approach and the idea that any interaction can be seen as an inside observation by a subatomic observer (not a physicist, of course).

The idea is that when all observers are of low complexity, this puts a computational LIMIT on how complex the interactions they can encode may be. This is why, at high energy, the rules are - from the inside view - BOUND to be simpler and simpler. More complex interactions and "new laws" become physically ALLOWED only when the temperature drops and the complexity of the interacting parts increases.

So from the inside perspective, new interactions do emerge as complexity increases, interactions that were physically impossible at lower complexity.

Reductionists try to SAVE this situation by imagining an external, noninteracting observer that has infinite encoding capacity. One then imagines that these laws were always there. Surely this does work up to large energies, as an Earth-based lab can probe subatomic scales and "save reductionism", but only up to a certain scale. Beyond that, a new paradigm is needed to bridge cosmological evolutionary theory with reductionist particle physics.

/Fredrik
 
  • #35
bhobba said:
I think 1. has been solved, at least Schlosshauer thinks it has (I do as well but am not as expert as he is).

I'm unfamiliar with what Schlosshauer has said about it.

I know that decoherence shows that a preferred basis follows from the system/environment split. Once you've made such a split and traced out the environmental degrees of freedom, then you have a density matrix which you can diagonalize to get a preferred basis (the one in which the density matrix is diagonal). But the system/environment split is the part that seems subjective to me.
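A minimal NumPy sketch of that procedure (a hypothetical two-qubit toy model, not anything from the thread itself): entangle a system qubit with orthogonal environment states, trace out the environment, and diagonalize the reduced density matrix. The off-diagonals vanish exactly because the environment states are orthogonal; the subjectivity stevendaryl points to lies in deciding which factor counts as "environment" in the first place.

```python
import numpy as np

# Toy model: a "system" qubit entangled with a one-qubit "environment".
# (Hypothetical amplitudes; any a, b with |a|^2 + |b|^2 = 1 would do.)
a, b = np.sqrt(0.7), np.sqrt(0.3)
e0 = np.array([1.0, 0.0])                    # env state correlated with |0>
e1 = np.array([0.0, 1.0])                    # env state correlated with |1>
psi = a * np.kron([1.0, 0.0], e0) + b * np.kron([0.0, 1.0], e1)

rho = np.outer(psi, psi.conj())              # full 4x4 density matrix (pure)

# Partial trace over the environment: reshape to (sys, env, sys', env')
# and sum over the matched environment indices.
rho_sys = np.einsum('iaja->ij', rho.reshape(2, 2, 2, 2))

# Diagonalizing the reduced density matrix yields the preferred basis.
eigvals, eigvecs = np.linalg.eigh(rho_sys)
print(np.round(rho_sys, 3))                  # off-diagonals are zero
```

Swapping in non-orthogonal environment states (partial decoherence) leaves small off-diagonal terms, which is a quick way to see how the "preferredness" of the basis depends on the split.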
 
  • #36
Fra said:
new interactions do emerge as complexity increases, interactions that were physically impossible at lower complexity.

It depends on what you mean by "new interactions". There are no "new interactions" going on inside a human being, for example, that aren't explainable in terms of the four fundamental interactions in the Standard Model of particle physics. But you have to do a lot of experimentation and analysis to see that; it's not easily visible on the surface the way it is for subatomic particles in the LHC, for example. That's because a human being is a lot more complex than a subatomic particle. But "a lot more complex" does not mean "made of fundamentally different stuff".

Fra said:
Reductionists try to SAVE this situation by imagining an external, noninteracting observer that has infinite encoding capacity.

You're going to have to give some references, because this doesn't look like any kind of reductionism that I've seen.
 
  • #37
stevendaryl said:
But the system/environment split is the part that seems subjective to me.

Yes, that is a problem. We have no theorem that says the cut doesn't matter, or that, if it does, the situations where it matters are of no practical importance.

It's one of the issues where further research is required.

Thanks
Bill
 
  • #38
I was thinking of such a split when I put the cut between Wigner and the particle, and next between Wigner's friend and Wigner+particle.
 
  • #39
bhobba said:
Yes that is a problem. We have no theorem that says the cut doesn't matter, or if it does the situations where it applies is of no practical importance.

It's part of the issues further research is required.

Thanks
Bill

I don't know how it happened, but the line you are quoting is not from a post by @martinbn. It's from me.
 
  • #40
PaleMoon said:
I was thinking of such a split when I put the cut between Wigner and the particle, and next between Wigner's friend and Wigner+particle.

Von Neumann showed it can be placed anywhere - so you are OK. The issue is what you wish to infer from it. Von Neumann inferred that since no place is special, you should place it at consciousness. That raises other issues I will not go into here - it's a legitimate interpretation, but it carries so much baggage (e.g. is an amoeba a conscious observer, or a cat, a dog, or only human beings; is a Cro-Magnon a human being for this purpose - there are many others as well, especially now that we have computers, which of course von Neumann helped lay the foundations of) that it's not popular these days.

These days, to avoid such difficulties, we generally place it just after decoherence - which exposes the error in von Neumann's reasoning: there is a place that is special, but he didn't know about it at the time. Wigner lived long enough to see its development, and when he saw some early papers on it by Zeh he did a 180-degree about-face, advocating objective collapse interpretations.

Thanks
Bill
 
  • #41
stevendaryl said:
I don't know how it happened, but the line you are quoting is not from a post by @martinbn. It's from me.

Beats me - but now fixed. Will respond to Martin now.

Thanks
Bill
 
  • #42
martinbn said:
Why is that?

It does under Atty's view of collapse. To avoid any confusion with terms, the mentors (including me) decided to stick to Atty's view. But it's best if he explains the details.

Thanks
Bill
 
  • #43
Could you give (one more time?) a link to Atty's collapse?
Thanks
 
  • #44
PeterDonis said:
It depends on what you mean by "new interactions". There are no "new interactions" going on inside a human being, for example, that aren't explainable in terms of the four fundamental interactions in the Standard Model of particle physics. But you have to do a lot of experimentation and analysis to see that; it's not easily visible on the surface the way it is for subatomic particles in the LHC, for example. That's because a human being is a lot more complex than a subatomic particle. But "a lot more complex" does not mean "made of fundamentally different stuff".

You're going to have to give some references, because this doesn't look like any kind of reductionism that I've seen.

With the note that all interactions may be observations, I am just suggesting seeing things from a different perspective, and arguing that it's sound. I commented on this as I personally think it is an important point. So if someone mentions this and wonders if it's a silly idea, I will just say that there is at least one more who thinks so :cool: I am not making any other claims at this point.

The perspective I hold is one that takes real predictive power more seriously, by suggesting that the computational constraints are physical constraints, not just a practical matter. It also assumes that interaction with the environment and some kind of computational response system exist and are a universal trait common to all systems, humans and electrons alike.

You will not have much predictive value or explanatory power for a human being starting from particle physics, because this strategy will give you a chaotic dynamical system without any predictive power whatsoever. Thus it is not a fit theory at all, and thus cannot be how nature implements this. The idea that it should work in principle, given an infinitely powerful and high-precision computer, is from the survival perspective worthless.

Biological systems are extremely robust in a way that seems unreasonable given their complexity. You cannot explain the robustness of these systems from a reductionist model of particle physics.

Similarly, no one has yet found a TOE in the reductionist sense, or "explained" the robustness of low-energy physics from first principles. Instead we face unreasonable fine-tuning questions.

IMO, these are symptoms of methodological reductionism.

This is what I mean by a "noninteracting observer with infinite encoding capacity": something that just COLLECTS and records data, to produce arbitrary amounts of statistics of repetitive processes, from which laws are abduced. This is indeed how the "observer" in human science works, and in high-energy physics in particular. It's instructive to see how deeply the standard model of particle physics is rooted in this. I think many physicists take this for granted and have developed a mindset that makes it very difficult to see other perspectives. Personally, I learned a lot from studying living cells and trying to understand the behaviour of a cell. One soon realizes that the evolutionary perspective is the best perspective. Reductionism simply does not work.

And i think when it comes to unification and QG, landscapes etc, we are in a similar situation.

From the reductionist view, "effective theories" are approximations and thus "less fundamental". But from the opposite view, the reductionist theories are idealisations that ignore important constraints that cannot be dismissed as practical matters.

/Fredrik
 
  • #45
bhobba said:
It does under ATTY's view of collapse. To avoid any confusion with terms the mentors (including me) decided to stick to ATTY's view. But best if he explains the detail.

Thanks
Bill
Is that written somewhere so that the rest of us can see what that view is? Also, why? I thought that one strict rule here was that only peer-reviewed sources are acceptable. Then why is a view expressed on a forum taken as the rule on terminology, when peer-reviewed sources, including textbooks, say that at least some ensemble interpretations do not have collapse?
 
  • #46
martinbn said:
Is that written somewhere so that the rest of us can see what that view is? Also, why? I thought that one strict rule here was that only peer-reviewed sources are acceptable. Then why is a view expressed on a forum taken as the rule on terminology, when peer-reviewed sources, including textbooks, say that at least some ensemble interpretations do not have collapse?

Perhaps we should first state what is meant by an ensemble interpretation.
 
  • #47
stevendaryl said:
Perhaps we should first state what is meant by an ensemble interpretation.
Yes, but it seems that there is an agreed-upon view that, whatever is meant by an ensemble interpretation, it has collapse.
 
  • #48
PaleMoon said:
The statistical ensemble interpretation does not need collapse, and unitarity is safe.

One problem is that this unitarity, and conservation of probability, depends on something that breaks down for cosmological models. So the idea that conservation of P is some "universal law" is to me a ubiquitous fallacy. It is merely an expectation from the Newtonian paradigm that it has become habit to extrapolate beyond its rational validity.

That something is the ability to collect statistics, which presumes that experiments can be repeated, that all the information can be encoded and processed, and that the "observer" doing this is never saturated or limited in information-handling capacity.

These requirements are fulfilled for particle physics - this is what has led to the successful Newtonian schema (as Smolin calls it) - but not for cosmology, and more importantly, not for ANY theory that is inferred by an "inside observer". There it all becomes hypothetical imaginary ensembles at best, with no chance of standing up to experimental test.

If we look at how scientific progress works, we realize that human "knowledge" of the laws of nature has evolved, and continues to evolve, and this process is bound to be constrained by the relative capacity of Earth-based science versus the vastness and complexity of the universe.

/Fredrik
 
  • #49
Fra said:
You will not have much predictive value or explanatory power for a human being starting from particle physics, because this strategy will give you a chaotic dynamical system without any predictive power whatsoever. Thus it is not a fit theory at all, and thus cannot be how nature implements this.

Huh? So "we can't model a human being at the level of subatomic particles" means "human beings are not made of subatomic particles"? That's nonsense. But I can't see any other way of reading this claim.

Fra said:
Biological systems are extremely robust in a way that seems unreasonable given their complexity. You cannot explain the robustness of these systems from a reductionist model of particle physics.

Same response as above.

Fra said:
no one has yet found a TOE in the reductionist sense

Which in no way implies that human beings and other macroscopic objects are not made of the subatomic particles we know of.

Fra said:
This is what I mean by a "noninteracting observer with infinite encoding capacity": something that just COLLECTS and records data, to produce arbitrary amounts of statistics of repetitive processes, from which laws are abduced.

Huh? We have constructed our models and the laws they contain from finite amounts of data, not infinite amounts.

Fra said:
the reductionist theories are idealisations that ignore important constraints that cannot be dismissed as practical matters.

All models are idealizations. That doesn't mean human beings aren't made of subatomic particles.

As far as I can see, by "reductionism" you don't mean any view that any physicist actually holds, but a straw man "naive" reductionist view that nobody is actually claiming or defending.
 
  • #50
The standard statistical interpretation of quantum mechanics - that its predictions apply to statistical ensembles - is given in many textbooks, e.g. Messiah, Quantum Mechanics, Volume 1, Chapter IV, Section 2.
 
  • #51
PeterDonis said:
Huh? So "we can't model a human being at the level of subatomic particles" means "human beings are not made of subatomic particles"? That's nonsense. But I can't see any other way of reading this claim.
...
Which in no way implies that human beings and other macroscopic objects are not made of the subatomic particles we know of.

Of course we are made of subatomic particles if you smash a human to parts. This is mechanistic or ontological reductionism. And we know how, say, any two subatomic particles interact.

But this is not a viable strategy for one human interacting with other humans. Instead, the complex systems develop behaviour that, due to chaos, cannot be inferred from knowledge of the interaction of the parts. It is not even viable for a genius with the most powerful computer on Earth.

I am talking about methodological reductionism.
https://en.m.wikipedia.org/wiki/Reductionism

This method of course works - but only up to the point where we hit the chaos limit. There, viable organisms find NEW, more significant interaction rules that fit the "information processing" resources at hand. And these resources are physical properties of the observing system. A human can afford to adopt more reflective behavioural strategies than a cell.

/Fredrik
 
  • #52
atyy said:
The standard statistical interpretation of quantum mechanics that its predictions apply to statistical ensembles is given in many textbooks, eg. Messiah, Quantum Mechanics, Volume 1, Chapter IV, Section 2.
And many of those textbooks do not include collapse in the interpretation. So, can you write down what your view is (bhobba alluded to it, but I haven't seen it) and explain why you insist on collapse being part of this type of interpretation? Can you also narrow down the reference to Messiah? I couldn't find anything about collapse in that section.
 
  • #53
Fra said:
Of course we are made of subatomic particles if you smash a human to parts.

So you're saying we're not made of subatomic particles if we're left intact?

Fra said:
this is not a viable strategy for one human interacting with other humans. Instead, the complex systems develop behaviour that, due to chaos, cannot be inferred from knowledge of the interaction of the parts.

Yes, but that just means we can't model intact humans directly as conglomerations of interacting subatomic particles. It does not mean that intact humans are not made of subatomic particles. The latter is the only claim that "reductionism" is making.
 
  • #54
PeterDonis said:
So you're saying we're not made of subatomic particles if we're left intact?
No, I'm just saying that what complex systems are "really made of" if we take them apart is not the only important thing. Taking something apart requires less information than putting it back together.

What I focus on is the fitness of models/theories in a competitive survival perspective. Theories that contribute to real predictability and a good adaptive learning methodology are likely preserved by nature.

PeterDonis said:
Yes, but that just means we can't model intact humans directly as conglomerations of interacting subatomic particles. It does not mean that intact humans are not made of subatomic particles. The latter is the only claim that "reductionism" is making.

Right. But what we are "made of" in the ontological sense means nothing to me. It's the responses to perturbation that give this all meaning. And chaotic predictions have no predictive value.

The question is: which modelling strategy can a given observer (having given resources) adopt to make maximal progress and survive? This is the question I focus on. And here, reductionism seems sterile.

/Fredrik
 
  • #55
Fra said:
I'm just saying that what complex systems are "really made of" if we take them apart is not the only important thing. Taking something apart requires less information than putting it back together.

And I'm saying that "reductionism" is in no way incompatible with that. I'm a reductionist and I agree with this statement. So, I suspect, do most reductionists.
 
  • #56
martinbn said:
Is that written somewhere so that the rest of us can see what that view is? Also why? I thought that one strict rule here was that only peer reviewed sources are acceptable. Then why a view expressed on a forum is taken as the rule on terminology when peer reviewed sources, including textbooks, say that at least some ensemble interpretations do not have collapse!

Atty has sent me a private message, but I would rather he explain than that I repeat the message. There have been long threads where Atty and I go to and fro about this issue. It's really just semantics IMHO, and I shouldn't do it - we are here to help people rather than confuse them. So it was agreed to take whatever definition Atty gives it. For the time being, just take it as: after an observation, the system is in an eigenstate of the observable.

Since that is part of the formalism, it is always true - though, when looking at it in the usual way, some care is required in MW. In MW overall there is no state change, but in a specific world there is.
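As a numerical illustration of that working definition (a toy sketch of the textbook projection postulate, not anything specific to Atty's formulation; the observable and state below are hypothetical choices): measuring an observable yields one of its eigenvalues with Born-rule probability, and the post-measurement state is the corresponding eigenstate.

```python
import numpy as np

# Toy collapse per the projection postulate: measure sigma_z on a qubit.
# (Hypothetical state; the angle t is arbitrary.)
sz = np.array([[1.0, 0.0], [0.0, -1.0]])       # observable: sigma_z
t = np.pi / 6
psi = np.array([np.cos(t), np.sin(t)])         # pre-measurement state

eigvals, eigvecs = np.linalg.eigh(sz)          # eigenbasis of the observable
probs = np.abs(eigvecs.conj().T @ psi) ** 2    # Born-rule probabilities

rng = np.random.default_rng()
k = rng.choice(len(eigvals), p=probs)          # sampled measurement outcome

# "Collapse": the post-measurement state is the eigenstate belonging to
# the observed eigenvalue, so an immediate repeat gives the same result.
post = eigvecs[:, k]
print("observed eigenvalue:", eigvals[k])
```

Re-running the measurement on `post` reproduces `eigvals[k]` with probability 1, which is the repeatability that the eigenstate condition encodes.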

Thanks
Bill
 
  • #57
martinbn said:
Yes, but it seems that there is an agreed upon view that whatever is meant by an ensemble interpretation it has collapse.

That one is easy. It's as detailed in Ballentine's text, and was proposed by good old Einstein, although in light of later research (e.g. the Kochen-Specker theorem) it had to be modified a bit. My interpretation is not strictly the ensemble interpretation - I call it the ignorance ensemble, because I only apply the ensemble interpretation after decoherence - so it's slightly different.

An interesting thing about Ballentine: he believes decoherence has nothing to do with interpretational issues - he says it's a real and interesting phenomenon, of course, but of no interpretive value. Interesting, isn't it?

Thanks
Bill
 
  • #59
PeterDonis said:
And I'm saying that "reductionism" is in no way incompatible with that. I'm a reductionist and I agree with this statement. So, I suspect, do most reductionists.

OK, I guess the subtle point I wanted to convey got lost.

I thought it related to the information-loss paradox issue because, in what I suggest, information is unavoidably observer-dependent. And to compare the information possessed by any two observers in some way, you need a third observer, etc. There is no "master observer". It's completely democratic. The only difference is that some observers are bigger and more dominant.

When you wrote this:
PeterDonis said:
...you don't need any new laws of physics or any new fundamental constituents of matter to make, say, US citizens, as opposed to, say, rocks. You just need to put together the same fundamental constituents, using the same laws of physics, in different ways.

In a way I actually disagree with this. The reason is subtle but important. It has to do with how you understand the origin and evolving nature of physical law in the first place. If you think that the laws are just eternal truths about nature, then indeed there is no way to make sense of what I suggest.

If you instead consider effective laws, physically inferred by a physical observer through an actual interaction history, then for any given observer the distinguishable laws inferred from studying particle physics in colliders will come from a different window in theory space, one that will not contain the same information as the optimal laws inferred from the whole human, if the same observer did social-interaction experiments. On this view, these inferred effective laws are epistemologically more "real" and fundamental than the idea of eternal timeless laws. The only painful insight is that laws are actually evolving along with the development of the universe and its "species" (whether biological organisms, or particles during cooling from the big bang).

This all gives a drastically different perspective on things. In particular, one where the premises of the semiclassical information paradox don't quite hold.

And my impression is that many reductionists will strongly adhere to the idea of timeless eternal laws. Instead they may say that there is a difference between the real eternal laws and our incomplete knowledge of them. And this is exactly what makes one look for a bigger and bigger supertheory, instead of focusing on the abductive mechanism by which nature implements the rules corresponding to laws.

How does an electron "know" what laws to obey? Of course it does not "know" in the human conscious sense, but still: HOW is it physically implemented that it responds according to apparently strict laws? In some way, it seems the electron must have a physically encoded structure that implies this. And during unification, how is this structure challenged? Is there a "DNA of physical law"?

I might recommend https://www.amazon.com/dp/0544245598/?tag=pfamazon01-20 as background on why one would bother with these crazy ideas. The implied question of Smolin's argument is: HOW can we get predictability from evolution of law without secretly adding some hidden metalaw - the metalaw dilemma? This is an open question, and Smolin only sniffs at some answers, but to understand why one would bother creating such a hard new question, the argument, surveying the crisis in physics, is in Smolin's book.

/Fredrik
 
  • #60
martinbn said:
Is that written somewhere so that the rest of us can see what that view is? Also, why? I thought that one strict rule here was that only peer-reviewed sources are acceptable. Then why is a view expressed on a forum taken as the rule on terminology, when peer-reviewed sources, including textbooks, say that at least some ensemble interpretations do not have collapse?

The rule is peer reviewed sources, respected textbooks, course material from respected universities like MIT, and exceptions for things the mentors agree are OK - we occasionally get those.

The issue here is that some textbooks make mistakes, e.g. the famous one in Ballentine about Copenhagen. And we have most definitely had a number of peer-reviewed papers discussed here with errors - it seems rife in the area of weak measurements and what they mean. The use of virtual particles is another, as in the peer-reviewed derivation of the Casimir force by Milonni. That was considered OK at the time - but things do move on.

As far as collapse goes, take The Emergent Multiverse by Wallace - he states categorically (page 22) that collapse is not part of the formalism of QM; same with Schlosshauer, who has his own definition that he thinks not all interpretations obey. But a course on OCW at MIT disagrees - they say it's part of the axioms of QM, so all interpretations have it - see axiom 3b:
https://ocw.mit.edu/courses/nuclear...s-fall-2012/lecture-notes/MIT22_51F12_Ch3.pdf

We do not want to confuse people here, so we will just stick with collapse being axiom 3b, which I think Atty agrees with.

Thanks
Bill
 
