Understanding MWI: A Newbie's Guide to Quantum Physics and the Multiverse

  • Thread starter: confusedashell
  • Tags: MWI
Summary
The discussion revolves around the Many-Worlds Interpretation (MWI) of quantum physics, with participants expressing confusion about its implications for reality and personal identity. Key points include the notion that MWI suggests multiple universes exist simultaneously, leading to existential concerns about personal relationships and the nature of existence. Participants debate whether MWI implies constant movement between universes or if individuals remain in a consistent universe throughout their lives. There is a consensus that MWI is just one interpretation of quantum mechanics, lacking definitive observational evidence to validate its claims. Ultimately, the conversation highlights the emotional and philosophical challenges posed by MWI, urging a balanced approach to understanding quantum theory without losing touch with everyday reality.
  • #121
Fra -- Right on

Great to have an ally. The Bohr quote is great -- thanks.
Regards,
Reilly




Fra said:
IMO, the only thing that collapses is the projection that lives inside the observer, which I see as no weirder than a Bayesian update.

Why my view of the world changes when I receive more information about it is quite obvious. Whatever the world REALLY is, it still has to be projected onto my perspective.

I think these words of Niels Bohr's still stand:

"It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature...".

/Fredrik
 
  • #122
reilly said:
Yes, I did not need to wave my credentials, and apologize for so doing. When I was in graduate school and then when I was teaching, physics was a contact sport -- more than once I got hammered when giving a seminar, and more than once did the hammer thing myself. I still, after more than 40 years, have a Pavlovian response when there's even a suspicion that my credentials or ideas are being challenged. Much to my chagrin, I do not always keep my cool under such circumstances.
Thanks for that, and no offense taken, I know from personal experience that it's especially easy on the internet to jump to conclusions about someone's tone or about what they're implying.
reilly said:
I know there are many who agree with me when I say that you never really understand QM until you have taught it, which means first a dissertation or long paper based on QM. In other words, you have to do it. Book learnin' is not enough. Got to deal with hbars, and 2pis, and signs, and tons of algebra with reality checks.

Your questions are perfectly reasonable.

Forget about shadow photons unless you want to get hooked into a long chain of contradictions.
Well, I wouldn't disagree with the idea that the best way to learn physics is by doing. But interpretations like the MWI or Bohmian mechanics involve a fair amount of mathematical elaboration too, and I think it'd be a mistake to dismiss some element of an interpretation as self-contradictory based only on what one of the interpretation's advocates says in a nontechnical discussion aimed at a broad audience (of course, you might also argue that thinking about 'interpretations' is pointless if they make no new predictions, even if the interpretation involves no internal contradictions).
 
  • #123
reilly said:
Worse yet, I believe in wave-function collapse; it occurs in people's brains as we gain knowledge of which alternative actually happens. There is absolutely no doubt that such a mental collapse occurs; we've all experienced such a collapse or change in mental state many times. You are stuck for a moment seeing someone you might have known once. Then "Aha, yes, that's Ed from my previous job." That is, we get a change in mental state as our knowledge changes. And people in the neurosciences are understanding more and more how this "collapse" occurs.

This knowledge-based approach was championed by the Nobelist Sir Rudolph Peierls. It ties into what I like to call the Practical Copenhagen Interpretation -- PCI. That is, use the Schrodinger equation, or appropriate variations thereof, to compute wave functions; and use Born's idea that the absolute square of the wave function is a probability density.

But the point is that the absolute square of the wave function isn't ALWAYS a probability density! So when does it *become* a probability density? This is the main issue which leads to considerations of an MWI approach.
See, in the double slit experiment, when the wave function is a superposition of "is in left slit" and "is in right slit", it is NOT a probability density. For if it were, we wouldn't have any interference: the final probability of a hit at position X would simply be P(X|A) P(A) + P(X|B) P(B), where A stands for slit 1 and B stands for slit 2.

So along the way, the wavefunction evolves from something that clearly is NOT a probability density (when it is at the two slits) to something that IS a probability density (when it "hits the screen"). It is this undocumented "change in nature" of the wavefunction (from "a physical entity which is not a probability" to "a probability") which stands in the way of a pure "bayesian collapse" view on the measurement problem.

You can get around this in several ways. You can insist on the bayesian nature all the way, but then, in order to get out interference, you have to introduce extra elements. That's what the Bohmians do. The particle DID go through just slit 1 or just slit 2, and it was a complicated dynamics with a *physical* wavefunction which NEVER becomes a probability distribution which guides it to the interference image.
Or you can insist on the non-bayesian nature of the wavefunction all the way, accepting it as a purely physical state, and that's what MWI does for instance.
You can also alter the dynamics of the wavefunction, and give it a physical status all the way, at which point the "collapse" is just part of the (non-linear) dynamics.

But you cannot "sneak in a change of PoV", which is implicitly done when the wavefunction goes from (certainly not a probability function) on the microscopic level to (a probability function) at the Heisenberg cut, without at least some explanation of where this change came about, because the question that comes up then is:
why can't I consider, from the start, an electron orbital as just a probability density to find the electron? And you know very well that if you do so, you do not get out the same results as when you keep the electron wavefunction as a wavefunction and not a statistical mixture of positions.
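The difference is easy to see numerically in a toy one-dimensional sketch (the Gaussian envelope, wavenumber and slit geometry below are all made up for illustration; this is not a model of any specific experiment):

```python
import cmath
import math

def slit_amplitude(x, center, k=6.0, L=5.0):
    # Toy amplitude at screen position x for a slit at `center`:
    # a broad Gaussian envelope times the phase picked up along the
    # path from the slit to the screen (a distance L away).
    r = math.sqrt((x - center)**2 + L**2)
    return math.exp(-x**2 / 50) * cmath.exp(1j * k * r)

xs = [i * 0.01 - 10.0 for i in range(2001)]       # screen positions
psi1 = [slit_amplitude(x, -1.0) for x in xs]      # "went through slit 1"
psi2 = [slit_amplitude(x, +1.0) for x in xs]      # "went through slit 2"

# Quantum rule: add the AMPLITUDES, then take the absolute square.
p_quantum = [abs(a + b)**2 / 2 for a, b in zip(psi1, psi2)]

# "Probability density all along" rule: P(X) = P(X|A)P(A) + P(X|B)P(B).
p_mixture = [(abs(a)**2 + abs(b)**2) / 2 for a, b in zip(psi1, psi2)]

# The two disagree by exactly the oscillating cross term Re(psi1* psi2):
# those are the fringes, and they are absent in the mixture.
fringes = [q - m for q, m in zip(p_quantum, p_mixture)]
print(max(abs(f) for f in fringes))   # of order 1, not 0
```

The point is only qualitative: as long as no which-slit information exists, the 50/50 "total probability" bookkeeping misses the cross term entirely.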
 
  • #124
I agree with part of what Reilly says but also with part of what vanesch says. I don't know if my view is the same as Reilly's all the way, but this is what I personally think:

A plain bayesian view _alone_ is insufficient (agreeing with vanesch), but it's IMO most probably part of the progress, though not the end of the story! That's why I think we need to go back to probability theory and think again; we need more...

IMO, the clearly identifiable problem is that the probability space is not clearly coupled to the current information and history as I think it should be; this is why the arbitrary decomposition between coherent and non-coherent mixtures occurs. I don't see this as a pure QM problem; it goes deeper and may relax unitarity, because the probability space itself is uncertain and dynamical. I think this falls in the category that vanesch calls the non-linear approach.

This is what I personally see as the route forward in the spirit of "minimum speculation".

vanesch said:
But the point is that the absolute square of the wave function isn't ALWAYS a probability density! So when does it *become* a probability density?

I'd choose to say it becomes a probability density when more information about the "microstructure" of our system is collected, or when the probability space has stabilised (more or less the same thing). But of course this is a non-unitary process. The evolution isn't effectively unitary until the probability space has stabilised.

This is reasonably similar to some decoherence-based ideas, but one still needs to quantify, in terms of knowledge, how certain we are that we have a closed system. Usually we don't know, and I consider that a fact; it just _seems to be closed_, and I suggest we quantify this. Also, considering the system + observer doesn't solve the point, it just moves the point.

I imagine the "probability space" as something that is emergent as the observer evolves in the environment. Once the observer is equilibrated in its environment, I think the residual uncertainty will most probably be the ordinary unitary QM. But that's clearly a special case.

I can't wrap my head around unitary evolution in the general case. The unitary picture needs to be selected.

/Fredrik
 
  • #125
I no longer understand half of this thread :P but anyway, thanks a lot guys.
I've decided not to put too much faith in MWI, not only because it's bizarre, but because I was handed some evidence and tests done against it which convince me it's bull****.
I think MWI is built purely on the wishful thinking of some and the conviction of others.
Since there's NO proof or anything indicating MWI to be true, I think I'll go on living in reality without "reconsidering" how it works :PP

Thanks a lot
 
  • #126
Confused, I'd suggest you try to spend some time thinking on your own. There is usually no replacement for thinking on your own and coming to your own conclusions. Learning from others is great, but some things you just have to decide for yourself. If you feel you can not make such a decision, I suggest you read up on the foundations and the philosophical issues with measurements and go through the pain :) Chances are you haven't yet appreciated the questions to start with; the possible answers then of course make no sense, because they're difficult to relate to.

/Fredrik
 
  • #127
Hans de Vries said:
This is not the point. I'm simply objecting to the lingo he uses, like: "The outcome of this experiment depends on events in another universe" ...
while he describes a simple optical interference experiment. One can have an interpretation hypothesis, but one shouldn't preach it as absolute truth.

OK, I agree that he should use some precautions in introducing the interpretation. However, concerning the experiment, in lecture 2 he's describing a single-photon setup, so it can be correctly viewed as a quantum computation without classical analog.
 
  • #128
Fra said:
Confused, I'd suggest you try to spend some time thinking on your own. There is usually no replacement for thinking on your own and coming to your own conclusions. Learning from others is great, but some things you just have to decide for yourself. If you feel you can not make such a decision, I suggest you read up on the foundations and the philosophical issues with measurements and go through the pain :) Chances are you haven't yet appreciated the questions to start with; the possible answers then of course make no sense, because they're difficult to relate to.

/Fredrik
Good advice.
It's just that MWI seems absurd to me, from common sense and what I know of science and quantum physics, and someone on here provided me links to proof against MWI, so.
Don't know why more money is even wasted on researching it; guess it's like... faith lol :P

I mean, in one universe you'd get plastic surgery to look like your mom after raping and killing her, just because it's physically possible. I mean, seriously, how the **** can anyone believe in **** like this?
 
  • #129
Vanesch, have you been thinking about writing a long pedagogical paper on MWI where all the FAQs would be clearly and honestly answered? I strongly believe that you could do that very well, perhaps better than any existing paper on the subject. Next time somebody asks a question, you just tell him: go and read Section XX in my paper on arXiv. If you do that paper well (as I am sure you could), you could even publish it in a journal like Foundations of Physics.
 
  • #130
confusedashell said:
Good advice.
It's just that MWI seems absurd to me, from common sense and what I know of science and quantum physics, and someone on here provided me links to proof against MWI, so.
There are no "proofs" against MWI around. Also, common sense is useless, since many scientific theories, like relativity, also go against common sense and are nevertheless correct. There are two kinds of people who could be interested in MWI. The first is scientists and philosophers of science, and they of course need to deal with MWI seriously; for them it is not a waste of time at all. The second is everyone of us in their daily life, and in this case you don't need anything like an interpretation of quantum theory as a foundation for your actions and moral life, the way you're trying to use it. Better go out with your girlfriend!

confusedashell said:
I mean, in one universe you'd get plastic surgery to look like your mom after raping and killing her, just because it's physically possible.
No, you would not do that even in MWI.
 
  • #131
If you don't do every possible action, then MWI is ultimately wrong. So MWI is ultimately wrong. Common sense :) 2+2 is still going to be 4 even if some sci-fi professor makes up some untestable claims.
Untestable = non-science in my eyes. Philosophy at best, in MY eyes.

I declare MWI dead and won't waste more time on it, but please use this thread to share your ideas for and against it :)

http://www.boingboing.net/2004/04/26/many-worlds-theory-i.html << against MWI
 
  • #132
If MWI is the wrong answer, to what question is it the wrong answer? :) All I know is that it does not seem like the answer to any questions of mine. But then I need to be humble enough to confess I can not and should not speak for others.

Perhaps the trick is to pose the right question, and then MWI may seem more reasonable. My strong impression, from talking to people on here as well as from reading the posts, is that ultimately we disagree on the formulation of the core questions and on what the heart of the problem is, and that this determines the direction of our efforts. This is reflected in the questions we ask.

Like has been pointed out many times, choosing the questions and finding the answers are closely related.

/Fredrik
 
  • #133
confusedashell said:
If you don't do every possible action, then MWI is ultimately wrong. So MWI is ultimately wrong. Common sense :)
Wrong :-). Look, when you decide to do something, you're using your rational faculties, aren't you? You have some reason to do it; it's not like you do it at random. Just like a computer will say that 1+1=2 in almost all universes, and not any result at random. So even in MWI, in almost all other universes you will use your reason in exactly the same way, so you will not do nasty things if you are not a nasty person to start with.

PS/ That boingboing article is relating known factoids which have been proven wrong. The Afshar experiment didn't invalidate anything, since standard quantum theory predicts exactly the same results. Since MWI makes the same predictions as standard quantum theory, it can't be invalidated by such an experiment.
 
  • #134
confusedashell said:
If you don't do every possible action, then MWI is ultimately wrong. So MWI is ultimately wrong. Common sense :) 2+2 is still going to be 4 even if some sci-fi professor makes up some untestable claims.
Untestable = non-science in my eyes. Philosophy at best, in MY eyes.

I declare MWI dead and won't waste more time on it, but please use this thread to share your ideas for and against it :)

http://www.boingboing.net/2004/04/26/many-worlds-theory-i.html << against MWI

This famous "Afshar experiment" and its erroneous analysis were what Hans de Vries was referring to earlier; it reduces to a misunderstood classical optics experiment.

The so-called transactional interpretation has two problems: first, things need to come "from the future", and second, I've never seen it expanded beyond the single-particle situation. Now, maybe I'm wrong on the second point.

In any case, there cannot be an invalidation of MWI without also an invalidation of quantum theory on a certain level. It is not impossible, and it would surely be interesting news, but as long as quantum theory is assumed correct, there's no way to invalidate MWI. Copenhagen is even worse, because Copenhagen can handle it both ways (it has the extra freedom of choosing where to put the Heisenberg cut).

So you will NEVER read that MWI has been falsified. You might read one day that *quantum theory* has been falsified. MWI falls then with it. Copenhagen can still survive.

People claiming that MWI has been proven, or that MWI has been falsified, independently of quantum theory itself, obviously do not know what they are talking about: it is an *interpretation* of the formalism. As such, it makes exactly the same predictions as the formalism. Only, MWI needs the validity of the quantum formalism for macroscopic systems, which might not be applicable. If it is not applicable, then we've found a LIMIT to the applicability of the *quantum formalism*. That's much bigger news than an interpretation being correct or not!
 
  • #135
vanesch -- I disagree; I have no idea why the absolute square of a wave function can be anything else but a probability density, given Born's ideas.

Your probability density argument disagrees with both QM and classical electrodynamics; indeed, with any system subject to a linear wave equation. Huygens' Principle guarantees that the absolute square of the wave function, or intensity, contains interference terms. Why? According to Huygens, a solution to a wave equation can always be expressed as a sum, so the intensity contains interference terms -- if the sum has two or more terms.

Another way to look at it is to go from a scattering problem -> Fraunhofer Diffraction.

Forgetting that initial conditions for a diffraction problem are tricky, one could state that the initial value of the wave function is 0 except for two disjoint finite intervals with values w1 and w2. Use a Feynman propagator, the Lippmann-Schwinger or resolvent approaches, and effectively what you see is more or less equivalent to two beams, one from each slit, and these beams interact and thus scatter.

Re Bayesian collapse: we talk about the probability of an event -- or more than one. All QM normally does is predict an event, and nothing after the event. There's no real problem of Bayesian collapse or inference. Suppose you are in traffic and figure the odds of making the next light are, say, 2-1. Once you get to the light, you know, so the probability estimate is irrelevant -- unless you are considering a series of lights, or...

And the QM dynamics guarantee that the norm of the solution is invariant under time translations, except for time-dependent Hamiltonians, ..., so it seems to me that

1. QM gives a perfectly reasonable probability system -- with wave functions or, more generally, density matrices. The absolute square of any solution of a linear wave equation provides a valid probability measure.

2. If, as we do in probability, we talk about events, much of the mystery of measurement goes away. Probability theory assumes only that we can recognize events and distinguish between them. Einstein talked about events and their relationships, but didn't say much about the details of reading a clock. It seems to me that if we keep things simple we can do much of what we want to do -- simple being the use of probability theory's and Einstein's events, and assuming that we can make the appropriate measurements.

3. A theory of measurements is, in my view, way beyond us. But that's not a problem, as we do pretty well without one -- true classically and "quantumly". Would it be better to have such a theory? Yes, of course.

Regards,
Reilly
vanesch said:
But the point is that the absolute square of the wave function isn't ALWAYS a probability density! So when does it *become* a probability density? This is the main issue which leads to considerations of an MWI approach.
See, in the double slit experiment, when the wave function is a superposition of "is in left slit" and "is in right slit", it is NOT a probability density. For if it were, we wouldn't have any interference: the final probability of a hit at position X would simply be P(X|A) P(A) + P(X|B) P(B), where A stands for slit 1 and B stands for slit 2.

So along the way, the wavefunction evolves from something that clearly is NOT a probability density (when it is at the two slits) to something that IS a probability density (when it "hits the screen"). It is this undocumented "change in nature" of the wavefunction (from "a physical entity which is not a probability" to "a probability") which stands in the way of a pure "bayesian collapse" view on the measurement problem.
 
  • #136
xantox said:
OK, I agree that he should use some precautions in introducing the interpretation. However, concerning the experiment, in lecture 2 he's describing a single-photon setup, so it can be correctly viewed as a quantum computation without classical analog.

Despite the single-photon setup, nothing in the outcome of the experiment depends on the quantum behavior of photons. That is, an experiment with bursts of sound waves or water waves would lead to the same interference result.

An experiment which does show single-photon quantum behavior, for instance, uses two detectors at the two outputs of a beamsplitter (typically a Wollaston prism) and demonstrates that only one of the two detectors goes off after a single photon went through the beamsplitter.


Regards, Hans
 
  • #137
Hans de Vries said:
Despite the single-photon setup, nothing in the outcome of the experiment depends on the quantum behavior of photons. That is, an experiment with bursts of sound waves or water waves would lead to the same interference result.
Yes, but since in this setting there is no water, I think we have to explain what happens by using single photons, and here the classical explanation fails. It is not about understanding interference, since it's a course in quantum computation. If we send each photon through a beam splitter, followed by a mirror, followed by a beam splitter, and try to interpret this using classical bits, we would have a coin flip followed by a NOT operation followed by a coin flip, which doesn't yield the identity operation performed by the quantum system.
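To put some arithmetic behind this, here's a minimal sketch, idealizing the beam splitter as a Hadamard transform on the two path amplitudes and the mirror stage as a swap of the paths (a standard textbook idealization; phase conventions vary between real setups):

```python
import math

def apply(m, v):
    # multiply a 2x2 matrix by a 2-vector
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]          # beam splitter acting on AMPLITUDES
X = [[0, 1], [1, 0]]           # mirror stage: swap the two paths
C = [[0.5, 0.5], [0.5, 0.5]]   # classical fair coin flip on PROBABILITIES

# Quantum: splitter -> swap -> splitter on amplitudes, then square.
amp = apply(H, apply(X, apply(H, [1.0, 0.0])))
p_quantum = [abs(a)**2 for a in amp]       # deterministic: ~[1, 0]

# Classical bits: coin flip -> NOT -> coin flip on probabilities.
p_classical = apply(C, apply(X, apply(C, [1.0, 0.0])))   # stuck at [0.5, 0.5]
```

The interference between the two paths makes the photon come out of one port with certainty; no classical stochastic chain over "which path" probabilities can reproduce that.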
 
  • #138
reilly said:
vanesch -- I disagree; I have no idea why the absolute square of a wave function can be anything else but a probability density, given Born's ideas.

Your probability density argument disagrees with both QM and classical electrodynamics; indeed, with any system subject to a linear wave equation. Huygens' Principle guarantees that the absolute square of the wave function, or intensity, contains interference terms. Why? According to Huygens, a solution to a wave equation can always be expressed as a sum, so the intensity contains interference terms -- if the sum has two or more terms.

The difference between a classical wave superposition and a quantum interference is the following: in the classical superposition of a wave from slit 1 and a wave from slit 2, we do not have a PROBABILITY of 50%-50% that the "wave" went through slit 1 or slit 2; it went physically through BOTH. In classical physics, intensity has nothing to do with probability; it is just the amount of energy that is in a particular slot. HALF of the energy went through slit 1 and HALF of the energy went through slit 2 in the classical wave experiment.

But we cannot say that "half of the particle" went through slit 1 and "half of the particle" went through slit 2, and then equate this with the *probability that the entire particle* went through slit 1 is 50% and the probability that it went through slit 2 is also 50%, because if that were true, we wouldn't have a LUMPED impact on the screen, but rather 20% of a particle here, 10% of a particle there, etc...

Now, it is true that the application of the Born rule to a quantum optics problem gives you in many cases just the classical intensity as a probability, which somehow encourages the confusion between "classical power density" and "quantum probability", but these are entirely different concepts. The classical power density has nothing of a probability density in a classical setting, and a probability has nothing of a power density in a quantum setting.

The classical superposition of waves has as much to do with probabilities as the superposition of, say, forces in Newtonian mechanics. Consider the forces on an apple lying on a table: there's the force of gravity, downward, and there's the reaction force of the table, upward. Now, does that mean that we have a 50% chance for the apple to undergo a downward fall, and a 50% chance for it to be lifted upward? This is exactly the same kind of reasoning that is applied when one says that two classical fields interfering is equivalent to two quantum states interfering. The classical fields were BOTH physically there (just as the two forces are). Their superposition gives rise to a certain overall field (just as there is a resultant force of 0 on the apple). At no point is a probability invoked.

But in the quantum setting, the wavefunction is made up of two parts. If this "made up of two parts" were interpreted as a probability at a certain point, you'd be able to track the system on the condition that case 1 was true, and to track the system on the condition that case 2 was true. But that's not the result when you compute the interference of the two parts. So when the wavefunction was made up of 2 parts, you CANNOT consider it as a probability distribution. But when the interference pattern is there, suddenly it DID become a probability distribution. THAT'S where a sleight of hand took place.

Forgetting that initial conditions for a diffraction problem are tricky, one could state that the initial value of the wave function is 0 except for two disjoint finite intervals with values w1 and w2. Use a Feynman propagator, the Lippmann-Schwinger or resolvent approaches, and effectively what you see is more or less equivalent to two beams, one from each slit, and these beams interact and thus scatter.

Absolutely. But classically, that would mean that BOTH beams are physically present, and NOT that they represent a 50/50 probability!


Re Bayesian collapse: we talk about the probability of an event -- or more than one. All QM normally does is predict an event, and nothing after the event. There's no real problem of Bayesian collapse or inference. Suppose you are in traffic and figure the odds of making the next light are, say, 2-1. Once you get to the light, you know, so the probability estimate is irrelevant -- unless you are considering a series of lights, or...

And the QM dynamics guarantee that the norm of the solution is invariant under time translations, except for time-dependent Hamiltonians, ..., so it seems to me that

1. QM gives a perfectly reasonable probability system -- with wave functions or, more generally, density matrices. The absolute square of any solution of a linear wave equation provides a valid probability measure.

This is ONLY true when ACTUAL measurements took place! You cannot (such as at the slits in the screen) "pretend to have done a measurement" and then sum over all cases, which you can normally do with a genuine evolving probability distribution.
If the wavefunction decomposes, at a time t1, into disjoint states A, B and C, then, if it were possible to interpret the wavefunction as a probability density ALL THE TIME, it would mean that we can do:

P(X) = P(X|A) P(A) + P(X|B) P(B) + P(X|C) P(C)

and that *doesn't work* in quantum theory if we don't keep any information (by entanglement for instance) about A, B or C. So when we have the wavefunction expanded over A, B and C, we CANNOT see it as giving rise to a probability distribution.

Now, of course, for an actual setup, quantum theory generates a consistent set of probabilities for observation in the given setup. But the way the wavefunction behaves *in between* is NOT interpretable as an evolving probability distribution.
So the wavefunction is sometimes giving rise to a probability distribution (namely at the point of "measurements"), but sometimes not.

Now, you can take the attitude that the wavefunction is just "a trick relating preparations and observations". Fine. Or you can try to give a "physical picture" of the wavefunction machinery. Then you're looking into interpretations.
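As a concrete check of the failure of P(X) = P(X|A) P(A) + P(X|B) P(B) + P(X|C) P(C), here is a toy three-component example (all amplitudes made up; only the bookkeeping matters):

```python
import cmath
import math

# Wavefunction expanded over three orthonormal components A, B, C with
# equal weights, plus a made-up transition amplitude from each component
# to one particular outcome X.
c = [1 / math.sqrt(3)] * 3
a = [0.5, 0.5 * cmath.exp(2.0j), 0.5 * cmath.exp(4.0j)]

# Quantum: sum the amplitudes over A, B, C first, then square.
p_quantum = abs(sum(ci * ai for ci, ai in zip(c, a)))**2

# Law of total probability, pretending a measurement selected A, B or C:
# P(X) = sum_i P(X|i) P(i), with P(i) = |c_i|^2 and P(X|i) = |a_i|^2.
p_total = sum(abs(ai)**2 * abs(ci)**2 for ci, ai in zip(c, a))

print(p_quantum, p_total)   # very different numbers
```

If information about which of A, B, C occurred were kept somewhere (by entanglement, say), the cross terms would vanish and the two numbers would agree; without it, they don't.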
 
  • #139
What I personally meant by "going back to the origin and axioms of probability theory" for the resolution, and what I thought Reilly was also referring to, is something very basic, but something that I think is causing a lot of problems.

It's the idea that while we cannot determine the future with arbitrary precision, we CAN determine with arbitrary precision the probability of any possibility. This simple statement, occurring for example in the first chapter of Dirac's "The Principles of Quantum Mechanics", is IMO where we lay the ground for the future problems.

Then there are attempts to deal with this problem with density matrices and non-coherent mixtures and decoherence. But so far I've personally found that insufficient.

This I see as touching the philosophy of probability theory and information theory, and it is thus not QM-specific as such.

The real problem I see is that the usual argument is the frequentist idea that we can find the probability by carrying out an infinite number of experiments. Why this is highly unsatisfactory should be clear if you consider processing power and information capacity. We can not make imaginary use of information that belongs to the future.

My personal questions start here. How can we improve the foundation?

I am trying to find a possible revision of the probability formalism which introduces a fuzzier (non-unitary) formalism, where the probabilities as we know them today will be more appropriately seen as "subjectively estimated" probabilities, induced from *incomplete information*, and where thus the probability space itself is not yet known.

The idea I favour is that these estimated probabilities are encoded in the microstructure of the observer, and that the probabilities simply correspond to uncertainty in the observer's microstructure and state, which in turn is a reduced projection of the environment.

This will, I hope, yield standard probability as emergent. And the idea is to also assign a physical basis for the formalism itself, and for the probability space. Usually this is carelessly done: one imagines infinite amounts of data and infinite experiments to justify the probability, but rarely does one consider in what physical structures this information is encoded. Some decoherence ideas consider the environment as an information sink, which records everything in correlations, but that is also a problem, because the observer only sees a fraction of the environment. So there is still something missing.

Reilly, is this anything like what you had in mind as well, or what did you mean by "going back to probability theory"?

/Fredrik

Edit: Another issue is that I don't think it's in general valid to consider the environment as an infinite information sink. One can imagine a case where the observer's information capacity is comparable to the environment's; then the sink idealisation fails. And I guess the key is that an observer does not have a priori knowledge of the size of the environment in which it's immersed. The only way to find out is to interact and try to learn something.
 
  • #140
Fra said:
Edit: Another issue is that I don't think it's in general valid to consider the environment as an infinite information sink.

This is probably a very good approximation for human particle-physics experiments, because there we in effect control and monitor the effective environment of the localised experiment, and we could in principle inform ourselves quite well about correlations between the environment and the system.

But this idealisation seems nowhere near as appealing if one considers cosmology or gravitational interactions, or cases where the observer's complexity is far less than the complexity of the system under study.

/Fredrik
 
  • #141
I'm probably making a mistake by adding this, but...

vanesch said:
you do not get out the same results as when you keep the electron wavefunction as a wavefunction and not a statistical mixture of positions.

We attack this differently, but I think I see vanesch's point here. I am not quite prepared to present my view consistently yet (I'm working on it), but my information-theoretic ideas for explaining the connection between statistical (impure) mixtures and pure states depend on how we define "addition of information".

Technically, I see different measurements as belonging to different probability spaces, and the exact relation between the spaces is needed to define addition of information.

Since conditional probability is normally defined as

$$P(A|B) := \frac{P(A \cap B)}{P(B)}$$

the question is what P(A|B) is supposed to mean unless A and B belong to the same event space.
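For contrast, here is the textbook definition at work in a case where it is unambiguous, because A and B are subsets of one shared sample space (a toy two-dice example of my own, purely illustrative):

```python
from fractions import Fraction
from itertools import product

# One shared event space: all ordered rolls of two dice.
omega = list(product(range(1, 7), repeat=2))

A = {w for w in omega if w[0] + w[1] == 7}  # the sum is 7
B = {w for w in omega if w[0] == 3}         # the first die shows 3

def P(event):
    # uniform measure on omega
    return Fraction(len(event), len(omega))

# P(A|B) = P(A and B) / P(B) is well defined because A & B is itself
# an event in omega.
p_a_given_b = P(A & B) / P(B)   # -> 1/6
```

Once A and B live in different spaces, the intersection `A & B` is no longer defined, and the formula has nothing to operate on.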

This is, IMO, part of the key to explaining the different ways to "add information". When we are dealing with systems of related probability spaces between which we make transformations, it matters in which space the addition is made. This is something that normally isn't analysed to satisfaction. Normally the defining relation between momentum and position is postulated, one way or the other. This postulates away something I personally think has a better explanation.

I think there is a way to grow a new space based on the uncertainty in the existing one. New relations are defined in terms of patterns in the deviations of existing relations, and relations are "selected" by a mechanism similar to natural selection. The selected relations implement an efficient encoding and retain a maximum amount of information about the environment in the observer.

The limit is determined by the observer's information capacity, and beyond that the only further "progress" that can be made is for the observer/system to try to increase its mass. Here I expect a deep connection to gravity.

Constraining the information capacity is also, I think, the key to staying away from infinities and generally divergent calculations. There will be a "natural cutoff" due to the observer's limited information capacity.

I'm sorry if this makes no sense, but I hope that given more time I'll be able to put the pieces together. It is designed to be entirely in the spirit of minimum-speculation information reasoning, which I personally consider the most natural extension of the minimalist interpretation.

/Fredrik
 
  • #142
Fra said:
I technically see different measurements, belonging to different probability spaces.
I'm skeptical. Example, please?
 
  • #143
Hmmm... you should be skeptical of course :) I haven't yet developed this enough to present it properly, which is why I should perhaps be quiet, but consider different "measurements" (operators, if you like). Each such measurement can, loosely speaking, be thought of as spanning a probability space of its own (the event space defined by the span of the possible events) - but how do different measurements relate to each other?

Usually the relations between these operators are postulated as commutator relations etc., but if you instead consider the measurements or "operators" to be dynamical objects, then new possibilities open up. How did the world evolve? Does nature postulate relations? I think there is a more natural way to view the relations.

However, I can't give a complete explanation of this at the moment, which is why I perhaps should be quiet. I was curious whether Reilly recognizes any of this or not, but from his last comments I fear not.

/Fredrik
 
  • #144
comment on immature ideas

An example would be measuring position and measuring momentum.

We could postulate the existence of two different observables and then postulate their relation, or we could postulate one observable and see whether there is a principle that naturally induces complementary observables in the context of the first, on an as-needed basis.

If we take the set of distinguishable configurations (x) to make up configuration space, then all we can do is see what the relative frequency is in the retained history. But what if we try to find a transformation, and use part of our memory (the state of the observer's microstructure in the general case) to store a transformed pattern which defines a new event space - could that give us lower uncertainty?

This transformation then relates two event spaces (probability spaces).

We can use a Fourier transform to define momentum (p) and try to find momentum patterns, but why a Fourier transform? Is there a principle which renders this transform unique?

Given a limited information capacity, it seems the possible transformations must also be limited, and transformations/patterns will be induced spontaneously. I have no answer to this, but my gut feeling says there is one and I'm looking for it.

But there is intercommunication between the spaces: they can shrink or inflate at each other's expense, as well as grow new dimensions, and there is supposedly a particular configuration that is "most favourable" in a given environment, where the observer's microstructure is selected for maximum fitness given the limited information capacity.

What I am trying, but haven't yet succeeded at, is to build structures starting from a minimalist starting point: a boolean observable, which can be seen as an axiom claiming there is a concept of distinguishability defined for each observer. From that, complexity and new structures are built as the observer interacts with the environment. The relations formed are selected by the interaction with the environment.

So higher dimensions are built as extensions of, and aggregates over, a basic notion of distinguishability. The dimensions and structures are emergent in the learning/evolution process. The structures selected represent an efficient encoding of information; inefficient structures are destabilised and lose information capacity because they consistently fail to keep up with the environmental perturbations.

Anyway, the idea is that all observables are then connected by a natural connection, and we can "add information" from any part of the structure, but the actual "additions" have to respect the selected connections. It may give rise to apparently twisted statistics, but seen from the unified view, with all the information transformed back into the original event space, I think it will take on a simple form.

This, when understood, isn't just "an interpretation" that makes no difference; it could be realized as a complete self-correcting strategy, which is why I personally like it.

My point in mentioning these admittedly incomplete, in-progress ideas is only to provoke questions and possible ways forward in the spirit of information ideas and of going back to probability theory. I still think we choose to focus on different things.

I have no idea how long it will take to mature this; I wouldn't be surprised if it takes several years, considering that this is my hobby time. But I know there are others who share these ideas, and I look forward to seeing more work along these lines, so I like to encourage any ideas relating to this, because I realize it's a minority view.

/Fredrik
 
  • #145
small note

vanesch said:
P(X) = P(X|A) P(A) + P(X|B) P(B) + P(X|C) P(C)

and that *doesn't work* in quantum theory if we don't keep any information (by entanglement for instance) about A, B or C. So when we have the wavefunction expanded over A, B and C, we CANNOT see it as giving rise to a probability distribution.

I guess this is obvious, but a reason for this can still be seen within ordinary probability theory: there is yet another hidden implicit condition, namely the microstructure/background (M) that A, B and C refer to. Making this dependence explicit we get

P(X|M) = P(X|A) P(A|M) + P(X|B) P(B|M) + P(X|C) P(C|M)

but
$$P(X) = \sum_{M} P(X|M)P(M)$$

where the sum is over all possible backgrounds,

and thus
$$P(X) \neq P(X|A) P(A|M) + P(X|B) P(B|M) + P(X|C) P(C|M)$$

unless A, B and C make a complete partitioning, which is exactly tied to the selection of a particular background M.
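Numerically, the distinction is just the law of total probability over backgrounds (a sketch with made-up numbers; the two backgrounds M1, M2 and all the weights are my own illustration):

```python
from fractions import Fraction as F

# Prior weights over possible backgrounds.
P_M = {'M1': F(1, 2), 'M2': F(1, 2)}

# Probability of X conditioned on each background.
P_X_given_M = {'M1': F(1, 4), 'M2': F(3, 4)}

# The background-free P(X) requires summing over ALL backgrounds:
P_X = sum(P_X_given_M[m] * P_M[m] for m in P_M)   # 1/4*1/2 + 3/4*1/2 = 1/2

# Fixing a single background M gives P(X|M), which differs from P(X):
assert P_X != P_X_given_M['M1']
```

The point of the sketch is only that conditioning on one background is not the same as summing over all of them; the hard part, as noted below, is saying what "the set of all possible backgrounds" even is.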

I ultimately envision that this background makes up the identity of the observer and contains a selection of retained history.

The only real problem here is how one sensibly resolves the set of all possible backgrounds. This is something I've given some thought, and this problem is what gives rise to dynamics. It is not possible to resolve all possible backgrounds just like that, since doing so must somehow involve information storage and computation. This is why I think that summation must be intrinsically related to change, and ultimately to time. Given any set of microstructures, one can always raise the same complaint, which suggests that a sensible solution is a non-unitary evolution.

The ultimate reason for this is that we cannot easily "measure our probability space" without getting into silly things like infinite experiments and infinite data storage.

/Fredrik
 
  • #146
vanesch:

I'm looking at the mathematical properties of wave-equation solutions. In particular, the fact that wave-function norms are preserved by wave equations guarantees that the absolute square of any normalized wave function can be viewed as a probability measure; call it QP for quantum probability - energy density for some, intensity for others, or what have you. I agree that, particularly in classical physics, the probability density associated with a wave equation has only a formal meaning and is usually interpreted differently.

Thus, it seems to me that the QP is indeed always a probability density, given its mathematical properties. Why not?

vanesch said:
The difference between a classical wave superposition and a quantum interference is the following: in the classical superposition of a wave from slit 1 and a wave from slit 2, we had not a PROBABILITY 50% - 50% that the "wave" went through slit 1 or slit 2, it went physically through BOTH. In classical physics, intensity has nothing to do with probability, but just the amount of energy that is in a particular slot. HALF of the energy went through slit 1 and HALF of the energy went through slit 2 in the classical wave experiment.

But we cannot say that "half of the particle" went through slit 1 and "half of the particle" went through slit 2, and then equate this with the *probability that the entire particle* went through slit 1 is 50% and the probability that it went through slit 2 is also 50%, because if that were true, we wouldn't have a LUMPED impact on the screen, but rather 20% of a particle here, 10% of a particle there, etc...
Why would I want to say, "half the particle went through slit 1,...?" Such a statement makes no sense, except, perhaps, as highly figurative language.
In the symmetrical case, the initial conditions for the slits state that the probability to find A in the left slit is equal to the probability to find A in the right slit. It's either there or here, but not in both places. The way we describe such a case is with probability in a specified portion of configuration space. As I noted above, I've been talking mainly about the mathematical parallels amongst wave equations. The interpretation of the equations and solutions can be all over the map.

vanesch said:
Now, it is so that the application of the Born rule to a quantum optics problem gives you in many cases just the classical intensity as a probability, which instores somehow the confusion between "classical power density" and "quantum probability" but these are entirely different concepts. The classical power density has nothing of a probability density in a classical setting, and a probability has nothing of a power density in a quantum setting.

Right, I agree. As I noted above, I've been talking mainly about the mathematical parallels amongst wave equations. The interpretation of the equations and solutions can be all over the map.

vanesch said:
The classical superposition of waves has as much to do with probabilities as the superposition of, say, forces in Newtonian mechanics. Consider the forces on an apple lying on the table: there's the force of gravity, downward, and there's the force of reaction of the table, upward. Now, does that mean that we have 50% chance for the apple to undergo a downward fall, and 50% chance for it to be lifted upward ? This is exactly the same kind of reasoning that is applied when saying that two classical fields interfere is equivalent to two quantum states interfering. The classical fields were BOTH physically there (just as the two forces are). Their superposition gives rise to a certain overall field (just as there is a resultant force 0 on the apple). At no point, a probability is invoked.

But in the quantum setting, the wave function is made up of two parts. If this "made up of two parts" is interpreted as a probability at a certain point, you'd be able to track the system on the condition that case 1 was true, and to track the system on the condition that case 2 was true. But that's not what the result is when you compute the interference of the two parts. So when the wave function was made up of 2 parts, you CANNOT consider it as a probability distribution. But when the interference pattern is there, suddenly it DID become a probability distribution. THAT'S where a sleight of hand took place.

What are the two parts?

Below, I beg to differ. That is, I said "effectively" and "sorta equivalent" to two beams -- we are talking about a metaphor, one that helps some of us understand the two-slit situation. That is, the two-slit experiment is much like a scattering one -- the mathematics describing the two-slit QM experiment is very similar to that for scattering; look at the Lippmann-Schwinger equation, for example. Again, it's as if there were two beams emanating from the slits.
vanesch said:
Absolutely. But classically, that would mean that BOTH beams are physically present, and NOT that they represent a 50/50 probability!

This is ONLY true when ACTUAL measurements took place! You cannot (such as at the slits in the screen) "pretend to have done a measurement" and then sum over all cases, which you can normally do with a genuine evolving probability distribution.
If the wave function decomposes, at a time t1, into disjoint states A, B and C, then, if it were possible to interpret the wave function as a probability density ALL THE TIME, it would mean that we can do:

P(X) = P(X|A) P(A) + P(X|B) P(B) + P(X|C) P(C)

and that *doesn't work* in quantum theory if we don't keep any information (by entanglement for instance) about A, B or C. So when we have the wave function expanded over A, B and C, we CANNOT see it as giving rise to a probability distribution.

Now, of course, for an actual setup, quantum theory generates a consistent set of probabilities for observation in the given setup. But the way the wave function behaves *in between* is NOT interpretable as an evolving probability distribution.
So the wave function is sometimes giving rise to a probability distribution (namely at the point of "measurements"), but sometimes not.
Now, you can take the attitude that the wave function is just "a trick relating preparations and observations". Fine. Or you can try to give a "physical picture" of the wave function machinery. Then you're looking into interpretations.
Conditional probabilities are a bit tricky at the wave function level, but not so much at the density matrix level. In your example,

P(X) = |W_A(X) + W_B(X) + W_C(X)|^2, where W_A is the wave function for state A.

Or, P(X) = W_A^2 + 2 W_A W_B + ...

(I've dropped the X arguments and the complex-conjugate signs)

Now this is a quadratic form that can be diagonalized, which yields three disjoint states with probability densities once they are properly normalized. These states are linear combinations of the states A, B, C. The disjoint states are nicely described by the standard Bayes theorem, which you cite above, and contain interference terms between A, B, and C.

Regards,
Reilly
 
  • #147
reilly said:
Thus, it seems to me that the QP is indeed always a probability density, given its mathematical properties. Why not?

Well, for the very reason I repeat again. If we take the wavefunction of the particle, and we let it evolve unitarily, then at the slit, the wavefunction takes on the form:
|psi1> = |slit1> + |slit2>
which are essentially orthogonal states at this point (in position representation, |slit1> has a bump at slit 1 and nothing at slit 2, and vice versa).

Now, if this is to have a *probability interpretation*, then we have to say that at this point, our particle has 50% chance to be at slit 1 and 50% chance to be at slit 2, right ?

A bit later, we evolve |psi1> unitarily into |psi2> and this time, we have an interference pattern. We write psi2 in the position representation, as:
|psi2> = sum over x of f(x) |x> with f(x) the wavefunction.

This time, we interpret |f|^2 as a probability density to be at point x.

Now, if at the first instance, we had 50% chance for the particle to be at slit 1, 50% chance to be at slit 2, then it is clear that |f|^2 = 0.5 P(x|slit1) + 0.5 P(x|slit2), because this is a theorem in probability theory:

P(X) = P(X|A) P(A) + P(X|B) P(B)

if events A and B are mutually exclusive and complete, which is the case for "slit 1" and "slit 2".

But we know very well that |f|^2 = 0.5 P(x|slit1) + 0.5 P(x|slit2) is NOT true for an interference pattern!

So in no way can we see |psi1> as a probability density giving a 50% chance to go through slit 1 and a 50% chance to go through slit 2.
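The failure of the 50/50 reading is easy to exhibit numerically: the coherent two-slit intensity contains a cross term that the classical mixture of the two single-slit distributions lacks (a sketch; the wavenumber, slit separation, and screen geometry are illustrative choices of mine):

```python
import numpy as np

# Geometry of an idealised two-slit setup (all numbers are illustrative).
k = 50.0    # wavenumber
d = 1.0     # slit separation
L = 20.0    # distance from slits to screen
x = np.linspace(-5.0, 5.0, 2001)   # positions along the screen

# Spherical-wave amplitudes from each slit at screen position x.
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)
psi1 = np.exp(1j * k * r1) / np.sqrt(r1)
psi2 = np.exp(1j * k * r2) / np.sqrt(r2)

quantum = np.abs(psi1 + psi2) ** 2               # coherent superposition
mixture = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # "50/50 one slit or the other" (up to norm)
cross = quantum - mixture                         # interference term 2 Re(psi1* psi2)

# The cross term is far from zero, so the superposition is not a mixture.
assert np.max(np.abs(cross)) > 1e-2
```

Here `cross` is exactly the interference term 2 Re(psi1* psi2); the probability reading would require it to vanish identically, and it plainly does not.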

Conditional probabilities are a bit tricky at the wave function level, but not so much at the density matrix level. In your example,

P(X) = |W_A(X) + W_B(X) + W_C(X)|^2, where W_A is the wave function for state A.

Or, P(X) = W_A^2 + 2 W_A W_B + ...

(I've dropped the X arguments and the complex-conjugate signs)

Now this is a quadratic form that can be diagonalized, which yields three disjoint states with probability densities once they are properly normalized. These states are linear combinations of the states A, B, C. The disjoint states are nicely described by the standard Bayes theorem, which you cite above, and contain interference terms between A, B, and C.

Regards,
Reilly

The point is that a pure state, converted into a density matrix, always results after diagonalisation in a TRIVIAL density matrix: zero everywhere except for a single ONE somewhere on the diagonal, corresponding to the pure state (which is part of the basis in which the matrix is diagonal).

As such, your density matrix will simply tell you that you have 100% probability to be in the state...

If you don't believe me, for a pure state, we have that rho^2 = rho. The only diagonal elements that can satisfy this are 0 and 1. We also have that Tr(rho) = 1, hence we can only have one single 1.
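This is quick to verify numerically (a minimal sketch with a made-up 3-component state):

```python
import numpy as np

# An arbitrary pure state in a 3-dimensional Hilbert space (made-up numbers).
psi = np.array([1 + 1j, 2.0, -0.5j])
psi = psi / np.linalg.norm(psi)

rho = np.outer(psi, psi.conj())   # density matrix |psi><psi|

# rho is idempotent with unit trace, so it is a rank-one projector...
assert np.allclose(rho @ rho, rho)
assert np.isclose(np.trace(rho).real, 1.0)

# ...and diagonalising it gives eigenvalues (0, 0, 1): a trivial "mixture"
# that just says the system is in |psi> with probability 1.
eigvals = np.sort(np.linalg.eigvalsh(rho))
assert np.allclose(eigvals, [0.0, 0.0, 1.0])
```

Any normalised pure state gives the same eigenvalue pattern, so diagonalising the density matrix of a superposition never produces a nontrivial classical mixture.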
 
  • #148
vanesch said:
the wavefunction takes on the form:
|psi1> = |slit1> + |slit2>
which are essentially orthogonal states at this point (in position representation, |slit1> has a bump at slit 1 and nothing at slit 2, and vice versa).

I don't see the deduction that they are orthogonal. I consider this an assumption originating from "classical intuition", not something we know.

vanesch said:
because this is a theorem in probability theory:

P(X) = P(X|A) P(A) + P(X|B) P(B)

if events A and B are mutually exclusive and complete, which is the case for "slit 1" and "slit 2".

I agree completely that it's difficult to find the proper interpretation, but before giving up I have two objections.

(1) Assumption of mutual exclusive options.

In general we have
$$P(x|b \vee c) = \left[P(x|b)P(b) + P(x|c)P(c) - P(x|b \wedge c)P(b \wedge c)\right]\frac{1}{P(b \vee c)}$$

(2) Ambiguity in mixing information from different probability spaces.

Even accepting the formula above, it seems non-trivial to interpret it in terms of

$$\psi\psi^* = \frac{|b|^2|\psi_b|^2 + |c|^2|\psi_c|^2 + 2\,\mathrm{Re}(bc^* \psi_b \psi_c^*)}{\sum_x |b\psi_b(x) + c\psi_c(x)|^2}$$

IMO, x and b do not belong to the same event space, considered as "simple sets"; therefore the definition of P(x|b) needs to be analysed.

P(x|b) is defined only if the conjunction of x and b is defined.

So there is a problem here; I agree with vanesch. But I think the solution is to define the relation between the two spaces to which x and b belong. That relation will provide a consistent definition of P(x|b). Right now P(x|b) is intuitive, but the formal definition is not - it depends on a background defining the relation between the two observables. This is where I am personally currently looking for the connection that will restore consistent "probabilistic" reasoning. I think we both need a revision of probability theory itself, in the sense that the probability space NEEDS to be allowed to be dynamical - anything else just seems hopeless - AND we need to understand how the relations between observables are formed. Postulated relations are guesses; I suggest this guessing be made more systematic.

/Fredrik
 
  • #149
Fra said:
I don't see the deduction that they are orthogonal? I consider this an assumption originating from the use of "classical intuition", not something we know.

No, it is pretty simple and straightforward. Consider the slits on the x-axis: slit 1 runs from x = 0.3 to x = 0.4 and slit 2 from x = 1.3 to x = 1.4.
The wavefunction at the slits, psi, will be a function that is zero outside these intervals. So we can write psi(x) as psi1(x) + psi2(x), where psi1(x) has non-zero values only between 0.3 and 0.4 and is 0 everywhere else, while psi2(x) is non-zero only between 1.3 and 1.4 and zero everywhere else.

As such, psi1(x) and psi2(x) are orthogonal, because the integral of psi1(x)* psi2(x) dx is 0: the integrand vanishes everywhere.
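A quick numerical integration confirms that the disjoint supports alone force the overlap to vanish (a sketch; the sine-bump profiles are my own illustrative choice - any shapes confined to the two intervals would do):

```python
import numpy as np

x = np.linspace(0.0, 2.0, 4001)
dx = x[1] - x[0]

# psi1 supported only on [0.3, 0.4], psi2 only on [1.3, 1.4].
psi1 = np.where((x >= 0.3) & (x <= 0.4), np.sin(np.pi * (x - 0.3) / 0.1), 0.0)
psi2 = np.where((x >= 1.3) & (x <= 1.4), np.sin(np.pi * (x - 1.3) / 0.1), 0.0)

overlap = np.sum(psi1 * psi2) * dx   # integral psi1(x) psi2(x) dx
# overlap is exactly 0.0: the supports never intersect
```

No property of the bump shapes enters; orthogonality here is purely a statement about where the functions are non-zero.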

I agree completely that it's difficult to find the proper interpretation, but before giving up I have two objections.

(1) Assumption of mutual exclusive options.

In general we have
<br /> P(x|b \vee c) = \left[P(x|b)P(b) + P(x|c)P(c) - P(x|b \wedge c)P(b \wedge c) \right]\frac{1}{P(b \vee c)}<br />

Yes, but if the wavefunction were to be interpreted as a probability density, then there would be a 50% chance for psi1 and a 50% chance for psi2 and nothing else, because, exactly, psi1 and psi2 are orthogonal. In other words, we can see them as elements of an orthogonal system in which we apply the Born rule (that's what it means to give a probability-density interpretation to a wavefunction: pick an orthogonal basis and apply the Born rule).
The events corresponding to two orthogonal states are mutually exclusive.

(2) Ambigousness in mixing information from different probability spaces.

Even accepting the formula above, it seems non-trivial to interpret it in terms of

$$\psi\psi^* = \frac{|b|^2|\psi_b|^2 + |c|^2|\psi_c|^2 + 2\,\mathrm{Re}(bc^* \psi_b \psi_c^*)}{\sum_x |b\psi_b(x) + c\psi_c(x)|^2}$$

IMO, x and b does not belong to the same event space, considered as "simple sets", therefor the definition of P(x|b) needs to be analysed.

I'm fighting the claim that the wavefunction can ALWAYS be interpreted as a probability density. I'm claiming that this statement is not true and that at moments, you CANNOT see the wavefunction as just a probability density. You can in fact ONLY see it as a probability density when "measurements" are performed, but NOT in between, where it has ANOTHER (physical!) meaning, in the sense of things like the classical electromagnetic field. As such, one cannot simply wave away interpretational issues such as wavefunction collapse as just "changes of probability of stuff because we've learned something".

I'm trying to show here that the wavefunction is, most of the time, NOT to be seen as a probability density, but "jumps into that suit" when we say that we do measurements.

We take as basic example the 2-slit experiment, and I'm trying to show that when the wavefunction is at the two slits, it cannot be seen as just saying us that there is 50% chance that the particle went through slit 1 and 50% chance that the particle went through slit 2, but rather that there was something PHYSICAL that went through both, in the same way as a classical EM wave goes through both and doesn't surprise us when it generates interference patterns. The classical EM wave that goes through both slits is NOT to be seen as 50% chance that the lightbeam goes through the first hole, and 50% chance that the lightbeam goes through the second hole, and in the same way, the wavefunction at that point can also not be seen like that.

Now, saying that "it still might be a probability density, but not in the probability space in which we are looking at our results" is an equivalent statement to "it wasn't, after all, a probability distribution", because after all, things are a probability distribution only if they are part of the one and only "global" probability space of events.

ALL quantum weirdness reduces to the fact that quantum-mechanical superposition is NOT always statistical mixture. This is shown here in the double-slit experiment, but it is also the case in more sophisticated situations, including EPR experiments.

It is, for me, the basic motivation to look upon quantum theory through MWI goggles: there, it is obvious that superposition is not statistical mixture; as such, this view keeps one from falling into what is IMO the most serious trap in looking at quantum theory, namely interpreting a superposition too lightly as a statistical mixture. It is also the reason why MWI is fundamentally different from the "universe of possibilities" that might be postulated in a random but classical universe, and many people don't seem to realize this.
 
  • #150
> it is obvious that superposition is not statistical mixture

Right. In the way you refer to this, it is clearly correct and I see what you are saying. It would be foolish to argue about this. I completely understand the difference between a superposition of wavefunctions and an incoherent mixture of classical probability distributions.

But we at least seem to disagree on how to resolve this and what conclusions to draw. Part of the problem, I think, is that _maybe_ we need new terminology, and maybe we misunderstand each other, because we're both trying to extend the current description.

If we are going right by the book, I would say that Kolmogorov probability theory and its axioms are an imperfect choice of axioms for describing nature. The most severe objection is how to attach the probability space to observation: this space is (in standard QM) not measured. Instead there are a lot of silly ideas about infinite measurements. This is one problem, but there are more.

Maybe I caused the confusion, but IMO the difference lies not in the mathematics of probability theory but in the application of the axioms to reality. It depends on how you see it. As you know, there has been philosophical debate about the meaning of probability that has nothing to do with quantum mechanics. I'm relating to this as well.

> Now, saying that "it still might be a probability density, but not in the probability space in which we are looking at our results" is an equivalent statement to "it wasn't, after all, a probability distribution", because after all, things are a probability distribution only if they are part of the one and only "global" probability space of events.

You can put it that way. We are getting closer.

What I am suggesting is that a definite probability space never exists, because the probability space is grown out of the retained history. But one can still define a conditional probability - defined not as the predicted relative frequency of an event, because that can never be known, only guessed, but rather as an expectation of a probability, based on an expected probability space. This is because the set of events and their relations are generally uncertain and changing.

I guess what I am saying is that, in the way I try to see things, I see no such thing as a definite probability, for the reason that neither the probability itself nor the probability space can be measured with infinite precision. One way to appreciate this is to suppose that the observer's ability to retain information is limited.

My standpoint originated from trying to understand the process of inducing a probability space from data. This is related to data compression, but also to the response time needed to revise the estimates. In essence, the probability space is under constant "drift" as the observer's memory is remodeled. The dynamics of the remodelling is like a movie being played, projected from the environment. The observer can never determine an exact probability of an external event; it can only deduce subjective expectations of effective probabilities - which are better seen as a bet distribution, in a gaming perspective.

To avoid mixing up language, we could forget about probabilities and invent new labels. But the thing is that it will still end up closely related to a probability theory, just a relational one.

I imagine that even space and structures are spontaneously formed in the observer's memory (and I think Zurek's idea that "what the observer knows is inseparable from what the observer is" is dead on), so this is also responsible for "physical structure formation". In this context, the particle zoo and spacetime itself should be treated on an in-principle equal basis.

Sometimes I wonder if the different universes the MWI people talk about are the set of all possible "images of universes projected onto observers", considering all possible "observers". If so, we may be closer than it seems - and then what I am saying is that the different universes do interact. But calling these universes is a very strange terminology IMO; I would call _that_ the projection of the unknown (which some may call the universe) onto the observer. Whether this match is correct or not, I am not entirely sure.

I guess that when all of these ideas are worked out, testable predictions will come, and then it will probably also be easier to compare them. What I envision is not just an interpretation; when it's finished I expect a lot of things to pop out. But it seems not many people are actually spending time on these things - otherwise it's a mystery why more hasn't been accomplished.

/Fredrik
 
