Quantum Tunneling Wave Packets: Explained

  • #1
junglebeast
Wave packets / the wave function is described as the probability density function of a particle, implying that the particle exists at exactly one location at a time according to its associated wave function. This does not make sense to me on many levels, and it seems inconsistent with quantum tunneling: in quantum tunneling, a wave packet is partially emitted through a barrier, effectively splitting the wave function into reflected and emitted components. But if the wave function is truly the probability density function for some particle, then that would mean the particle must be jumping back and forth between the 2 disparate pieces of its wave function, which would mean we would often be observing particles that disappear and reappear all around us. It also does not really clarify the wave-particle duality issue.

If, on the other hand, I accept that the wave function is the probability of finding a particle at that location, then this makes more sense in both contexts, because it explains particles as simply some kind of "coagulation" of the waves, which means that everything can be thought of as simply the density of a particular frequency at any point in space, which creates the illusion of a finite set of particles due to the cohesive abilities of the waves.

Also, how is the wave function different for different kinds of particles? And is it ever possible for quantum tunneling to create a different type of particle across the barrier?
 
  • #2
Well, first, the wave function isn't the probability density function. |psi|^2 is.

Macroscopic objects have a quite definite location, and do not tunnel to any appreciable extent. This is all in any introductory textbook.

The wave function is different for different particles depending on whether they're bosons or fermions. In the former case it's defined as being symmetric (does not change sign) when the coordinates of two identical particles are interchanged; in the latter case it's anti-symmetric (changes sign). The wave function also changes with mass. But any other changes to the wave function with particle 'type' depend on whether or not you include those properties in the Hamiltonian.
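A rough numerical sketch of that symmetry statement (my own illustration; the two Gaussian single-particle states are made up): build a two-particle wavefunction on a grid and check what happens when the two coordinates are exchanged.

[code]
import numpy as np

# Grid and two (arbitrary) normalized single-particle states
x = np.linspace(-10, 10, 400)
dx = x[1] - x[0]
phi_a = np.exp(-(x - 1.0)**2)
phi_b = np.exp(-(x + 1.0)**2)
phi_a /= np.sqrt(np.sum(np.abs(phi_a)**2) * dx)
phi_b /= np.sqrt(np.sum(np.abs(phi_b)**2) * dx)

# Two-particle wavefunctions psi(x1, x2) on the grid (rows: x1, columns: x2)
product = np.outer(phi_a, phi_b)                 # distinguishable particles
psi_sym = (product + product.T) / np.sqrt(2)     # bosons: symmetric
psi_anti = (product - product.T) / np.sqrt(2)    # fermions: anti-symmetric

# Exchanging the two coordinates corresponds to transposing the grid
print(np.allclose(psi_sym.T, psi_sym))      # True: no sign change
print(np.allclose(psi_anti.T, -psi_anti))   # True: changes sign
print(abs(psi_anti[200, 200]))              # ~0: identical fermions avoid the same position
[/code]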
 
  • #3
junglebeast said:
in quantum tunneling, a wave packet is partially emitted through a barrier, effectively splitting the wave function into reflected and emitted components. But if the wave function is truly the probability density function for some particle, then that would mean the particle must be jumping back and forth between the 2 disparate pieces of its wave function, which would mean we would often be observing particles that disappear and reappear all around us.

I think I understand what kind of image you have in mind here, where a wavefunction reflects off a wall, but part of it is transmitted. Indeed, the particle could be on either side of the wall. The problem in your statement is that we do not know which side of the wall it is on! The only way to know that would be to make a measurement. If we measure the particle, and we see that it is on the left of the wall, then it is on the left. Period. When you measure the particle, the wavefunction collapses into a spike around the position of the particle. The wavefunction to the right of the wall is now zero, and the particle is definitely on the left.

For example, the wavefunction might look like this after it has reflected off (and partially gone through) the wall:
[Image: sketch of a wavefunction with a large peak at A, to the left of the wall, and a smaller peak at B, to the right of it.]


When you now measure the particle, there is a large probability that you measure it at A, and a slightly smaller (but non-zero) probability that you measure it at B.

Let's say we measured it at A. The wavefunction after measuring now looks like this:
[Image: sketch of the wavefunction after the measurement, now a single peak at A and zero at B.]


If we measured the particle again (within a reasonable time) we would expect to find it at A again, and not at B. The wavefunction is zero at point B now, and therefore so is the probability of measuring the particle there.
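A toy numerical version of this picture (my own sketch; the peak positions, widths, and amplitudes are invented): integrate |psi|^2 on each side of the wall to get the two measurement probabilities, then zero out and renormalize the right-hand part once the particle has been found on the left.

[code]
import numpy as np

x = np.linspace(-20, 20, 2000)
dx = x[1] - x[0]
wall = 0.0  # barrier located at x = 0

# Made-up post-scattering wavefunction: large packet at A (left), small one at B (right)
psi = 1.0 * np.exp(-(x + 8)**2 / 4) + 0.3 * np.exp(-(x - 8)**2 / 4)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize: total probability is 1

p_A = np.sum(np.abs(psi[x < wall])**2) * dx   # probability of measuring it at A
p_B = np.sum(np.abs(psi[x >= wall])**2) * dx  # probability of measuring it at B
print(p_A, p_B, p_A + p_B)                    # roughly 0.92, 0.08, 1.0

# 'Collapse': the particle was found on the left, so the right-hand part is now zero
psi_after = np.where(x < wall, psi, 0.0)
psi_after /= np.sqrt(np.sum(np.abs(psi_after)**2) * dx)
print(np.sum(np.abs(psi_after[x >= wall])**2) * dx)  # 0: no chance of finding it at B any more
[/code]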
 
  • #4
Nick,

I understand what you are saying. You're agreeing with alxm that "|psi|^2 is the probability of the particle being at this position."

So far it seems that any quantum experiment would be just as easily explained by the following different interpretation:

"|psi|^2 is the probability of a particle being measured at this position"

Due to collapse of the wave function, it seems that the latter would appear to give the results of the first interpretation under nearly all measurable circumstances. Is there any specific evidence to show the latter is not true?
 
  • #5
Do you mean that there are actually two particles (or even more), one at A and one at B? If so, one could easily verify that there aren't two particles by measuring both at A and at B. Only one measurement will yield a result.
 
  • #6
But if the wave function is truly the probability density function for some particle, then that would mean the particle must be jumping back and forth between the 2 disparate pieces of its wave function, which would mean we would often be observing particles that disappear and reappear all around us.


That can happen if the Hamiltonian couples the two states. The two energy eigenstates are then two orthogonal linear combinations of the two states, with different energies.
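A minimal two-level sketch of that statement (the numbers are invented): a Hamiltonian that couples two localized states has energy eigenstates that are orthogonal combinations of them, and a particle prepared in one of the localized states then oscillates back and forth between the two.

[code]
import numpy as np

# Two localized basis states |L>, |R> with a made-up coupling between them
E0, g = 1.0, 0.1
H = np.array([[E0, g],
              [g, E0]])

energies, eigvecs = np.linalg.eigh(H)
print(energies)   # two different energies: E0 - g and E0 + g
print(eigvecs)    # columns ~ (|L> -+ |R>)/sqrt(2): orthogonal linear combinations

# Start in |L> and evolve: the probability 'sloshes' between |L> and |R>
hbar = 1.0
psi0 = np.array([1.0, 0.0], dtype=complex)
for t in (0.0, np.pi * hbar / (2 * g), np.pi * hbar / g):
    # psi(t) = V exp(-i E t / hbar) V^T psi0, using the eigenbasis V
    psi_t = eigvecs @ (np.exp(-1j * energies * t / hbar) * (eigvecs.T @ psi0))
    print(t, np.round(np.abs(psi_t)**2, 3))  # probabilities of being found in |L>, |R>
[/code]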
 
  • #7
Nick89 said:
Do you mean that there are actually two particles (or even more), one at A and one at B? If so, one could easily verify that there aren't two particles by measuring both at A and at B. Only one measurement will yield a result.

That is correct. But I can't do this experiment. From Wikipedia's page,

If conditions are right, amplitude from a traveling wave, incident onto medium type 2 from medium type 1, can "leak through" medium type 2 and emerge as a traveling wave in the second region of medium type 1 on the far side. If the second region of medium type 1 is not present, then the traveling wave incident on medium type 2 is totally reflected, although it does penetrate into medium type 2 to some extent. Depending on the wave equation being used, the leaked amplitude is interpreted physically as traveling energy or as a traveling particle

This is a very different explanation from what you and alxm are saying. The Wikipedia explanation makes a lot more sense to me. The difference is that particles can be viewed as an illusory phenomenon created by wave packets. It explains why particles can be created and destroyed, it explains why the wave function would collapse as it does (essentially a form of coagulation), and it allows a very simplistic representation of the entire universe as, essentially, a single scalar field of this wave-stuff.

In contrast, the model that you are proposing has a number of issues. First, it does not explain particles as something that can be created by waves; rather, particles are considered a fundamental thing, with a unique wave function somehow associated with each individual particle. This requires a very complex representation of the universe whose dimensionality is equal to the number of particles; i.e., the universe would be described by n scalar fields, where n is the number of particles. That does not ring true to me. Second, it does not explain why the wave function would collapse as it does. Whereas a coagulative force of some kind could explain why a single wave packet might collapse, in your explanation a single wave packet could be split into multiple wave packets that are miles apart, and when one collapses the other collapses. Perhaps that's really what happens, but I can't accept that without evidence.

The reason I'm curious about this is because unless someone specifically tries to measure the particle on both sides of a barrier after tunneling, all the OTHER experiments would have the same outcome under both models. Because we started with a particle model of physics, it is natural to see how someone designing the theory could have made this slight mistake in interpretation. So I want to find out if this exact experiment has been conducted yet, or not.
 
  • #8
Am I understanding correctly that you don't object to the wave function collapsing over a volume where it is continuously present, but you do object to the idea that a wave which has split in half and separated can suddenly collapse into one side or the other?
 
  • #9
conway said:
Am I understanding correctly that you don't object to the wave function collapsing over a volume where it is continuously present, but you do object to the idea that a wave which has split in half and separated can suddenly collapse into one side or the other?

Yeah, that's one way to put it.
 
  • #10
Then I think you believe that for a "cohesive" wave function, there is plausibly an interaction mechanism within the rules of ordinary quantum mechanics whereby the wave might converge on itself in order to appear at one point. Without any magical "collapse". Whereas for a wave that splits in half and separates over a period of time, you cannot fathom any such mechanism.
In other words, you want the "collapse of the wave function" to be an ordinary process which can be explained by following the details of the wave/detector interactions. Am I reading you correctly?
 
  • #11
The 'collapse' of the wave-function is just a weird Copenhagen-interpretation way of looking at things that makes the false assumption that a measurement is performed independently of the system being measured.

In reality two interacting systems cannot be separated, so I see no reason to believe wave functions ever truly 'collapse' in the Copenhagen sense.
 
  • #12
alxm said:
In reality two interacting systems cannot be separated, so I see no reason to believe wave functions ever truly 'collapse' in the Copenhagen sense.

The OP has given the straightforward example of a particle impinging on a barrier, so there is a clear separation between two components of the incident wave: the reflected wave and the transmitted wave. After a time, a particle is detected somewhere...either on the transmitted side or the reflected side.

Presumably up to the moment of detection, the wave function existed on both sides of the barrier. At the moment the particle is detected, what happens to that portion of the wave function which is far away?

I'm not a great fan of the "collapse of the wave function" but I'd be interested if you can explain this.
 
  • #13
conway said:
Presumably up to the moment of detection, the wave function existed on both sides of the barrier. At the moment the particle is detected, what happens to that portion of the wave function which is far away?

I'm not a great fan of the "collapse of the wave function" but I'd be interested if you can explain this.

My point was, your wave function is describing an isolated particle. But you're performing a measurement on it, so therefore it is at best an approximation. A proper description would have to include the 'measuring' system as well. If your particle is in a superposition of two different states, then the result will be that your measuring system will also be in a superposition of two states, entangled with your 'measured' system.

What Bohr was essentially doing was trying to save the classical idea of measurement being independent of the system. So you assume the single-wave function description is okay, and it 'collapses' into the measured value. It's essentially an approximation - the measuring system is classical and the 'measured' system is quantum.

While I'm generally fairly uninterested in interpretations, I hold the view (shared by many) that the apparent 'collapse' is a result of decoherence as you move to the macroscopic scale (wherein you recover the classical idea of 'measurement'), and that 'measurement' in the classical sense is essentially meaningless at the quantum scale. Which isn't to say it isn't a useful 'approximate' way of thinking about things; it's just not what's actually going on - the wave function does not truly 'collapse'.
 
  • #14
alxm said:
While I'm generally fairly uninterested in interpretations, I hold the view (shared by many) that the apparent 'collapse' is a result of decoherence as you move to the macroscopic scale (wherein you recover the classical idea of 'measurement'), and that 'measurement' in the classical sense is essentially meaningless at the quantum scale. Which isn't to say it isn't a useful 'approximate' way of thinking about things; it's just not what's actually going on - the wave function does not truly 'collapse'.

But the collapse does have real effects at the quantum level...it's the very staple of quantum computing
 
  • #15
alxm said:
The 'collapse' of the wave-function is just a weird Copenhagen-interpretation way of looking at things that makes the false assumption that a measurement is performed independently of the system being measured.

In reality two interacting systems cannot be separated, so I see no reason to believe wave functions ever truly 'collapse' in the Copenhagen sense.

I believe you are misinterpreting the CI. In CI the wave function's collapse is not qualitatively different from the classical analogue of updating a classical probability distribution given new information about the system. For example, prior to the drawing, the distribution for all tickets in a simple lottery is uniform. After the drawing it "collapses" to 100% for the winning ticket and 0 for the rest.

It is unfortunate that the term "collapse" is used. If you replace "collapse of the wavefunction" with "update of the wavefunction" in all texts you then get the correct application of the CI.

Now you are welcome to disagree with CI but please don't misrepresent it.
 
  • #16
junglebeast said:
But the collapse does have real effects at the quantum level...it's the very staple of quantum computing

The collapse is not the key; rather it is the measurement process (which then requires that we update the wavefunction).

Consider the example of an analogue sorting algorithm: you cut lengths of spaghetti to the integer values you wish to sort, then stand them vertically and tap them so they all settle on their ends. Computation-wise, this process is "faster" than digital sorting in terms of order of computation. The critical act of computation is the dissipative dynamic process of the pieces settling to equilibrium; in short, the measurement process.
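A toy digital caricature of that analogue procedure (my own sketch; in the real analogue version the tap-and-settle step does all the work in one go):

[code]
# Toy caricature of the spaghetti sort: each value becomes a rod of that length.
# In the analogue version, tapping the bundle lets all the rods settle at once
# (the dissipative 'measurement'); here we just repeatedly pick off the tallest rod.
def spaghetti_sort(values):
    rods = list(values)        # cut one rod per value
    picked = []
    while rods:
        tallest = max(rods)    # the rod your hand touches first
        picked.append(tallest)
        rods.remove(tallest)
    return picked[::-1]        # ascending order

print(spaghetti_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
[/code]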

I assert that the quantum computer is in fact a QM version of the analogue computer rather than of a digital computer which is why it is able to (in principle, computation wise) beat classical digital computers.
 
  • #17
The OP has given us a specific example and it really ought to be dealt with by the people who claim the "collapse of the wave function" is not a problem. You have an electron incident on a barrier. Part of the wave is reflected, part is transmitted. After a time, the electron is detected on one side or the other: say, 75% of the time on the reflected side, 25% on the transmitted.

Case 1: The question of where detection occurs is settled at the time of interaction with the barrier OR VERY SHORTLY THEREAFTER. After that it is only our knowledge which is uncertain...the electron will be found with certainty either at one side or the other.

Case 2: Until the very moment of detection, there is a 75:25 possibility that the electron will be found at EITHER of the two locations. It is only after detection occurs at A that the probability at B goes instantaneously to zero: the collapse of the wave function.

Am I understanding correctly that both James Baugh and alxm support Case 1??
 
  • #18
conway said:
The OP has given us a specific example and it really ought to be dealt with by the people who claim the "collapse of the wave function" is not a problem. You have an electron incident on a barrier. Part of the wave is reflected, part is transmitted. After a time, the electron is detected on one side or the other: say, 75% of the time on the reflected side, 25% on the transmitted.

Case 1: The question of where detection occurs is settled at the time of interaction with the barrier OR VERY SHORTLY THEREAFTER. After that it is only our knowledge which is uncertain...the electron will be found with certainty either at one side or the other.

Case 2: Until the very moment of detection, there is a 75:25 possibility that the electron will be found at EITHER of the two locations. It is only after detection occurs at A that the probability at B goes instantaneously to zero: the collapse of the wave function.

Am I understanding correctly that both James Baugh and alxm support Case 1??

If we assume an ideal barrier (cold and not recording the passage of the particle) then actually Case 2 holds. But I go with the CI of the "update" of the wave function.
 
  • #19
Thank you conway for trying to stay on track with my original question. However you are not asking the same question I was asking anymore.

This is a rendering of the wave function being partially reflected and transmitted through a barrier:
[Animation: a wave packet hitting a barrier, with most of it reflected and a small part transmitted through to the other side.]


An initial wave function, call it W, is split into two separate wave functions, call them Wa and Wb, one of which is transmitted and the other reflected.

Case A: Wa and Wb can be treated as separate wave functions; it is possible for Wa to collapse and it does NOT cause collapse of Wb.

Case B: When Wa collapses, Wb simultaneously collapses because they are really still part of the same wave function, regardless of their spatial separation.

The wording on wikipedia's page seems to indicate support for Case A. So far I have only seen support for case B in the replies to this thread.

My question: is there any specific evidence or experiment (as opposed to simply quoting theory) supporting one case over the other?
 
  • #20
jambaugh said:
I believe you are misinterpreting the CI. In CI the wave function's collapse is not qualitatively different from the classical analogue of updating a classical probability distribution given new information about the system. For example, prior to the drawing, the distribution for all tickets in a simple lottery is uniform. After the drawing it "collapses" to 100% for the winning ticket and 0 for the rest.

I understand what you're saying, but I don't see how this contradicts anything I said.

You're repeating the same underlying assumption, phrased differently. The point was: You can't have information about the system independently of the system. Say you have a system that's a superposition of two states: [tex]|\psi>_{measured} = |0>_{measured} + |1>_{measured}[/tex]. You're saying that you perform a 'measurement' and the state becomes either |0> or |1>. How do you measure a system? By interacting with it.

The result of such an interaction, when you model it entirely quantum-mechanically, is an entangled state between the 'measuring' and 'measured' systems. You don't really gain any information from interacting at the quantum level. Which is why the Copenhagen Interpretation assumes classical measurement. That assumption is obviously false. In which case you have to ask where this 'collapse' supposedly comes from. That isn't to say it doesn't work, I already said it does. I'm saying it's simply not possible for it to be a true picture of what's going on, since the assumption it's based on is known to be false.
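Here is a small toy-model sketch of that point (my own illustration): treat the measuring device as a single 'pointer' qubit, let the interaction correlate it with the system, and look at what is left of the system's state on its own.

[code]
import numpy as np

# System starts in a superposition; the pointer starts in a 'ready' state |0>
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)
system = (ket0 + ket1) / np.sqrt(2)
pointer = ket0

joint = np.kron(system, pointer)   # |psi> (x) |ready>, basis order |00>,|01>,|10>,|11>

# Toy 'measurement' interaction: flip the pointer when the system is in |1> (a CNOT)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
entangled = cnot @ joint           # (|0>|0> + |1>|1>)/sqrt(2): not a product state

# Reduced density matrix of the system alone: trace out the pointer
rho = np.outer(entangled, entangled.conj()).reshape(2, 2, 2, 2)
rho_system = np.trace(rho, axis1=1, axis2=3)
print(rho_system.real)   # diag(0.5, 0.5) with zero off-diagonals: the coherence has
                         # moved into the correlations with the 'measuring' system
[/code]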

What I'm talking about is essentially what Steven Weinberg is talking about on Wikipedia's "Copenhagen interpretation" page (http://en.wikipedia.org/wiki/Copenhagen_interpretation).
 
  • #21
conway said:
The OP has given us a specific example and it really ought to be dealt with by the people who claim the "collapse of the wave function" is not a problem. You have an electron incident on a barrier. Part of the wave is reflected, part is transmitted. After a time, the electron is detected on one side or the other: say, 75% of the time on the reflected side, 25% on the transmitted.

Case 1: The question of where detection occurs is settled at the time of interaction with the barrier OR VERY SHORTLY THEREAFTER. After that it is only our knowledge which is uncertain...the electron will be found with certainty either at one side or the other.

Case 2: Until the very moment of detection, there is a 75:25 possibility that the electron will be found at EITHER of the two locations. It is only after detection occurs at A that the probability at B goes instantaneously to zero: the collapse of the wave function.

Am I understanding correctly that both James Baugh and alxm support Case 1??
Is this really correct? If so, then I am pretty sure I have been taught wrong...
I have always been taught that it is not the fact that we don't know the position of the particle, but the fact that we cannot know the position of the particle (because it doesn't have a well-defined position). You seem to say that the particle does have a well-defined position (either left or right of the wall) but we simply don't know it. Isn't this some kind of hidden variable theory?


junglebeast said:
Thank you conway for trying to stay on track with my original question. However you are not asking the same question I was asking anymore.

This is a rendering of the wave function being partially reflected and transmitted through a barrier:
[Animation: a wave packet hitting a barrier, with most of it reflected and a small part transmitted through to the other side.]


An initial wave function, call it W, is split into two separate wave functions, call them Wa and Wb, one of which is transmitted and the other reflected.

Case A: Wa and Wb can be treated as separate wave functions; it is possible for Wa to collapse and it does NOT cause collapse of Wb.

Case B: When Wa collapses, Wb simultaneously collapses because they are really still part of the same wave function, regardless of their spatial separation.

The wording on wikipedia's page seems to indicate support for Case A. So far I have only seen support for case B in the replies to this thread.

My question: is there any specific evidence or experiment (as opposed to simply quoting theory) supporting one case over the other?
Oh I see, I too misunderstood your question. Good question, I don't know the answer!
 
  • #22
Yes, that's a good question.

I'm quite sure the conventional "theory" demands that we go with Case B. But I'm going to go out on a limb and say it's very difficult to carry out the experiment which makes the case conclusively. You set up two detectors and look for simultaneous events, indicating that the wave fragments each independently caused a detection event at the same time. The lack of coincidences is supposed to support Case B.

One problem with this experiment is it doesn't rule out what I called Case 1: the outcome being settled at the time of barrier penetration.
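A Monte Carlo caricature of that two-detector experiment (the transmission probability and the 'independent packets' model are my own assumptions, just to show what each picture would predict for coincidences):

[code]
import numpy as np

rng = np.random.default_rng(0)
n_runs = 100_000
T = 0.25   # assumed transmission probability; reflection probability is 1 - T

# Standard single-wavefunction picture (Case B): one click per run, on one side only
transmitted = rng.random(n_runs) < T
caseB_coincidences = 0   # by construction the particle is only ever found on one side
print("Case B: transmitted fraction", transmitted.mean(), "coincidences", caseB_coincidences)

# Hypothetical 'two independent packets' picture (Case A): each side clicks on its own
left_clicks = rng.random(n_runs) < (1 - T)
right_clicks = rng.random(n_runs) < T
caseA_coincidences = int(np.sum(left_clicks & right_clicks))
print("Case A: coincidences", caseA_coincidences, "expected ~", (1 - T) * T * n_runs)
[/code]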
 
  • #23
junglebeast said:
An initial wave function, call it W, is split into two separate wave functions, call them Wa and Wb, one of which is transmitted and the other reflected.

Wrong. It is not split into two wave functions. A single wave function is a single wave function no matter what the spatial distance is.

My question: is there any specific evidence or experiment (as opposed to simply quoting theory) supporting one case over the other?

Your case B is the correct one. And yes, there's lots of evidence. As in: Every single experiment ever done on an entangled state. An entangled state, in its simplest form, is two particles describable by one wave function, not a sum of the wave functions of two independent particles.

When a measurement (of the entangled property) is performed on one of the particles that wave function describes, it 'collapses', and is thereafter separable into two wave functions for two particles that are now independent of each other. And you have knowledge of both.

In the case of a single particle that's either reflected or not, if you insist on looking at it as two wave functions, then it's two wave functions entangled with respect to location. And measuring the existence of the particle at one location or the other will cause both to 'collapse' into whatever that position is. It will no longer have any probability of existing at the other location.
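For a concrete (made-up) illustration of the simplest kind of entangled state mentioned above: two particles, two locations A and B, one particle at each location, but neither particle having a definite location of its own until a measurement is made.

[code]
import numpy as np

rng = np.random.default_rng(1)

# Pair basis ordering: |AA>, |AB>, |BA>, |BB>  (first letter = particle 1's location)
psi_pair = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)   # (|AB> + |BA>)/sqrt(2)

for _ in range(5):
    probs = np.abs(psi_pair)**2
    outcome = rng.choice(4, p=probs)            # sample a joint measurement outcome
    particle1, particle2 = divmod(outcome, 2)   # 0 = location A, 1 = location B
    # After the measurement the pair is in a product state: finding particle 1
    # immediately tells you where particle 2 is, with certainty.
    print("particle 1 at", "AB"[particle1], "-> particle 2 at", "AB"[particle2])
[/code]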

Also, why are you dismissive of "quoting theory"? Because honestly, it doesn't seem you've quite understood the theory yet.
 
  • #24
When a measurement (of the entangled property) is performed on one of the particles that wave function describes, it 'collapses', and is thereafter separable into two wave functions for two particles that are now independent of each other. And you have knowledge of both.

Have you considered that the barrier effectively performs a "measurement" on the particle represented by the original wave function, causing it to collapse into 2 separable wave functions (packets) that are not entangled?

alxm said:
Wrong. It is not split into two wave functions. A single wave function is a single wave function no matter what the spatial distance is.

Whether you call it a "bimodal wave function" or 2 "unimodal wave functions" is just semantics. You can certainly call it two "wave packets", because the definition of a wave packet is one localized probability density function.

A single particle and its uncertainty can be described by a single wave packet. See the first paragraph here, and the wiki demonstration below.
http://en.wikipedia.org/wiki/Wave_packet
http://demonstrations.wolfram.com/WavepacketForAFreeParticle/

A wave packet can be split into 2 wave packets by a barrier. That is established by the tunneling page I linked and associated simulation rendering which makes this quite apparent.

So, we have:
Fact 1: A free particle can be represented by a wave packet
Fact 2: A wave packet can interact with a barrier to split into 2 separate wave packets

By simple logic, it follows directly from these 2 facts that "A particle may be split into 2 particles by interaction with a barrier." And hence, these two resulting split wave packets will not be bound to each other.
 
  • #25
You can make a small opening on both sides. There is then a small probability per unit time that the particle escapes through either hole. The probability that the escaping particle will be detected at some position will then show an interference pattern, which indicates that you had a single wavefunction.
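A rough far-field sketch of that prediction (all numbers invented): if the amplitude leaking out of the two openings belongs to one wavefunction, the amplitudes add and the detection probability on a distant screen shows fringes; if there really were two independent packets/particles, the probabilities would add instead and the fringes would disappear.

[code]
import numpy as np

k = 2 * np.pi   # wavenumber (arbitrary units, wavelength = 1)
d = 5.0         # separation between the two openings
L = 100.0       # distance to the detection screen
x = np.linspace(-40, 40, 801)   # detector positions on the screen

r1 = np.sqrt(L**2 + (x - d / 2)**2)   # path length from opening 1
r2 = np.sqrt(L**2 + (x + d / 2)**2)   # path length from opening 2

# One wavefunction leaking through both openings: add amplitudes, then square
coherent = np.abs(np.exp(1j * k * r1) + np.exp(1j * k * r2))**2

# Hypothetical independent packets: add probabilities, no interference
incoherent = np.abs(np.exp(1j * k * r1))**2 + np.abs(np.exp(1j * k * r2))**2

print("coherent pattern ranges over ", round(coherent.min(), 3), "to", round(coherent.max(), 3))
print("incoherent pattern is flat at", round(incoherent.min(), 3), "=", round(incoherent.max(), 3))
[/code]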
 
  • #26
alxm said:
Your case B is the correct one. And yes, there's lots of evidence. As in: Every single experiment ever done on an entangled state. An entangled state, in its simplest form, is two particles describable by one wave function, not a sum of the wave functions of two independent particles.

If an entangled state as you describe it involves two particles, then I don't see that we're dealing with that in the present example. It's just one electron fired at a barrier.

In the case of a single particle that's either reflected or not, if you insist on looking at it as two wave functions, then it's two wave functions entangled with respect to location. And measuring the existence of the particle at one location or the other will cause both to 'collapse' into whatever that position is. It will no longer have any probability of existing at the other location.

So the question is: how has that been verified experimentally?
 
  • #27
conway said:
So the question is: how has that been verified experimentally?

Couldn't tunneling of states (as opposed to particles), which after all is more general, be viewed as evidence of this?
There are a few well-known systems (a Josephson junction would be one example) where the system is initially trapped in an insulated potential well but can tunnel from this (nearly) dissipationless state to a dissipative state where the system is subject to very rapid decoherence (and is therefore "measured"). This system has e.g. been used as a qubit.

As far as I know, no one has ever observed a situation where the system BOTH tunnels out into the dissipative state AND stays in the well.
 
  • #28
junglebeast said:
Have you considered that the barrier effectively performs a "measurement" on the particle represented by the original wave function, causing it to collapse into 2 separable wave functions (packets) that are not entangled?

No, it does not perform a 'measurement'. There is no need to bring the Copenhagen interpretation into this problem. Go read up on scattering theory.

Whether you call it a "bimodal wave function" or 2 "unimodal wave functions" is just semantics. You can certainly call it two "wave packets", because the definition of a wave packet is one localized probability density function.

You cannot treat it as such. If you have one localized wave function (wave packet) at one point in time, which then hits a barrier and disperses, the two (or however many) dispersal nodes you get will be interdependent. They do not form a linear superposition of states, as would be the case for separate, independent wave packets.

A wave packet can be split into 2 wave packets by a barrier. That is established by the tunneling page I linked and associated simulation rendering which makes this quite apparent.

Yes, but you don't seem to have done the math. The fact that a wave function has two separate nodes does not make it two separate wave functions representing two independent states of two independent particles.
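One quick way to see the single-wavefunction point numerically (a toy sketch; the reflection/transmission split and the packet shapes are invented): after the barrier, the two lumps together still carry the total probability of exactly one particle, so they cannot be two independent particles.

[code]
import numpy as np

x = np.linspace(-50, 50, 5000)
dx = x[1] - x[0]

# Made-up post-tunneling wavefunction: reflected lump on the left, transmitted lump
# on the right, with assumed reflection/transmission probabilities R and T
R, T = 0.75, 0.25
left = np.exp(-(x + 20)**2 / 8)
right = np.exp(-(x - 20)**2 / 8)
left /= np.sqrt(np.sum(np.abs(left)**2) * dx)
right /= np.sqrt(np.sum(np.abs(right)**2) * dx)
psi = np.sqrt(R) * left + np.sqrt(T) * right   # one wavefunction with two well-separated lumps

p_left = np.sum(np.abs(psi[x < 0])**2) * dx
p_right = np.sum(np.abs(psi[x >= 0])**2) * dx
print(p_left, p_right, p_left + p_right)   # ~0.75, ~0.25, 1.0: still exactly one particle
[/code]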

By simple logic, it follows directly from these 2 facts that "A particle may be split into 2 particles by interaction with a barrier." And hence, these two resulting split wave packets will not be bound to each other.

Show it mathematically then.
 
  • #29
junglebeast said:
Thank you conway for trying to stay on track with my original question. However you are not asking the same question I was asking anymore.

...
My question: is there any specific evidence or experiment (as opposed to simply quoting theory) supporting one case over the other?

You can't separate these issues. Via CI the wave function is a theoretical construct used to describe the behavior of the physical particle. It is the theory which dictates how the wave function behaves and then it is the wave function which tells us what the theory predicts about the physical behavior of the particle.

In short, we don't measure wave functions; they are not observable. In CI they are not physical. In other interpretations they are given different ontological status but are nonetheless still not observable.

The theory dictates case 2 w.r.t. the wave function, and the probabilistic behavior of the particle is, as far as experiments have been able to determine, consistent with the theory. But the theory cannot give, and indeed (under CI) explicitly prohibits, any ontological description of the physical particle except in terms of what is/was/will be/might be... measured.

Quantum theory is not based on an ontological model like classical theory, but rather on a distinctly phenomenological model.
 
  • #30
conway said:
So the question is: how has that been verified experimentally?

Through the experimental verifications of quantum mechanics.
 
  • #31
f95toli said:
Couldn't tunneling of states (as opposed to particles), which after all is more general, be viewed as evidence of this?
There are a few well-known systems (a Josephson junction would be one example) where the system is initially trapped in an insulated potential well but can tunnel from this (nearly) dissipationless state to a dissipative state where the system is subject to very rapid decoherence (and is therefore "measured"). This system has e.g. been used as a qubit.

As far as I know, no one has ever observed a situation where the system BOTH tunnels out into the dissipative state AND stays in the well.

I'm sorry, but I'm not able to comment on your example.
 
  • #33
conway said:
I'm sorry, but I'm not able to comment on your example.

That is a shame, because it is a good example. :smile:

But the basic idea is quite simple.
Imagine a particle trapped in a well: to the left there is an infinite wall; to the right, a barrier with some finite height (meaning there is a non-zero tunneling probability). Hence, to start with we have a "particle in a box" situation, with the wavefunction localized to the well.
Now, in addition to this we assume that if the particle leaves the well by tunneling to the right it goes into a state with a lot of dissipation; i.e. it becomes a "classical particle" and is instantly "measured".

Now, this turns out to be a good description of several REAL systems and has been studied experimentally for over 25 years (there are whole books about this, see e.g. Takagi's "Macroscopic Quantum Tunneling" which is slightly odd in places, but otherwise quite good).

The reason why I thought this might be a good example is that the tunneling is a very well defined process; in some of these systems it is quite literally a SINGLE state (described by a single-particle wavefunction) that tunnels (as opposed to ensembles of particles and other messy situations), which in turn means that it is an ideal "toy system", both theoretically and experimentally (and the experimental data agree very well with theory).
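For flavour, here is a generic one-dimensional WKB estimate of a tunneling probability (this is only a toy calculation with invented numbers, not the actual Josephson-junction analysis in the books mentioned above):

[code]
import numpy as np

# WKB estimate: T ~ exp(-2/hbar * integral of sqrt(2 m (V(x) - E)) dx over the barrier)
# Natural units hbar = m = 1; the barrier shape and the energy are invented.
hbar, m = 1.0, 1.0
E = 0.5   # energy of the trapped state

def V(x):
    return 2.0 * np.exp(-(x - 3.0)**2)   # a smooth barrier to the right of the well

x = np.linspace(0.0, 6.0, 10_000)
dx = x[1] - x[0]
forbidden = V(x) > E                     # classically forbidden region under the barrier
kappa = np.sqrt(2 * m * (V(x[forbidden]) - E)) / hbar
action = np.sum(kappa) * dx              # crude numerical integral
print("WKB tunneling probability ~", np.exp(-2 * action))
[/code]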
 
  • #34
f95toli said:
That is a shame, because it is a good example. :smile:

I'm sure it is and I appreciate your elaboration on it. I hope you don't think the idea of tunneling itself was in question, by the way. These discussions have a way of going off in a hundred different directions. And they often end with someone weighing in with the important revelation that "quantum mechanics is the most accurate theory ever devised by man".

I still think the OP had a reasonable question in terms of what kind of experimental verification could be done for the oft-cited thought experiment in question. I'm reluctant to get any further into the correspondence because it seems to have taken a vaguely unpleasant turn.
 
  • #35
jambaugh said:
You can't separate these issues. Via CI the wave function is a theoretical construct used to describe the behavior of the physical particle. In short, we don't measure wave functions; they are not observable. In CI they are not physical. In other interpretations they are given different ontological status but are nonetheless still not observable.

In this case, the question has a physical meaning, and so it can be answered via direct measurements without resorting to wave packet theory. I.e., take a closed system containing a barrier and a free particle. Fire the free particle at the barrier (with no holes in it), then put a detector on both sides of the barrier. If a particle is detected on both sides of the barrier, then this proves that a particle can split itself into non-entangled reflected and transmitted particles via quantum tunneling, which effectively answers the original question without resorting to wave theory.
 
