# Quantum tunneling

Wave packets / the wave function is described as the probability density function of a particle, implying that the particle exists at exactly one location at a time according to its associated wave function. This does not make sense to me on many levels, and it seems inconsistent with quantum tunneling: in quantum tunneling, a wave packet is partially transmitted through a barrier, effectively splitting the wave function into a reflected and a transmitted component. But if the wave function is truly the probability density function for some particle, then the particle would have to jump back and forth between the two disparate pieces of its wave function, and we would often observe particles that disappear and reappear all around us. It does not really clarify the wave-particle duality issue either.

If, on the other hand, I accept that the wave function gives the probability of finding a particle at a location, then this makes more sense in both contexts, because it explains particles as simply some kind of "coagulation" of the waves. Everything can then be thought of as the density of a particular frequency at each point in space, creating the illusion of a finite set of particles through the cohesive abilities of the waves.

Also, how is the wave function different for different kinds of particles? And is it ever possible for quantum tunneling to create a different type of particle on the far side of the barrier?


## Answers and Replies

alxm
Well, first, the wave function isn't the probability density function. |psi|^2 is.

Macroscopic objects have a quite definite location, and do not tunnel to any appreciable extent. This is all in any introductory textbook.

The wave function is different for different particles depending on whether they're bosons or fermions. In the former case it's defined as being symmetric (does not change sign) when the coordinates of two identical particles are interchanged; in the latter case it's anti-symmetric (changes sign). The wave function also changes with mass. But any other dependence of the wave function on particle 'type' depends on whether or not you include those properties in the Hamiltonian.
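The symmetry rule above can be sketched in a few lines. This is a toy illustration, assuming two hypothetical one-dimensional Gaussian orbitals (not any particular physical system):

```python
import math

def phi_a(x):
    # hypothetical single-particle orbital: Gaussian centered at -1
    return math.exp(-(x + 1.0) ** 2)

def phi_b(x):
    # hypothetical single-particle orbital: Gaussian centered at +1
    return math.exp(-(x - 1.0) ** 2)

def psi_boson(x1, x2):
    # symmetric combination: unchanged when the two particles are swapped
    return phi_a(x1) * phi_b(x2) + phi_a(x2) * phi_b(x1)

def psi_fermion(x1, x2):
    # antisymmetric combination: changes sign when the two particles are swapped
    return phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

x1, x2 = 0.3, -0.7
assert abs(psi_boson(x1, x2) - psi_boson(x2, x1)) < 1e-12
assert abs(psi_fermion(x1, x2) + psi_fermion(x2, x1)) < 1e-12
# for fermions, both particles at the same point gives zero amplitude (Pauli exclusion)
assert abs(psi_fermion(0.5, 0.5)) < 1e-12
```

Note that the antisymmetric combination vanishes automatically when the two coordinates coincide, which is the Pauli exclusion principle falling out of the sign rule.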

> in quantum tunneling, a wave packet is partially transmitted through a barrier, effectively splitting the wave function into a reflected and a transmitted component. But if the wave function is truly the probability density function for some particle, then the particle would have to jump back and forth between the two disparate pieces of its wave function, and we would often observe particles that disappear and reappear all around us.

I think I understand what kind of image you have in mind here, where a wavefunction reflects off a wall, but part of it is transmitted. Indeed, the particle could be on either side of the wall. The problem in your statement is that we do not know which side of the wall it is on! The only way to know would be to make a measurement. If we measure the particle and see that it is on the left of the wall, then it is on the left. Period. When you measure the particle, the wavefunction collapses into a spike around the position of the particle. The wavefunction to the right of the wall is now zero, and the particle is definitely on the left.

For example: The wavefunction might look like this after it has reflected (and partially gone through) the wall:
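Since the figure is not shown here, a rough numerical stand-in may help: for a plane wave of energy E hitting a rectangular barrier of height V0 > E and width a, the standard textbook formula gives the sizes of the transmitted and reflected pieces of the split wave. The particular numbers below are illustrative, in units where ħ = 1 and 2m = 1:

```python
import math

def transmission(E, V0, a):
    """Transmission probability for a plane wave of energy E < V0 incident on a
    rectangular barrier of height V0 and width a (units hbar = 1, 2m = 1).
    Standard textbook result: T = [1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E))]^-1."""
    kappa = math.sqrt(V0 - E)          # decay constant inside the barrier
    s = math.sinh(kappa * a)
    return 1.0 / (1.0 + (V0 ** 2 * s * s) / (4.0 * E * (V0 - E)))

E, V0, a = 1.0, 2.0, 1.0               # illustrative: energy below the barrier top
T = transmission(E, V0, a)             # weight of the transmitted piece
R = 1.0 - T                            # weight of the reflected piece

assert 0.0 < T < 1.0                   # some amplitude leaks through
assert abs(T + R - 1.0) < 1e-12        # probability is conserved
assert transmission(E, V0, 2.0) < T    # a thicker barrier transmits less
```

The point for this thread: both T and R are nonzero, so before any measurement the single wavefunction genuinely has weight on both sides of the barrier.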

When you now measure the particle, there is a large probability that you measure it at A, and a slightly smaller (but non-zero) probability that you measure it at B.

Let's say we measured it at A. The wavefunction after measuring now looks like this:

If we would measure the particle again (within reasonable time) we would expect to measure it again at A, and not at B. The wavefunction is zero at point B now, and therefore so is the probability of measuring it there.
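The measurement rule described above can be sketched as a toy Monte Carlo, assuming illustrative probabilities of 0.75 at A and 0.25 at B: sample a position from |ψ|², then "collapse" so that an immediate second measurement repeats the first result:

```python
import random

random.seed(0)

# pre-measurement probabilities |psi|^2 at the two regions (illustrative numbers)
prob = {"A": 0.75, "B": 0.25}

def measure(prob):
    """Sample a position from |psi|^2, then 'collapse': the returned
    distribution is a spike at the observed position."""
    r = random.random()
    outcome = "A" if r < prob["A"] else "B"
    collapsed = {pos: (1.0 if pos == outcome else 0.0) for pos in prob}
    return outcome, collapsed

outcome, collapsed = measure(prob)
# an immediate second measurement repeats the first result with certainty
second, _ = measure(collapsed)
assert second == outcome

# over many fresh preparations, the frequencies approach 75:25
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[measure(prob)[0]] += 1
assert abs(counts["A"] / 10_000 - 0.75) < 0.03
```

Each fresh preparation is a new wavefunction; only repeated measurements on the *same* collapsed state are guaranteed to agree.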

Nick,

I understand what you are saying. You're agreeing with alxm that "|psi|^2 is the probability of the particle being at this position."

So far it seems that any quantum experiment would be just as easily explained by the following different interpretation:

"|psi|^2 is the probability of a particle being measured at this position"

Due to collapse of the wave function, it seems that the latter would appear to give the results of the first interpretation under nearly all measurable circumstances. Is there any specific evidence to show the latter is not true?

Do you mean that there are actually two particles (or even more), one at A and one at B? If so, one could easily verify that there aren't two particles by measuring both at A and at B. Only one measurement will yield a result.

But if the wave function is truly the probability density function for some particle, then that would mean the particle must be jumping back and forth between the 2 disparate pieces of its wave function, which would mean we would often be observing particles that disappear and reappear all around us.

That can happen if the Hamiltonian couples the two states. In that case the two energy eigenstates are two orthogonal linear combinations of the two states, with different energies, and the system oscillates between them.
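A minimal sketch of that coupling, assuming a symmetric two-level Hamiltonian H = [[E0, g], [g, E0]] with hypothetical values for the on-site energy E0 and coupling g (ħ = 1): the energy eigenstates are the orthogonal combinations (|L⟩ ± |R⟩)/√2 with energies E0 ± g, and a particle prepared on one side oscillates between the two sides at a rate set by g:

```python
import cmath, math

E0, g = 1.0, 0.25        # hypothetical on-site energy and coupling (hbar = 1)

def amplitudes(t):
    """State at time t for a particle that starts on the left site of the
    symmetric two-level Hamiltonian H = [[E0, g], [g, E0]].
    The energy eigenstates are (|L> +- |R>)/sqrt(2) with energies E0 +- g."""
    ep = cmath.exp(-1j * (E0 + g) * t)   # phase of the symmetric eigenstate
    em = cmath.exp(-1j * (E0 - g) * t)   # phase of the antisymmetric eigenstate
    aL = (ep + em) / 2.0                 # amplitude to be found on the left
    aR = (ep - em) / 2.0                 # amplitude to be found on the right
    return aL, aR

t = 0.8
aL, aR = amplitudes(t)
# total probability is conserved
assert abs(abs(aL) ** 2 + abs(aR) ** 2 - 1.0) < 1e-12
# the particle oscillates between the sites: P_R(t) = sin^2(g t)
assert abs(abs(aR) ** 2 - math.sin(g * t) ** 2) < 1e-12
```

So with a genuine coupling the particle really does shuttle between the two regions, but smoothly and unitarily, not by discontinuous jumps.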

> Do you mean that there are actually two particles (or even more), one at A and one at B? If so, one could easily verify that there aren't two particles by measuring both at A and at B. Only one measurement will yield a result.

That is correct. But I can't do this experiment. From Wikipedia's page:

> If conditions are right, amplitude from a traveling wave, incident onto medium type 2 from medium type 1, can "leak through" medium type 2 and emerge as a traveling wave in the second region of medium type 1 on the far side. If the second region of medium type 1 is not present, then the traveling wave incident on medium type 2 is totally reflected, although it does penetrate into medium type 2 to some extent. Depending on the wave equation being used, the leaked amplitude is interpreted physically as traveling energy or as a traveling particle.

This is a very different explanation from what you and alxm are saying. The Wikipedia explanation makes a lot more sense to me. The difference is that particles can be viewed as an illusory phenomenon created by wave packets. It explains why particles can be created and destroyed, it explains why the wave function would collapse as it does (essentially a form of coagulation), and it allows a very simple representation of the entire universe as, essentially, a single scalar field of this wave-stuff.

In contrast, the model that you are proposing has a number of issues. First, it does not explain particles as something that can be created by waves; rather, particles are considered fundamental things, each of which somehow has a unique wave function associated with it. This requires a very complex representation of the universe whose dimensionality equals the number of particles; i.e., the universe would be described as n scalar fields, where n is the number of particles. That does not ring true to me. Second, it does not explain why the wave function would collapse as it does. Whereas a coagulative force of some kind could explain why a single wave packet might collapse, in your explanation a single wave packet could be split into multiple wave packets that are miles apart, and when one collapses the other collapses too. Perhaps that's really what happens, but I can't accept that without evidence.

The reason I'm curious about this is because unless someone specifically tries to measure the particle on both sides of a barrier after tunneling, all the OTHER experiments would have the same outcome under both models. Because we started with a particle model of physics, it is natural to see how someone designing the theory could have made this slight mistake in interpretation. So I want to find out if this exact experiment has been conducted yet, or not.

Am I understanding correctly that you don't object to the wave function collapsing over a volume where it is continuously present, but you do object to the idea that a wave which has split in half and separated can suddenly collapse into one side or the other?

> Am I understanding correctly that you don't object to the wave function collapsing over a volume where it is continuously present, but you do object to the idea that a wave which has split in half and separated can suddenly collapse into one side or the other?

Yeah, that's one way to put it.

Then I think you believe that for a "cohesive" wave function, there is plausibly an interaction mechanism within the rules of ordinary quantum mechanics whereby the wave might converge on itself in order to appear at one point. Without any magical "collapse". Whereas for a wave that splits in half and separates over a period of time, you cannot fathom any such mechanism.
In other words, you want the "collapse of the wave function" to be an ordinary process which can be explained by following the details of the wave/detector interactions. Am I reading you correctly?

alxm
The 'collapse' of the wave-function is just a weird Copenhagen-interpretation way of looking at things that makes the false assumption that a measurement is performed independently of the system being measured.

In reality two interacting systems cannot be separated, so I see no reason to believe wave functions ever truly 'collapse' in the Copenhagen sense.

> In reality two interacting systems cannot be separated, so I see no reason to believe wave functions ever truly 'collapse' in the Copenhagen sense.
The OP has given the straightforward example of a particle impinging on a barrier, so there is a clear separation between two components of the incident wave: the reflected wave and the transmitted wave. After a time, a particle is detected somewhere...either on the transmitted side or the reflected side.

Presumably up to the moment of detection, the wave function existed on both sides of the barrier. At the moment the particle is detected, what happens to that portion of the wave function which is far away?

I'm not a great fan of the "collapse of the wave function" but I'd be interested if you can explain this.

alxm
> Presumably up to the moment of detection, the wave function existed on both sides of the barrier. At the moment the particle is detected, what happens to that portion of the wave function which is far away?
>
> I'm not a great fan of the "collapse of the wave function" but I'd be interested if you can explain this.
My point was: your wave function is describing an isolated particle. But you're performing a measurement on it, so it is at best an approximation. A proper description would have to include the 'measuring' system as well. If your particle is in a superposition of two different states, then the result will be that your measuring system will also be in a superposition of two states, entangled with your 'measured' system.

What Bohr was essentially doing was trying to save the classical idea of measurement being independent of the system. So you assume the single-wave function description is okay, and it 'collapses' into the measured value. It's essentially an approximation - the measuring system is classical and the 'measured' system is quantum.

While I'm generally fairly uninterested in interpretations, I hold the view (shared by many) that the apparent 'collapse' is a result of decoherence as you move to the macroscopic scale (wherein you recover the classical idea of 'measurement'), and that 'measurement' in the classical sense is essentially meaningless at the quantum scale. Which isn't to say it isn't a useful 'approximate' way of thinking about things; it's just not what's actually going on - the wave function does not truly 'collapse'.

> While I'm generally fairly uninterested in interpretations, I hold the view (shared by many) that the apparent 'collapse' is a result of decoherence as you move to the macroscopic scale (wherein you recover the classical idea of 'measurement'), and that 'measurement' in the classical sense is essentially meaningless at the quantum scale. Which isn't to say it isn't a useful 'approximate' way of thinking about things; it's just not what's actually going on - the wave function does not truly 'collapse'.

But the collapse does have real effects at the quantum level... it's the very staple of quantum computing.

jambaugh
> The 'collapse' of the wave-function is just a weird Copenhagen-interpretation way of looking at things that makes the false assumption that a measurement is performed independently of the system being measured.
>
> In reality two interacting systems cannot be separated, so I see no reason to believe wave functions ever truly 'collapse' in the Copenhagen sense.

I believe you are misinterpreting the CI. In the CI the wave function's collapse is not qualitatively different from the classical analogue of updating a probability distribution given new information about the system. For example, prior to the drawing, the distribution over all tickets in a simple lottery is uniform. After the drawing it "collapses" to 100% for the winning ticket and 0 for the rest.

It is unfortunate that the term "collapse" is used. If you replace "collapse of the wavefunction" with "update of the wavefunction" in all texts you then get the correct application of the CI.

Now you are welcome to disagree with CI but please don't misrepresent it.

jambaugh
> But the collapse does have real effects at the quantum level... it's the very staple of quantum computing

The collapse is not the key; rather, it is the measurement process (which then requires that we update the wavefunction).

Consider the example of an analogue sorting algorithm: you cut lengths of spaghetti to the integer values you wish to sort, then you stand them vertically and tap them so they all settle on their ends. Computation-wise, this process is "faster" than digital sorting in terms of order of computation. The critical act of computation is the dissipative dynamic process of the pieces settling to equilibrium; in short, the measurement process.
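A toy digital simulation of the spaghetti analogy may make the idea concrete. The point, as noted above, is that the analogue version does the "find the tallest" step by physical settling rather than by comparisons; this loop only imitates it:

```python
def spaghetti_sort(values):
    """Toy simulation of the analogue spaghetti sort described above:
    stand all the rods up at once, then repeatedly lower a hand and
    pick off the tallest rod that touches it first."""
    rods = list(values)          # cut one rod per value
    result = []
    while rods:
        tallest = max(rods)      # the 'hand' touches the tallest rod first
        result.append(tallest)
        rods.remove(tallest)     # set that rod aside
    return result                # tallest first, i.e. descending order

assert spaghetti_sort([3, 1, 4, 1, 5, 9, 2, 6]) == [9, 6, 5, 4, 3, 2, 1, 1]
```

The digital simulation is O(n²); the claimed speedup lives entirely in the physical settling and "touch the tallest" steps, which the hardware does in constant time per rod.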

I assert that the quantum computer is in fact a QM version of the analogue computer rather than of a digital computer, which is why it is able (in principle, computation-wise) to beat classical digital computers.

The OP has given us a specific example and it really ought to be dealt with by the people who claim the "collapse of the wave function" is not a problem. You have an electron incident on a barrier. Part of the wave is reflected, part is transmitted. After a time, the electron is detected on one side or the other: say, 75% of the time on the reflected side, 25% on the transmitted.

Case 1: The question of where detection occurs is settled at the time of interaction with the barrier OR VERY SHORTLY THEREAFTER. After that it is only our knowledge which is uncertain...the electron will be found with certainty either at one side or the other.

Case 2: Until the very moment of detection, there is a 75:25 possibility that the electron will be found at EITHER of the two locations. It is only after detection occurs at A that the probability at B goes instantaneously to zero: the collapse of the wave function.

Am I understanding correctly that both James Baugh and alxm support Case 1??

jambaugh
> The OP has given us a specific example and it really ought to be dealt with by the people who claim the "collapse of the wave function" is not a problem. You have an electron incident on a barrier. Part of the wave is reflected, part is transmitted. After a time, the electron is detected on one side or the other: say, 75% of the time on the reflected side, 25% on the transmitted.
>
> Case 1: The question of where detection occurs is settled at the time of interaction with the barrier OR VERY SHORTLY THEREAFTER. After that it is only our knowledge which is uncertain...the electron will be found with certainty either at one side or the other.
>
> Case 2: Until the very moment of detection, there is a 75:25 possibility that the electron will be found at EITHER of the two locations. It is only after detection occurs at A that the probability at B goes instantaneously to zero: the collapse of the wave function.
>
> Am I understanding correctly that both James Baugh and alxm support Case 1??

If we assume an ideal barrier (cold, and not recording the passage of the particle) then actually Case 2 holds. But I go with the CI's "update" of the wave function.

Thank you conway for trying to stay on track with my original question. However you are not asking the same question I was asking anymore.

This is a rendering of the wave function being partially reflected and transmitted through a barrier:

An initial wave function, call it W, is split into two separate wave functions, call them Wa and Wb, one of which is transmitted and the other reflected.

Case A: Wa and Wb can be treated as separate wave functions; it is possible for Wa to collapse and it does NOT cause collapse of Wb.

Case B: When Wa collapses, Wb simultaneously collapses because they are really still part of the same wave function, regardless of their spatial separation.

The wording on Wikipedia's page seems to indicate support for Case A. So far I have only seen support for Case B in the replies to this thread.

My question: is there any specific evidence or experiment (as opposed to simply quoting theory) supporting one case over the other?

alxm
> I believe you are misinterpreting the CI. In the CI the wave function's collapse is not qualitatively different from the classical analogue of updating a probability distribution given new information about the system. For example, prior to the drawing, the distribution over all tickets in a simple lottery is uniform. After the drawing it "collapses" to 100% for the winning ticket and 0 for the rest.
I understand what you're saying, but I don't see how this contradicts anything I said.

You're repeating the same underlying assumption, phrased differently. The point was: you can't have information about the system independently of the system. Say you have a system that's in a superposition of two states: $$|\psi\rangle_{\text{measured}} = |0\rangle_{\text{measured}} + |1\rangle_{\text{measured}}.$$ You're saying that you perform a 'measurement' and the state becomes either $|0\rangle$ or $|1\rangle$. How do you measure a system? By interacting with it.

The result of such an interaction, when you model it entirely quantum-mechanically, is an entangled state between the 'measuring' and 'measured' systems. You don't really gain any information from interacting at the quantum level, which is why the Copenhagen Interpretation assumes classical measurement. That assumption is obviously false. In which case you have to ask where this 'collapse' supposedly comes from. That isn't to say it doesn't work; I already said it does. I'm saying it's simply not possible for it to be a true picture of what's going on, since the assumption it's based on is known to be false.
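That entangling interaction can be sketched with a two-state system and a two-state pointer (a toy model, not any specific apparatus): the unitary copies the system bit into the pointer, and by linearity a superposed input comes out entangled rather than as a product state.

```python
import math

# basis ordering for the joint state |system, pointer>: index = 2*system + pointer
inv = 1 / math.sqrt(2)

def interact(system):
    """Unitary 'measurement' interaction: copies the system bit into the
    pointer. `system` is a length-2 amplitude list [a0, a1]; the pointer
    starts in 'ready' (identified here with its 0 state)."""
    joint = [0.0] * 4
    joint[0] = system[0]   # |0, ready> -> |0, saw0>
    joint[3] = system[1]   # |1, ready> -> |1, saw1>
    return joint

joint = interact([inv, inv])   # system starts in (|0> + |1>)/sqrt(2)

# a product state a (x) b has coefficient matrix c[s][p] = a[s]*b[p],
# whose determinant is zero; a nonzero determinant means entanglement
c = [[joint[0], joint[1]], [joint[2], joint[3]]]
det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
assert abs(det - 0.5) < 1e-12   # entangled: not separable into system x pointer
```

If the system starts in a definite state (|0⟩ or |1⟩) the output is a plain product state, so nothing here singles out one outcome; that is exactly the point being made above.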

What I'm talking about is essentially what Steven Weinberg is quoted as saying on Wikipedia's "Copenhagen interpretation" page (http://en.wikipedia.org/wiki/Copenhagen_interpretation).

> The OP has given us a specific example and it really ought to be dealt with by the people who claim the "collapse of the wave function" is not a problem. You have an electron incident on a barrier. Part of the wave is reflected, part is transmitted. After a time, the electron is detected on one side or the other: say, 75% of the time on the reflected side, 25% on the transmitted.
>
> Case 1: The question of where detection occurs is settled at the time of interaction with the barrier OR VERY SHORTLY THEREAFTER. After that it is only our knowledge which is uncertain...the electron will be found with certainty either at one side or the other.
>
> Case 2: Until the very moment of detection, there is a 75:25 possibility that the electron will be found at EITHER of the two locations. It is only after detection occurs at A that the probability at B goes instantaneously to zero: the collapse of the wave function.
>
> Am I understanding correctly that both James Baugh and alxm support Case 1??
Is this really correct? If so, then I am pretty sure I have been taught wrong...
I have always been taught that it is not that we don't know the position of the particle, but that we cannot know the position of the particle (because it doesn't have a well-defined position). You seem to say that the particle does have a well-defined position (either left or right of the wall) but we simply don't know it. Isn't this some kind of hidden-variable theory?

> Thank you conway for trying to stay on track with my original question. However you are not asking the same question I was asking anymore.
>
> This is a rendering of the wave function being partially reflected and transmitted through a barrier:
>
> An initial wave function, call it W, is split into two separate wave functions, call them Wa and Wb, one of which is transmitted and the other reflected.
>
> Case A: Wa and Wb can be treated as separate wave functions; it is possible for Wa to collapse and it does NOT cause collapse of Wb.
>
> Case B: When Wa collapses, Wb simultaneously collapses because they are really still part of the same wave function, regardless of their spatial separation.
>
> The wording on Wikipedia's page seems to indicate support for Case A. So far I have only seen support for Case B in the replies to this thread.
>
> My question: is there any specific evidence or experiment (as opposed to simply quoting theory) supporting one case over the other?
Oh I see, I too misunderstood your question. Good question, I don't know the answer!

Yes, that's a good question.

I'm quite sure the conventional "theory" demands that we go with Case B. But I'm going to go out on a limb and say it's very difficult to carry out the experiment that settles the case conclusively. You set up two detectors and look for simultaneous events, which would indicate that the wave fragments each independently caused a detection event at the same time. The lack of coincidences is supposed to support Case B.

One problem with this experiment is it doesn't rule out what I called Case 1: the outcome being settled at the time of barrier penetration.
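The two-detector test described above can be sketched as a Monte Carlo, using the illustrative 75:25 split from earlier in the thread. In standard QM each trial produces exactly one detection event, so coincidences never occur; and as noted, the single-detector counts alone look the same under Case 1 and Case 2, which is exactly why the counts don't settle that question.

```python
import random

random.seed(1)
trials = 10_000
hits_A = hits_B = coincidences = 0

for _ in range(trials):
    # standard QM: one particle, hence one detection event per trial
    fired_A = random.random() < 0.75   # reflected side
    fired_B = not fired_A              # transmitted side
    hits_A += fired_A
    hits_B += fired_B
    coincidences += fired_A and fired_B

assert coincidences == 0               # the split wave never yields two particles
assert hits_A + hits_B == trials       # every trial detects exactly one particle
assert abs(hits_A / trials - 0.75) < 0.03
```

A "two real fragments" model would predict a nonzero coincidence rate, which is what the experiment would look for; but the 75:25 marginals come out identical either way.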

alxm
> An initial wave function, call it W, is split into two separate wave functions, call them Wa and Wb, one of which is transmitted and the other reflected.
Wrong. It is not split into two wave functions. A single wave function is a single wave function no matter what the spatial distance is.

> My question: is there any specific evidence or experiment (as opposed to simply quoting theory) supporting one case over the other?
Your case B is the correct one. And yes, there's lots of evidence. As in: Every single experiment ever done on an entangled state. An entangled state, in its simplest form, is two particles describable by one wave function, not a sum of the wave functions of two independent particles.

When a measurement (of the entangled property) is performed on one of the particles that the wave function describes, it 'collapses', and it is thereafter separable into two wave functions for two particles that are now independent of each other. And you have knowledge of both.

In the case of a single particle that's either reflected or not, if you insist on looking at it as two wave functions, then it's two wave functions entangled with respect to location. And measuring the existence of the particle at one location or the other will cause both to 'collapse' into whatever that position is. It will no longer have any probability of existing at the other location.

Also, why are you dismissive of "quoting theory"? Because honestly, it doesn't seem you've quite understood the theory yet.

> When a measurement (of the entangled property) is performed on one of the particles that the wave function describes, it 'collapses', and it is thereafter separable into two wave functions for two particles that are now independent of each other. And you have knowledge of both.

Have you considered that the barrier effectively performs a "measurement" on the particle represented by the original wave function, causing it to collapse into two separable wave functions (packets) that are not entangled?

> Wrong. It is not split into two wave functions. A single wave function is a single wave function no matter what the spatial distance is.
Whether you call it a "bimodal wave function" or 2 "unimodal wave functions" is just semantics. You can certainly call it two "wave packets", because the definition of a wave packet is one localized probability density function.

A single particle and its uncertainty can be described by a single wave packet. See the first paragraph of the Wikipedia page and the Wolfram demonstration linked below.
http://en.wikipedia.org/wiki/Wave_packet
http://demonstrations.wolfram.com/WavepacketForAFreeParticle/

A wave packet can be split into two wave packets by a barrier. That is established by the tunneling page I linked and the associated simulation rendering, which makes this quite apparent.

So, we have:
Fact 1: A free particle can be represented by a wave packet.
Fact 2: A wave packet can interact with a barrier to split into two separate wave packets.

By simple logic, it follows directly from these two facts that "a particle may be split into two particles by interaction with a barrier." And hence, these two resulting split wave packets will not be bound to each other.

You can make a small opening on both sides. There is then a small probability per unit time that the particle escapes through either hole. The probability that the escaping particle will be detected at some position will then show an interference pattern, which indicates that you had a single wavefunction.
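The interference check described in that last reply can be sketched numerically, assuming a toy 2D geometry (two openings a distance d apart, a distant screen, and hypothetical values for the wavenumber): adding the complex amplitudes from the two openings, as one wavefunction leaking through both holes, produces fringes with deep minima, while adding the probabilities of two independent sources does not.

```python
import cmath, math

k = 50.0      # wavenumber (hypothetical)
d = 2.0       # separation of the two openings
L = 50.0      # distance to the screen

def amplitude(source_y, screen_y):
    """Outgoing-wave amplitude (2D toy model) from an opening at height
    source_y to a screen point at height screen_y."""
    r = math.hypot(L, screen_y - source_y)
    return cmath.exp(1j * k * r) / math.sqrt(r)

coherent = []     # one wavefunction leaking through both holes: amplitudes add
incoherent = []   # two independent sources: probabilities add instead
for i in range(200):
    y = -5.0 + i * 0.05
    a1 = amplitude(+d / 2, y)
    a2 = amplitude(-d / 2, y)
    coherent.append(abs(a1 + a2) ** 2)
    incoherent.append(abs(a1) ** 2 + abs(a2) ** 2)

# the coherent pattern has deep minima (fringes); the incoherent one does not
assert min(coherent) < 0.2 * min(incoherent)
assert max(coherent) > 1.5 * max(incoherent)
```

Observing those near-zero minima is what would tell you the two escape routes belonged to one wavefunction rather than to two independent particles.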