Matrix Notation for the Potential in the Schrödinger Equation


Discussion Overview

The discussion revolves around the matrix notation for potential in the context of the time-dependent Schrödinger equation (TDSE). Participants explore the representation of potentials as matrix elements and seek examples of these representations, particularly in relation to transition amplitudes between energy eigenstates.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant seeks clarification on the form of the potential matrix element ##V_{nk} \equiv \langle n|\hat{V}(t)|k\rangle##, specifically asking for examples beyond the quantum harmonic oscillator and particle-in-a-box scenarios.
  • Another participant suggests considering an infinite well with a non-trivial potential function ##V(x)##, explaining how to derive the components of the potential matrix in a discrete basis.
  • There is a discussion about the implications of choosing eigenfunctions that are not the actual eigenfunctions of the system, leading to non-diagonal entries in the potential matrix ##V_{nm}##.
  • Participants mention the photoeffect as an example of applying first-order perturbation theory, although the details of this example are not fully explored in the thread.
  • One participant expresses confusion about the non-time-dependent amplitude related to the perturbation potential and its role in mixing unperturbed eigenstates.
  • Another participant discusses the arbitrary nature of the choice of eigenfunctions and emphasizes the importance of the underlying Hilbert space and operator algebra.

Areas of Agreement / Disagreement

Participants generally agree on the importance of the matrix representation of potentials and the implications of using specific eigenfunctions. However, there is no consensus on the best examples or methods for representing the potential matrix, and some aspects of the discussion remain unresolved.

Contextual Notes

Participants note that the potential does not need to be analytic, only integrable, and that the discussion involves various mathematical perspectives that may not be fully reconciled.

skynelson
Messages
57
Reaction score
4
I'm working on the time-dependent Schrödinger equation, and come across something I don't understand regarding notation, which is not specific to TDSE but the Schrödinger formalism in general. Let's say we have a non-trivial potential. There is a stage in the development of the TDSE where we write the coefficient for the ## n##th energy eigenstate as
$$ \frac{\partial c_n(t)}{\partial t} = \frac{-i}{\hbar} \sum_k c_k(t) \langle n|\hat{V}(t)|k\rangle e^{i\omega_{nk}t},$$
in other words, the time dependence of the ##n##th coefficient depends on each of the other ##k## coefficients as well as the potential matrix element, ##\langle n|\hat{V}(t)|k\rangle##.

My question is about the potential, ##V_{nk} \equiv \langle n|\hat{V}(t)|k\rangle##. This is a matrix element representing a transition amplitude between the ##k##th and ##n##th energy eigenstates.

Can somebody please provide an example of the form this would explicitly take? It seems there are notoriously few potentials that are analytically solvable in the Schrödinger equation, so I am having trouble understanding what ##V_{nk}## would look like. I guess you could say I am unclear on what the ##\hat{V}## matrix looks like.

One example I found is for the potential $$\hat{V}(t)=2 \hat{V} \cos(\omega t).$$
However, this doesn't help, since it is only explicit about the time dependence. (E.g. the result is ##\langle n|\hat{V}(t)|k\rangle = 2 V_{nk} \cos(\omega t)##, but I want to have an example of ##V_{nk}## itself.)

Thank you!
P.S. I have seen this worked out for the example of the quantum harmonic oscillator energy eigenstates, using creation and annihilation operators, but that formalism is so unique that I think it would be helpful to see another example as well.
P.P.S. The particle-in-a-box, again, seems not so helpful, since the form of the potential is non-analytical, just step functions.
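The coupled coefficient equations above can be integrated numerically once ##V_{nk}## is given. Below is a minimal illustrative sketch (toy numbers, not from the thread) for a two-level system driven at resonance by the ##\hat{V}(t)=2\hat{V}\cos(\omega t)## potential mentioned above, with an assumed constant matrix element ##V_{12}=V_{21}=v## and ##\hbar=1##:

```python
import numpy as np

# Coupled TDSE coefficient equations for two levels (hbar = 1):
#   dc1/dt = -i * c2 * V12(t) * exp(-i*w0*t)
#   dc2/dt = -i * c1 * V21(t) * exp(+i*w0*t)
# with V12(t) = V21(t) = 2*v*cos(w*t); v and w0 are illustrative values.
v, w0 = 0.05, 10.0           # coupling strength and level splitting (assumed)
w = w0                       # drive exactly on resonance

def rhs(t, c):
    c1, c2 = c
    V = 2.0 * v * np.cos(w * t)
    return np.array([-1j * c2 * V * np.exp(-1j * w0 * t),
                     -1j * c1 * V * np.exp(+1j * w0 * t)])

def rk4_step(c, t, dt):
    # One classical Runge-Kutta step for the complex coefficient vector
    k1 = rhs(t, c)
    k2 = rhs(t + dt / 2, c + dt / 2 * k1)
    k3 = rhs(t + dt / 2, c + dt / 2 * k2)
    k4 = rhs(t + dt, c + dt * k3)
    return c + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

c = np.array([1.0 + 0j, 0.0 + 0j])       # start in state |1>
t, dt, t_end = 0.0, 1e-3, np.pi / (2 * v)  # integrate over half a Rabi flop
while t < t_end:
    c = rk4_step(c, t, dt)
    t += dt
# In the rotating-wave approximation |c2|^2 ~ sin^2(v*t), so it is near 1 here.
```

Since ##\hat{V}## is Hermitian, the exact equations conserve ##|c_1|^2+|c_2|^2##, which is a useful sanity check on the integrator; at resonance the population simply oscillates between the two states (Rabi oscillations).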
 
I'm not sure if this directly addresses your question, but consider the example of an infinite well, ##V=\infty## for ##x<0, x>a##, that is not a square well: inside it you have a non-trivial potential function ##V(x)##. Since the well has finite width, you may naturally choose the discrete basis of square-well eigenfunctions, ##\psi_n(x) =\sqrt{2/a}\sin( n\pi x/a)##. The components of the potential matrix are not transition amplitudes per se. They are just, literally, the components of the potential matrix in this basis.

So now to find those components in this basis you would integrate:

$$V_{nm} = \int_{0}^{a}dx \int_{0}^{a}dy \left[\psi_n^*(x)\delta(x-y)V(y)\psi_m(y)\right] = \int_{0}^{a} \psi_n^*(x)V(x)\psi_m(x)\, dx$$

The delta function occurs because the potential is local in position space. Other operators may not be, and would necessarily be represented by functions of two variables (as with, for example, Green's functions). Viewed another way, I am taking the two-variable discrete Fourier transform of the potential treated as a functional operator (with the delta-function factor added).

Now I'm not being careful here about the "picture" I'm using. I leave it to you to sort that out but I hope this addresses your specific question. Note the potential need not be analytic, only integrable. Note also that it is in this context that you may try to diagonalize the potential. It is "hyper-diagonalized" in the position representation but that's not using a discrete basis. In the end this part is all linear algebra.
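To make the integral above concrete, here is a small numerical sketch (my own, with an assumed linear ramp ##V(x)=\lambda x## inside the well and ##a=1##) that evaluates ##V_{nm}## on a grid:

```python
import numpy as np

a, lam, N = 1.0, 1.0, 4        # well width, ramp strength, basis size (assumed)
x = np.linspace(0.0, a, 100001)
h = x[1] - x[0]

def psi(n):
    """Square-well basis function psi_n(x) = sqrt(2/a) sin(n pi x / a)."""
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

def trapz(f):
    """Trapezoidal rule on the fixed grid."""
    return (f[:-1] + f[1:]).sum() * h / 2.0

V_of_x = lam * x               # the non-trivial potential inside the well
Vmat = np.array([[trapz(psi(n) * V_of_x * psi(m))
                  for m in range(1, N + 1)]
                 for n in range(1, N + 1)])
# For this ramp the diagonal entries are lam*a/2, and off-diagonal entries
# with n+m even vanish by symmetry about the well's center.
```

The resulting matrix is real and symmetric, as it must be for a Hermitian ##\hat{V}## in a real basis, and its off-diagonal entries are exactly the ##V_{nm}## that mix the square-well states.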
 
Thanks for the very clear description, jambaugh.
So it looks like we picked eigenfunctions of a convenient scenario (the square well) even though they weren't eigenfunctions of the actual potential, which included the square well plus a perturbation ##V(x)##. Right? And it's because of that choice that we end up with a non-trivial ##V_{nm}##. The fact that there are non-diagonal entries ##V_{nm}## indicates that the functions we chose are not the actual eigenfunctions of the system. And this scheme, with ##V_{nm}##, is an approximation scheme for getting closer to the true eigenfunctions.
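That last point can be checked directly: build ##H_{nm} = E_n\delta_{nm} + V_{nm}## in the square-well basis and diagonalize the truncated matrix; its eigenvalues approximate the true energies. A sketch (my own, assuming the ramp ##V(x)=\lambda x## with ##\hbar=m=a=1##, for which the ##V_{nm}## integrals have the closed form used below):

```python
import numpy as np

lam, N = 0.5, 8                # ramp strength and truncated basis size (assumed)

def V_elem(n, m):
    """Closed-form <n| lam*x |m> in the a = 1 square-well basis."""
    if n == m:
        return lam / 2.0
    if (n + m) % 2 == 0:       # same parity: the integral vanishes
        return 0.0
    return (2.0 * lam / np.pi**2) * (1.0 / (n + m)**2 - 1.0 / (n - m)**2)

# H_nm = E_n delta_nm + V_nm, with E_n = n^2 pi^2 / 2 (hbar = mass = 1)
H = np.diag([n**2 * np.pi**2 / 2.0 for n in range(1, N + 1)])
H += np.array([[V_elem(n, m) for m in range(1, N + 1)]
               for n in range(1, N + 1)])

evals = np.linalg.eigvalsh(H)
# First-order perturbation theory predicts E_1 + lam/2 for the ground state;
# the off-diagonal V_nm push it slightly lower (the second-order correction).
```

With the off-diagonal ##V_{nm}## set to zero the eigenvalues would just be the first-order shifted energies; including them reproduces the higher-order mixing of the unperturbed states, which is exactly the role they play in the perturbation series.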

And thanks for the link vanhees71.
 
vanhees71 said:
An example (applying 1st-order perturbation theory) is the photoeffect (transition of an atom from a bound to a continuum/scattering state under absorption of a photon):

https://www.physicsforums.com/insights/sins-physics-didactics/
Thanks Vanhees71, it's great to see this example worked out, fascinating.
Can you tell me more about this quantity,
$$\alpha \equiv \vec{A}_0 \cdot \langle E,t_0|\hat{\vec{p}}(t_0)|E, t_0\rangle$$
The non-time-dependent amplitude is the part I am having trouble gaining an intuition for. I understand that this amplitude tells how much the perturbation potential mixes the unperturbed eigenstates together. In your example here, can it be written out explicitly? It is usually just shoved to the side in favor of the more interesting time-dependent part of the calculation.
 
I would add that the choice of eigenfunctions for the square well is, as you (skynelson) say, a "convenient" choice. It is also arbitrary from a mathematical perspective. The thing that is primary and fundamental is the Hilbert space. (Actually it is the operator algebra over that Hilbert space, but from each we can construct the other, so... chicken? or egg?)

The functions of position, here the wave-functions, are just a collection of information about the vectors in that space. You can view ##\psi(x)## as a parameterized set of linear functionals from these vectors to ##\mathbb{C}##. (The Dirac delta "function" is a representative of these functionals in a formalism where we pretend the Riesz representation theorem still holds. Note that the Dirac delta function only has operational meaning when it occurs inside an integral.)
[edit] In mathematicese:
$$\mathrm{Eval}_x[\psi] = \int_{-\infty}^{\infty} \delta(x-\xi)\psi(\xi)\, d\xi$$
(realize that this defines ##\delta## and not the ##\mathrm{Eval}## functional.) [end edit]

Thinking in these terms, purely (albeit advanced) linear-algebraic terms, makes the resolution much more apparent. It took me a while to understand this point but it has made understanding the physics much easier.
 
skynelson said:
Thanks Vanhees71, it's great to see this example worked out, fascinating.
Can you tell me more about this quantity,
$$\alpha \equiv \vec{A}_0 \cdot \langle E,t_0|\hat{\vec{p}}(t_0)|E, t_0\rangle$$
The non-time-dependent amplitude is the part I am having trouble gaining an intuition for. I understand that this amplitude tells how much the perturbation potential mixes the unperturbed eigenstates together. In your example here, can it be written out explicitly? It is usually just shoved to the side in favor of the more interesting time-dependent part of the calculation.
You mean the diagonal elements of the matrix. This is the 1st-order perturbative contribution to an overall phase factor. In terms of Feynman diagrams it's a "disconnected vacuum contribution" which cancels by the normalization.
 
