Inner product of functions of a continuous variable

In summary, the book says that an infinite set of orthogonal basis vectors (of infinite dimension) can be used to represent functions of a continuous variable. My question is: is it okay to assume this?
  • #1
Ibraheem
I am new to quantum mechanics and I have recently been reading Shankar's book. It was all fine until I reached the idea of representing functions of a continuous variable as kets, for example |f(x)>. The book simply takes the definition of the inner product from the discrete case and redefines it as an integral:

For example, take f(x) and g(x) defined on [0, L], where both functions are zero at the endpoints. The inner product is
##<f(x)|g(x)> =\int_{0}^{L} f(x)g(x)dx ##
The book says that these functions are represented in an infinite orthogonal basis of infinite dimension. My question: is it okay to assume that these functions can be represented by the following infinite-dimensional column vectors?
##|f(x)> \longrightarrow \begin{bmatrix} f(0) \\ f(x_1) \\ \vdots \\ f(x_{k}) \\ \vdots \\ f(L)\end{bmatrix}## and ##|g(x)> \longrightarrow \begin{bmatrix} g(0) \\ g(x_1) \\ \vdots \\ g(x_{k}) \\ \vdots \\ g(L)\end{bmatrix}##
so that, in terms of the infinite set of basis kets:
##|f(x)>=f(0)|0> + f(x_1)|x_1>+\dots+f(x_{k})|x_{k}>+\dots+f(L)|L>##
##|g(x)>=g(0)|0> + g(x_1)|x_1>+\dots+g(x_{k})|x_{k}>+\dots+g(L)|L>##
so from this (which I am not sure is right) can we write the integral as
##<f(x)|g(x)> =\int_{0}^{L} f(x)g(x)dx##
##=(f(0)<0| + f(x_1)<x_1|+\dots+f(x_{k})<x_{k}|+\dots+f(L)<L|)(g(0)|0> + g(x_1)|x_1>+\dots+g(x_{k})|x_{k}>+\dots+g(L)|L>)##
and, since the basis kets are orthogonal, I am assuming it is possible to express the above as
##= f(0)g(0)<0|0> + f(x_1)g(x_1)<x_1|x_1>+\dots+f(x_{k})g(x_k)<x_k|x_k>+\dots+f(L)g(L)<L|L>##
The question is: is it possible to assume that ##x_k = k\,dx##, where k is an integer 1, 2, 3, ..., and loosely express the integral above as
##<f(x)|g(x)> =\int_{0}^{L} f(x)g(x)dx = f(0)g(0)dx+f(dx)g(dx)dx+\dots+f(x_k)g(x_k)dx+\dots+f(L)g(L)dx##
which I think implies that
##<x_k|x_k> = dx##
which is not what the Dirac delta function suggests!
The page where this is discussed is page 59 of Shankar's Principles of Quantum Mechanics. Could someone please explain to me why this is wrong?
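As a quick numerical sanity check of the sum above (my own sketch - the functions and the interval are arbitrary choices, not from the book), the Riemann sum with a dx attached to each term does reproduce the integral:

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary test functions on [0, L] that vanish at both endpoints.
L = 1.0
f = lambda x: np.sin(np.pi * x / L)
g = lambda x: x * (L - x)

# Riemann-sum version of <f|g>: f(0)g(0)dx + f(dx)g(dx)dx + ... + f(L-dx)g(L-dx)dx
n = 100_000
dx = L / n
x = np.linspace(0.0, L, n, endpoint=False)   # grid points 0, dx, 2*dx, ...
riemann_sum = np.sum(f(x) * g(x)) * dx

# High-accuracy quadrature of the same integral for comparison.
exact, _ = quad(lambda t: f(t) * g(t), 0.0, L)
print(riemann_sum, exact)   # the two numbers agree to several digits
```

So the sum itself is fine; the question is where the factor of dx "belongs" - in the expansion coefficients, or in ##<x_k|x_k>##.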
 
  • #2
The interval ##[0,L]## (or any real interval of nonzero length) is an uncountably infinite set. You cannot make a list of all the numbers that belong to that interval, as your notation suggests. In numerical schemes where partial differential equations (like Schrödinger's equation) are solved, it is common to replace a function of a real variable with a discrete set of numbers, but that is only an approximate representation of the wave function.
 
  • #3
Okay, but what if I partition the interval into small subintervals of length ##\Delta=\frac{L}{n+1}## and approximate the two functions by discrete functions, call them ##f_n(x)## and ##g_n(x)##, evaluated at the right end of each subinterval, i.e. ##x_i = i\Delta##? Then the inner product of these two discrete functions is
##<f_n(x)|g_n(x)>=\sum_{i=1}^{n} f_n(x_i) g_n(x_i)\Delta##
If n is large but not infinite, so that there is a countably large number of basis kets of large dimension, would that not at least suggest that
##<x_i|x_i>## gets smaller and smaller?
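For instance, here is a quick sketch of that sum for increasing n (my own, with arbitrarily chosen test functions):

```python
import numpy as np

# Discretized inner product  <f_n|g_n> = sum_{i=1}^{n} f(x_i) g(x_i) * Delta,
# with Delta = L/(n+1) and x_i = i*Delta.
L = 1.0
f = lambda x: np.sin(np.pi * x / L)
g = lambda x: x * (L - x)

for n in (10, 100, 1_000, 10_000):
    delta = L / (n + 1)
    x = delta * np.arange(1, n + 1)          # interior points x_1, ..., x_n
    print(n, delta, np.sum(f(x) * g(x)) * delta)
# The sum settles down to the integral of f*g over [0, L], while Delta itself shrinks to 0.
```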
 
  • #4
Ibraheem said:
Okay, but what if I partition the interval into small subintervals of length ##\Delta=\frac{L}{n+1}## and approximate the two functions by discrete functions, call them ##f_n(x)## and ##g_n(x)##, evaluated at the right end of each subinterval, i.e. ##x_i = i\Delta##? Then the inner product of these two discrete functions is
##<f_n(x)|g_n(x)>=\sum_{i=1}^{n} f_n(x_i) g_n(x_i)\Delta##
If n is large but not infinite, so that there is a countably large number of basis kets of large dimension, would that not at least suggest that
##<x_i|x_i>## gets smaller and smaller?

Yes - well mostly anyway.

Your use of Δ could be explained better.

Let's look at a real example and show intuitively how the Dirac delta comes about.

Let's have a fine lattice of positions xi, with <xi|xj> = δij, where δij, the Kronecker delta, is 0 except when i = j, where it is 1. We also have Σ <xi|xj> = 1, where you can take the sum over i or j, but here WLOG (math lingo for "without loss of generality" - my first-year linear algebra teacher was big on that in his proofs) I will take it over j.

Now we make the lattice so fine it goes to a continuum, and here is how the Δ comes into it. If Δ is the spacing between the lattice points, then you define the now non-normalized x'i = xi/√Δ. Note the square root - it's critical. So you have <x'i|x'j> Δ = δij and Σ <x'i|x'j> Δ = 1. Now we will define Δx'j = Δ, just so things look nicer later, and you have Σ <x'i|x'j> Δx'j = 1. As the lattice goes to a continuum, Δx'j → 0, the ∑ goes to ∫, xi → x, xj → x', <x'i| → <x| and |x'j> → |x'>.

So, since we used j as our summation variable, you have ∫<x|x'> dx' = 1. Note that the <x| now have infinite norm and do not reside in a Hilbert space. To make this rigorous you need to go to what are called Rigged Hilbert Spaces (RHSs), but before embarking on that journey, study distribution theory:
https://www.amazon.com/dp/0521558905/?tag=pfamazon01-20

Getting back to ∫<x|x'> dx' = 1: this is the normalization condition satisfied by the Dirac delta function δ(x-x') that so bedeviled John von Neumann - he could never sort it all out. In fact it took the combined efforts of three of the greatest mathematicians of the 20th century - Grothendieck (equally as great a mathematician as von Neumann, though not the polymath von Neumann was), Schwartz and Gelfand - look them all up for some interesting reading.

So you get ∫<x|x'> dx' = ∫δ(x-x') dx' = 1 and <x|x'> = δ(x-x').

This is trickier than the usual inner product, but with just a bit more work I think you can handle the easier case where, in the continuum limit, you end up with a well-behaved ordinary function f(x) - |x> itself is pathological, but if you work with ∑ fi|xi> things are not as hairy.

Here is the outline: ∑∑ fi* gj <xi|xj> = ∑ fi* gi = ∑ f'i* g'i Δ (note the square-root trick in defining f'i and g'i, and I have used the fact that <xi|xj> is the Kronecker delta). Now, as Δ → 0, you can see it becomes ∫ f(x) g(x) dx. With what I wrote above you can fill in the missing steps.
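If a concrete check helps, here is a small finite-lattice sketch of the rescaling above (my own illustration - the lattice size and test functions are arbitrary, not from any text):

```python
import numpy as np

# Finite-lattice picture of the rescaled kets |x'_i> = |x_i>/sqrt(Delta).
L, n = 1.0, 1000
delta = L / n
x = delta * (np.arange(n) + 0.5)              # lattice points

kets = np.eye(n)                              # orthonormal |x_i>: <x_i|x_j> = Kronecker delta
kets_p = kets / np.sqrt(delta)                # rescaled |x'_i> (note the square root)

i = n // 2
print(kets_p[:, i] @ kets_p[:, i])            # <x'_i|x'_i> = 1/delta, blows up as delta -> 0
total = delta * sum(kets_p[:, i] @ kets_p[:, j] for j in range(n))
print(total)                                  # sum_j <x'_i|x'_j> Delta = 1, the discrete analogue of ∫<x|x'> dx' = 1

# Coefficients: f'_i = f(x_i), g'_i = g(x_i), so sum_i f'_i g'_i Delta -> ∫ f g dx.
fvals = np.sin(np.pi * x / L)
gvals = np.sin(np.pi * x / L)
print(np.sum(fvals * gvals) * delta)          # -> L/2 = 0.5
```

The first printout grows like 1/Δ (the finite-lattice shadow of δ(0)), while the second stays pinned at 1 - exactly the trade-off behind <x|x'> = δ(x-x').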

Thanks
Bill
 
  • #5
Ibraheem said:
##<x_i|x_i>## gets smaller and smaller?

On the contrary - see my reply - it becomes infinite.

Don't get too worried though - if von Neumann couldn't work it all out (of course he worked that bit out - it's nothing to a guy like him - but the full solution, now that's another matter), then it's no wonder a normal person like you or me can get confused. It's tricky, and its full solution is even trickier, with some very advanced functional analysis thrown in.

But please do read the book on distribution theory I linked to - all physicists and applied mathematicians, hell, even pure mathematicians (just kidding - the ones I know are great guys), should know it.

Thanks
Bill
 
  • #6
Thank you so much for taking the time to reply and explain this, but I have one question. When you used Δx'j to define Δx'j = Δ, did you mean Δx'j as the difference in x'j? And could you clarify what you mean by a lattice? Also here:
bhobba said:
If Δ is the spacing between the lattice points, then you define the now non-normalized x'i = xi/√Δ
did you mean |x'i> = (1/√Δ)|xi>? Because if it were literally x'i = xi/√Δ, wouldn't that just mean it is |xi/√Δ>, where <xi/√Δ|xi/√Δ> = 1, so it is just another basis vector? Again, thank you for the explanation.
 
  • #7
Assuming you meant |x'i> = (1/√Δ)|xi>, that means you changed the basis to a new basis corresponding to each discrete step xi. I understand the first part - why <x'i|x'j> becomes the delta function and how these new basis vectors acquire infinite length, mainly because 1/√Δ goes to infinity. However, I am still confused about how this is applied to the inner product of two functions. Could you please elaborate on your notation for fi and gi: in which basis are they represented, |xi> or |x'i>?
bhobba said:
Here is the outline: ∑∑ fi* gj <xi|xj> = ∑ fi* gi = ∑ f'i* g'i Δ
Correct me if I am wrong: are f'i and g'i equal to fi/√Δ and gi/√Δ, respectively? Wouldn't that mean that when Δ goes to zero the sum ∑ f'i* g'i Δ goes to infinity, since it is basically ∑ fi* gi, which goes to infinity - meaning we are back to square one?
 
  • #8
Ibraheem said:
Thank you so much for taking the time to reply and explain this, but I have one question. When you used Δx'j to define Δx'j = Δ,

The easiest way to think of it is as a digital readout of position where the number of digits goes to infinity, so that you can think of it as a continuum. That is what I mean by a lattice. The spacing between different readout values is the same, and I call it Δx; as the number of digits increases, Δx → 0. Of course the largest number also increases, i.e. not just the number of decimal places, so it goes to ±∞.
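As a toy version of that picture (my own, nothing deep in the numbers), rounding a position to k decimal digits puts it on exactly such a lattice:

```python
# Keeping k decimal digits puts positions on a lattice with spacing 10**(-k).
true_x = 0.123456789
for k in (1, 2, 3, 4, 5):
    spacing = 10.0 ** (-k)
    readout = round(true_x, k)   # the position as a k-digit readout would show it
    print(k, spacing, readout)
# As k grows, the spacing (my Delta x) -> 0 and the readout approaches the true position.
```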

I could say more but I prefer you to think about it a bit.

Thanks
Bill
 

1. What is an inner product of functions of a continuous variable?

The inner product of functions of a continuous variable is a mathematical operation that takes two functions as input and produces a scalar value as output. It is used to measure the similarity or orthogonality of two functions.

2. How is the inner product of functions of a continuous variable calculated?

The inner product of two functions f and g is calculated as an integral over a specified interval. It is written 〈f, g〉 and is equal to the integral of f(x)g(x)dx over that interval.
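For a concrete illustration (the functions and the interval below are arbitrary choices), the integral can be computed symbolically:

```python
import sympy as sp

# Two example functions on [0, L].
x, L = sp.symbols('x L', positive=True)
f = sp.sin(sp.pi * x / L)
g = sp.sin(2 * sp.pi * x / L)

print(sp.integrate(f * g, (x, 0, L)))   # 0   -> f and g are orthogonal on [0, L]
print(sp.integrate(f * f, (x, 0, L)))   # L/2 -> <f, f> is positive for a nonzero function
```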

3. What properties does the inner product of functions of a continuous variable have?

The inner product has several important properties, including linearity, symmetry, and positive definiteness. Linearity means that 〈f, a·g + b·h〉 = a〈f, g〉 + b〈f, h〉 for scalars a and b. Symmetry (for real-valued functions) means that 〈f, g〉 = 〈g, f〉, i.e. the result does not depend on the order of the two functions; for complex-valued functions this becomes conjugate symmetry. Positive definiteness means that the inner product of a function with itself is always positive unless the function is identically zero.
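These properties are easy to check numerically for any particular choice of functions (the ones below are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# <f, g> = ∫_0^1 f(t) g(t) dt for real-valued functions on [0, 1].
inner = lambda f, g: quad(lambda t: f(t) * g(t), 0.0, 1.0)[0]

f = lambda t: np.sin(np.pi * t)
g = lambda t: t * (1 - t)
h = lambda t: np.exp(-t)
a, b = 2.0, -3.0

# Linearity: <f, a*g + b*h> = a<f, g> + b<f, h>
print(inner(f, lambda t: a * g(t) + b * h(t)), a * inner(f, g) + b * inner(f, h))
# Symmetry: <f, g> = <g, f>
print(inner(f, g), inner(g, f))
# Positive definiteness: <f, f> > 0 for a nonzero f
print(inner(f, f))
```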

4. What is the significance of the inner product of functions of a continuous variable?

The inner product is a useful tool in mathematical analysis and is used in many areas of science and engineering. It allows us to measure the similarity or difference between two functions, and can be used to define important concepts like orthogonality and projection.

5. Can the inner product of functions of a continuous variable be extended to functions of multiple variables?

Yes, the inner product can be extended to functions of several variables. It is calculated in a similar way, using a multiple integral over a specified region, and it has the same properties as the inner product of functions of a single continuous variable.
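For example (again with arbitrarily chosen functions), a two-variable inner product is just a double integral over the region:

```python
import numpy as np
from scipy.integrate import dblquad

# <f, g> = ∫∫ f(x, y) g(x, y) dx dy over the unit square [0, 1] x [0, 1].
f = lambda x, y: np.sin(np.pi * x) * np.sin(np.pi * y)
g = lambda x, y: x * y

# dblquad integrates func(y, x) over x in [a, b] and y in [gfun(x), hfun(x)].
value, _ = dblquad(lambda y, x: f(x, y) * g(x, y),
                   0.0, 1.0, lambda x: 0.0, lambda x: 1.0)
print(value)   # (1/pi)**2 ≈ 0.101: the product of two one-dimensional integrals
```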
