# Limit of an expression containing an unknown function

1. Nov 26, 2013

### D_Tr

I would like to get an answer or pointers to suitable material, on the following question:

I know that $\int_{-\infty}^{\infty}|f(x)|^2\,dx$ is finite. Can we say that $\lim_{x\to\pm\infty} x\frac{d}{dx}|f(x)|^2 = 0$? Are there any theorems about such limits involving unknown functions with some known properties?

This question arose from Griffiths's introductory QM textbook, which I started studying today (self-study). Here $f(x)$ is $\Psi(x)$, and on page 16 the author takes the above limits to be zero "on the ground that Ψ goes to zero at ±∞". But $\Psi$ may go to zero only asymptotically while $x$ goes to infinity, so we have an indeterminate form, and all we know is that the function's absolute value squared has a finite integral. I posted here, however, because I would like a more general answer about these types of limits, and to at least be aware of the theory that deals with them.

2. Nov 27, 2013

### Simon Bridge

Given: $$I=\int_{-\infty}^\infty |f(x)|^2\;\text{d}x$$ ... if $I$ converges, then $$\lim_{x\rightarrow\pm\infty} x\frac{d}{dx}|f(x)|^2 = 0.$$

(We also know from context that f(x) is a solution to the Schrödinger equation.)

I suspect there are other implied conditions... but I think it pretty much follows from the definition of an integral and the conditions for convergence.

What was the author trying to show by that particular limit?

3. Nov 27, 2013

### D_Tr

Thanks a lot :) The author calculated the time derivative of $\langle x\rangle$, just before introducing the notion of the operator. In one of the steps he eliminated the following term, which appeared as a result of integration by parts:
$$\left[x\frac{d}{dx}|\Psi|^{2}\right]_{-\infty}^{\infty},$$
"on the grounds that Ψ goes to zero at plus and minus infinity". It seems obvious now, but I was a bit confused by the fact that the expression was multiplied by $x$ and that the derivative was written in its expanded form containing the complex $\Psi$ and $\Psi^*$ functions.
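For context, here is my own sketch of the step in question (a reconstruction from the Schrödinger equation, not a quote from the book). One first shows
$$\frac{\partial}{\partial t}|\Psi|^{2}=\frac{i\hbar}{2m}\frac{\partial}{\partial x}\!\left(\Psi^{*}\frac{\partial\Psi}{\partial x}-\frac{\partial\Psi^{*}}{\partial x}\Psi\right),$$
so that
$$\frac{d\langle x\rangle}{dt}=\int_{-\infty}^{\infty} x\,\frac{\partial}{\partial t}|\Psi|^{2}\,dx=\frac{i\hbar}{2m}\int_{-\infty}^{\infty} x\,\frac{\partial}{\partial x}\!\left(\Psi^{*}\frac{\partial\Psi}{\partial x}-\frac{\partial\Psi^{*}}{\partial x}\Psi\right)dx.$$
Integrating by parts then produces a boundary term of the form $\left[x\left(\Psi^{*}\frac{\partial\Psi}{\partial x}-\frac{\partial\Psi^{*}}{\partial x}\Psi\right)\right]_{-\infty}^{\infty}$, which is the expression being discarded at infinity.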

4. Nov 28, 2013

### Simon Bridge

No worries - it can take a while to learn to read these things ;)

5. Nov 29, 2013

### Office_Shredder

Staff Emeritus
This isn't actually true... for example, $|f(x)|^2$ can be the function which is zero everywhere except on the intervals $[n, n+1/n]$, where it bumps up to a value of $1/n$ and then back down to zero. Then the integral is equal to the sum
$$\sum_{n=1}^{\infty} 1/n^2$$
which converges; but when it bumps up, the derivative can be made arbitrarily large, so that limit can fail to be zero even before you multiply by $x$.
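To make the counterexample concrete, here is a small numerical sketch. The triangular "tent" shape of each bump is my own assumption (any bump shape would do): the tent on $[n, n+1/n]$ has height $1/n$, so its area is $\frac{1}{2n^2}$ and the total integral converges, while $x\,\frac{d}{dx}|f(x)|^2$ on the rising edge near $x=n$ is about $2n$, which is unbounded.

```python
import math

# Sketch of the bump counterexample: g(x) = |f(x)|^2 is zero except on
# [n, n + 1/n], where a triangular "tent" rises linearly to height 1/n at
# the midpoint and back down. Each tent has area (1/2)*base*height = 1/(2n^2),
# so the total integral converges (bounded by sum 1/(2n^2) = pi^2/12), yet the
# slope of the rising edge is (1/n)/(1/(2n)) = 2 for every n, so near x = n
# we get x * g'(x) ~ 2n, which grows without bound.

def tent_area(n):
    """Area under the n-th tent: (1/2) * base (1/n) * height (1/n)."""
    return 0.5 * (1.0 / n) * (1.0 / n)

def tent_slope(n):
    """Slope of the rising edge of the n-th tent: height / half-base."""
    return (1.0 / n) / (1.0 / (2.0 * n))

def x_times_derivative(n):
    """Value of x * g'(x) on the rising edge near x = n."""
    return n * tent_slope(n)

total = sum(tent_area(n) for n in range(1, 100000))
print(total)  # approaches pi^2/12, about 0.8225
print([x_times_derivative(n) for n in (1, 10, 1000)])  # 2, 20, 2000: unbounded
```

So the integral stays finite while $x\,g'(x)$ along the bumps diverges, exactly as described above.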

Probably the condition that $f(x)$ is supposed to satisfy here is something like: $|f(x)|^2$ is convex once $x$ is large enough (equivalently, $|f(x)|^2$ is decreasing for large enough values of $x$). If $|f(x)|^2$ decreases down to zero and you don't have those bumps, then it has to die faster than $1/x$ to be integrable, so the derivative will look like $1/x^2$ or something smaller. (This is a handwavy argument, but it can probably be made rigorous.)
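One standard way to make the decreasing case rigorous (whether this is exactly the condition the book intends is my assumption): if $g(x)=|f(x)|^2$ is nonnegative, decreasing for large $x$, and integrable, then
$$\frac{x}{2}\,g(x)\;\le\;\int_{x/2}^{x} g(t)\,dt\;\xrightarrow{\;x\to\infty\;}\;0,$$
since the tail of a convergent integral vanishes; hence $x\,g(x)\to 0$. For the derivative one does need the convexity: if $g$ is also convex, then $g'$ is increasing, and the mean value theorem on $[x/2,\,x]$ gives
$$|g'(x)|\;\le\;\frac{g(x/2)-g(x)}{x/2}\;\le\;\frac{2\,g(x/2)}{x},$$
so $x\,|g'(x)|\le 2\,g(x/2)\to 0$. Monotone decay alone does not control $g'$, as the bump example shows.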

6. Nov 29, 2013

### Simon Bridge

I had thought of that ...
Does it matter that the derivative (per your example) could be arbitrarily large in the intervals where $f$ is non-zero?
The limit is taken arbitrarily far from those intervals, where the derivative is zero.