How to Handle Expectation Value of 1/(1+x) in Quantum Mechanics?


Discussion Overview

The discussion revolves around the expectation value of the operator ##f(x) = 1/(1+x)## in quantum mechanics, particularly how to derive its expression in position representation. Participants explore the implications of expanding operators in power series and the convergence of such expansions, as well as the mathematical treatment of operators acting on position eigenstates.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant describes the process of deriving the integral expression for the inner product involving the operator ##f## by using completeness relations of position eigenkets.
  • Another participant questions the necessity of expanding ##f## in a power series, suggesting that such expansions may not yield well-defined expressions for all functions.
  • There is a discussion about the mathematical justification for changing the operator ##\hat{x}## to a number ##x## when evaluating the action of ##f(\hat{x})## on position eigenkets.
  • Participants express uncertainty about the convergence of the power series for the operator ##f(x) = 1/(1+x)## and its implications for the expectation value.
  • Some participants propose that the operator ##1/(1+x)## retains the same eigenkets as the operator ##x##, while others seek clarification on the conditions under which such expansions are valid.
  • There is mention of the non-trivial nature of position eigenvectors and the challenges they present in a rigorous mathematical treatment.
  • One participant suggests that the expansion of operators into power series is a common technique, comparing it to the treatment of spin rotation operators.

Areas of Agreement / Disagreement

The discussion contains multiple competing views regarding the validity and implications of expanding the operator ##f(x)## in power series. Participants express differing opinions on the convergence of such expansions and the mathematical treatment of operators in quantum mechanics, indicating that the discussion remains unresolved.

Contextual Notes

Participants highlight limitations related to the convergence of power series and the mathematical rigor required for treating operators acting on position eigenstates. The discussion reflects a range of assumptions and conditions that may affect the validity of the proposed approaches.

blue_leaf77
Science Advisor
I have an inner product ##\langle \alpha|f| \beta \rangle## where ##f## is an operator that is a function of the (1D) position operator ##x##. According to the book I read (and I'm sure any other book as well), that inner product can be written in position representation as ##\int u^*_{\alpha}(x) f(x) u_{\beta}(x) dx##. The book doesn't discuss how to get this integral expression, but I can guess how it's done, namely by sandwiching the operator ##f## between two completeness relations of position eigenkets. In doing so we get ##f## sandwiched between a ket and a bra of ##x##, and since ##f## is a function of the ##x## operator, it can be expanded in a power series in ##x##. This way we get a Dirac delta ##\delta(x'-x'')## which simplifies everything to the integral above. But my question is: what if ##f## is, for example, ##1/(1+x)##, whose power series only converges between ##x=-1## and ##1##? What happens to the integral over all space in the expectation value of ##f(x)##? Or is my argument of expanding ##f## into a power series wrong?
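To make the convergence worry concrete, here is a quick numerical sketch (my own illustration, not from the thread; the grid, the Gaussian packets, and the helper names are all invented for the example). It compares the direct integral ##\int |u(x)|^2/(1+x)\,dx## with partial sums of the termwise-integrated power series ##\sum_n \langle (-x)^n \rangle## for two wave packets:

```python
import numpy as np

# Grid avoiding the pole of 1/(1+x) at x = -1.
x = np.linspace(-0.999, 5.0, 200001)
dx = x[1] - x[0]

def gaussian(x, x0, sigma):
    """Normalized position-space wavefunction u(x)."""
    return (np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (2 * sigma**2))

def expval_direct(u):
    """<f> = integral of |u(x)|^2 / (1 + x) dx, evaluated on the grid."""
    return np.sum(np.abs(u) ** 2 / (1.0 + x)) * dx

def expval_series(u, n_terms):
    """Partial sum of the termwise series sum_n <(-x)^n>, which is only
    justified if the packet lives inside |x| < 1."""
    return sum(np.sum(np.abs(u) ** 2 * (-x) ** n) * dx for n in range(n_terms))

# Narrow packet well inside |x| < 1: series and direct integral agree.
u_narrow = gaussian(x, 0.0, 0.1)
print(expval_direct(u_narrow), expval_series(u_narrow, 40))

# Packet with most of its weight at x > 1: the partial sums blow up,
# even though the direct integral is perfectly finite.
u_wide = gaussian(x, 2.0, 0.3)
print(expval_direct(u_wide), expval_series(u_wide, 40))
```

The termwise expansion fails precisely because the second wavefunction has support where the series for ##1/(1+x)## diverges, while the direct integral stays well defined as long as ##u## vanishes fast enough near ##x=-1##.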
 
Why do you want to expand f in a power series of x? That expansion happens in terms of different values of x, not in terms of functions ##x^n##.

(I'm not sure if f(x)=1/(1+x) would give a well-defined expression here but there are other functions where the power series does not converge to the function).
 
mfb said:
Why do you want to expand f in a power series of x?
So what I want to do is to get ##\int u^*_{\alpha}(x) f(x) u_{\beta}(x) dx## from ##\langle \alpha |\hat{f}(\hat{x})| \beta \rangle##, where the hat denotes an operator. In my mind, the way to get the integral expression is to first insert two copies of ##\int dx |x\rangle \langle x|##, one to the left and one to the right of ##\hat{f}##. This way I get the matrix element ##\langle x' |\hat{f} | x'' \rangle## in my expression. Now I can simplify further if I expand ##\hat{f}## in powers of the ##x## operator and then let the operators ##\hat{x}^n## act on ##|x''\rangle##, and I end up with the integral expression. Is my way correct? So in short, I want to derive the expression for an expectation value in terms of wavefunctions from the one in terms of kets and bras.
mfb said:
That expansion happens in terms of different values of x, not in terms of functions xn.
Why in terms of different values of x? If I expand a function in a Taylor series I get terms with powers of x. Did you misunderstand it as an expansion in the basis ##|x\rangle##? If so, that's not what I meant; f is an operator, not a state.
 
##\langle x' |\hat{f} | x'' \rangle## looks like ##\delta(x'-x'') f(x')##. Otherwise I don't understand what your operator is doing.
blue_leaf77 said:
If so that's not what I meant, f is an operator not a state.
Sure, but the |x> are.
 
##(1+x)^{-1}## is the inverse of 1+x, no?
 
mfb said:
##\langle x' |\hat{f} | x'' \rangle## looks like ##\delta(x'-x'') f(x')##. Otherwise I don't understand what your operator is doing.
That's my point: basically what I was trying to understand is how to get the latter expression from the former. Are there any intermediate steps in between, or are the two simply defined to be equal, namely ##\langle x'| \hat{f} | x'' \rangle = f(x') \delta(x'-x'')##?
 
Indeed, you have, for a true pure state, i.e., for a normalizable ket with ##\langle \psi|\psi \rangle=1##,
$$\langle f(x) \rangle=\langle \psi|f(\hat{x}) \psi \rangle=\int_{\mathbb{R}} \mathrm{d} x \langle \psi|f(\hat{x}) x \rangle \langle x|\psi \rangle = \int_{\mathbb{R}} \mathrm{d} x \langle \psi| x \rangle \langle x|\psi \rangle f(x)=\int_{\mathbb{R}} \mathrm{d} x |\psi(x)|^2 f(x).$$
You can achieve the same by using the matrix elements of your operator with respect to the position eigenbasis
$$f(x',x)=\langle x' |f(\hat{x}) x \rangle=f(x) \langle x'|x \rangle=f(x) \delta(x-x').$$
Now you have
$$\langle f(x) \rangle=\langle \psi|f(\hat{x}) \psi \rangle=\int_{\mathbb{R}}\mathrm{d} x' \int_{\mathbb{R}}\mathrm{d} x \langle \psi|x' \rangle \langle x'|f(\hat{x}) x \rangle \langle x|\psi \rangle=\int_{\mathbb{R}}\mathrm{d} x' \int_{\mathbb{R}}\mathrm{d} x \psi^*(x') f(x',x) \psi(x)=\int_{\mathbb{R}} \mathrm{d} x |\psi(x)|^2 f(x).$$
All this shows, via the general form of Born's rule, that in the position representation ##|\psi(x)|^2## is the probability density for the position of the particle.
 
In your second equation line you wrote ##f(\hat{x}) |x \rangle = f(x) |x \rangle##, where the ##x## in the argument of ##f## is changed from an operator (denoted by the hat) to a mere number. I know that the ket the operator ##f(\hat{x})## acts on is a position eigenket, but what is the mathematical proof guaranteeing that we can just change the ##x## in ##f## from an operator to a number without knowing the particular functional form of ##f(\hat{x})##? I can accept it if, for example, ##f(\hat{x}) = \hat{x}^2##, since in that case it's clear that ##\hat{x}## acts twice on its own eigenket, giving the number ##x^2##. But what if, as in my original problem, ##f(\hat{x}) = 1/(1+\hat{x})##? What should I do to know how it acts on ##|x \rangle##?
 
blue_leaf77 said:
In your second equation line you wrote ##f(\hat{x}) |x \rangle = f(x) |x \rangle##, where the ##x## in the argument of ##f## is changed from an operator (denoted by the hat) to a mere number. I know that the ket the operator ##f(\hat{x})## acts on is a position eigenket, but what is the mathematical proof guaranteeing that we can just change the ##x## in ##f## from an operator to a number without knowing the particular functional form of ##f(\hat{x})##? I can accept it if, for example, ##f(\hat{x}) = \hat{x}^2##, since in that case it's clear that ##\hat{x}## acts twice on its own eigenket, giving the number ##x^2##. But what if, as in my original problem, ##f(\hat{x}) = 1/(1+\hat{x})##? What should I do to know how it acts on ##|x \rangle##?
This is indeed a problem for general functions. I tried to give you a hint how to proceed. From your argument it is clear that ##1+\hat{x}## is diagonal in the position representation. Show that this also holds true for the inverse operator.
 
  • #10
I'd think that it's the definition of what's meant by a function of an operator. Of course, what I've done in my posting above is not mathematically rigorous. The position eigenvectors are highly non-trivial; in particular, they are of course not Hilbert-space vectors. For a more rigorous treatment of all this see, e.g., the textbook by Galindo and Pascual.
 
  • #11
Thanks for your hint DrDU.
The product of two diagonal matrices is element-wise: each diagonal entry of the result is the product of the corresponding diagonal entries of the factors. The inverse of ##A## is defined by ##A^{-1}A = AA^{-1} = I##, so if ##A## is diagonal, so is its inverse. Up to this point I can see why the operator ##1/(1+\hat{x})## has the same eigenkets as the operator ##\hat{x}##.
I just want to confirm this: is it not always valid to expand an operator ##f(\hat{x})## in a power series, as we do for exponential operators (e.g., the rotation and translation operators)?
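The diagonal-matrix argument can be checked directly in a finite-dimensional toy model (my own sketch; the grid values are arbitrary). Discretizing, the position operator is diagonal in its own eigenbasis, and the inverse of ##1+X## is again diagonal with entries ##1/(1+x_i)##:

```python
import numpy as np

# Toy model: "position operator" on a 5-point grid, diagonal in its eigenbasis.
x_i = np.array([-0.5, 0.0, 0.7, 1.3, 2.0])
X = np.diag(x_i)
A = np.eye(5) + X              # the operator 1 + x

A_inv = np.linalg.inv(A)       # the operator 1/(1 + x)

# The inverse is again diagonal, with entries 1/(1 + x_i):
print(np.allclose(A_inv, np.diag(1.0 / (1.0 + x_i))))  # True

# Hence it shares the eigenvectors of X: a basis vector is just rescaled.
e2 = np.zeros(5)
e2[2] = 1.0
print(np.allclose(A_inv @ e2, e2 / (1.0 + x_i[2])))    # True
```

Note that this only works because none of the grid points equals ##-1##, mirroring the pole of ##1/(1+x)## in the continuum case.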
 
  • #12
vanhees71 said:
The position eigenvectors are highly non-trivial
I presume this can be a general problem for finite and denumerable bases too, for example spin eigenstates. The most common example is the spin rotation operator ##\exp(iS_z \phi/\hbar)##: to know how that operator acts on a spin-up or spin-down state, one expands it in a power series in ##S_z##, and the problem becomes trivial. So again my question: can we always expand any function of an operator into a power series?
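For the finite-dimensional spin case the power-series expansion is indeed unproblematic, and one can verify it numerically (my own sketch with ##\hbar = 1##; since ##S_z## is diagonal, the exact rotation operator is just the diagonal matrix of eigenvalue phases):

```python
import numpy as np

hbar = 1.0
phi = 0.8
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)  # spin-1/2 S_z

# Power-series definition of the rotation operator exp(i Sz phi / hbar).
U_series = np.zeros((2, 2), dtype=complex)
term = np.eye(2, dtype=complex)
for n in range(30):
    U_series += term                           # add (i Sz phi / hbar)^n / n!
    term = term @ (1j * Sz * phi / hbar) / (n + 1)

# Exact result: Sz is diagonal, so exponentiate the eigenvalue phases.
U_exact = np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])
print(np.allclose(U_series, U_exact))  # True

# Acting on spin-up just multiplies it by the eigenvalue phase e^{i phi/2}.
up = np.array([1, 0], dtype=complex)
print(np.allclose(U_series @ up, np.exp(1j * phi / 2) * up))  # True
```

Here the series converges for every value of ##\phi##, which is exactly the point of the next post: the exponential series is absolutely convergent for any matrix.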
 
  • #13
For finite-dimensional vector spaces, this is not such a complicated issue. The matrices form a ring, and you can define norms compatible with this algebraic structure, e.g., the norm induced by the scalar product
$$\|\hat{A}\|=\sup \{\|\hat{A} |u \rangle\| \,:\, \|u\|=1\}$$
or the practically simpler norm
$$\|\hat{A}\|^2=\mathrm{Tr} (\hat{A} \hat{A}^{\dagger})=\sum_{j,k=1}^n |A_{jk}|^2.$$
Then you can define functions of matrices as in ordinary analysis with numbers: via polynomials, which are defined algebraically, via power series, etc.

Particularly the matrix exponential function is well defined for all matrices, since the defining series is absolutely convergent (in the sense of the matrix norm).
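Both the trace-norm formula and the convergence of the exponential series are easy to check numerically (my own sketch; the random matrix and the term count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Frobenius norm from the trace formula ||A||^2 = Tr(A A^dagger).
norm_trace = np.sqrt(np.trace(A @ A.conj().T).real)
print(np.isclose(norm_trace, np.linalg.norm(A)))  # True: equals sqrt(sum |A_jk|^2)

# Matrix exponential via its power series; the n-th term is bounded in norm
# by ||A||^n / n!, so the series converges absolutely for any matrix.
E = np.zeros_like(A)
term = np.eye(4, dtype=complex)
for n in range(60):
    E += term
    term = term @ A / (n + 1)

# Cross-check against the eigendecomposition exp(A) = V exp(D) V^{-1}.
w, V = np.linalg.eig(A)
E_ref = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)
print(np.allclose(E, E_ref))
```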
 
