Discrete Hellmann-Feynman theorem?

  • Thread starter: juliette sekx
  • Tags: Discrete Theorem
juliette sekx
Discrete Hellmann-Feynman theorem??

The Hellmann-Feynman theorem states that:
derivative of eigenvalue with respect to a parameter = eigenfunction dagger * derivative of operator with respect to the parameter * eigenfunction

(assuming that the eigenfunctions are normalized; otherwise the theorem would also include a normalization integral).

Does anyone know if this works for the DISCRETE CASE ??

i.e.:

derivative of eigenvalue with respect to parameter = eigenVECTOR dagger * derivative of MATRIX with respect to parameter * eigenVECTOR ??

I can't find a proof of it anywhere!
 


In order for the theorem to be true, the matrix has to be Hermitian.

The proof would go something like this. Let ##A(s)## be a one-parameter smooth family of Hermitian matrices and ##\lambda(s)## a smooth parametrization of the eigenvalue of ##A(s)## corresponding to the eigenvector ##\mathbf{v}(s)##. (Assume that ##\mathbf{v}## is normalized so that ##\mathbf{v}^{\dagger} \mathbf{v} = 1##.) Then ##A \mathbf{v} = \lambda \mathbf{v}## for all ##s##. Differentiating this relation, we have
$$ A' \mathbf{v} + A \mathbf{v}' = \lambda' \mathbf{v} + \lambda \mathbf{v}. $$
Note that ##\mathbf{v}^{\dagger} A \mathbf{v} = (A^{\dagger} \mathbf{v})^{\dagger} \mathbf{v} = (A \mathbf{v})^{\dagger} \mathbf{v}##, since ##A## is Hermitian; but ##A \mathbf{v} = \lambda \mathbf{v}##, so ##\mathbf{v}^{\dagger} A \mathbf{v} = \lambda^{*} \mathbf{v}^{\dagger} \mathbf{v} = \lambda^{*}## (where stars indicate complex conjugates). However, recall that Hermitian matrices have real eigenvalues; thus, ##\lambda^{*} = \lambda##. Thus, multiplying the above equation by ##\mathbf{v}^{\dagger}## on the left, we have
$$ \mathbf{v}^{\dagger} A' \mathbf{v} + \lambda = \lambda' + \lambda, $$
and, canceling ##\lambda## from both sides, we obtain the desired result.
 


I was just rereading my work, and I discovered that there are a few typos in the previous post. The proof should have read:

$$ (1) \quad \text{WLOG, } \mathbf{v}^{\dagger} \mathbf{v} = 1; $$

$$ (2) \quad \begin{aligned} \mathbf{v}^{\dagger} A \mathbf{v}' &= (A \mathbf{v})^{\dagger} \mathbf{v}'\\ &= \lambda^{*} \mathbf{v}^{\dagger} \mathbf{v}'\\ &= \lambda \mathbf{v}^{\dagger} \mathbf{v}' \quad \text{(since } \lambda \text{ is real);} \end{aligned} $$

$$ (3) \quad \begin{aligned} A \mathbf{v} &= \lambda \mathbf{v}\\ \Rightarrow A' \mathbf{v} + A \mathbf{v}' &= \lambda' \mathbf{v} + \lambda \mathbf{v}'\\ \Rightarrow \mathbf{v}^{\dagger} A' \mathbf{v} + \mathbf{v}^{\dagger} A \mathbf{v}' &= \lambda' \mathbf{v}^{\dagger} \mathbf{v} + \lambda \mathbf{v}^{\dagger} \mathbf{v}'\\ \Rightarrow \mathbf{v}^{\dagger} A' \mathbf{v} &= \lambda' \quad \text{(by (1) and (2)).} \end{aligned} $$
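For anyone who wants a sanity check, the identity ##\mathbf{v}^{\dagger} A' \mathbf{v} = \lambda'## is easy to verify numerically. Here is a small Python sketch comparing it against a finite-difference derivative of the eigenvalue; the particular 2×2 Hermitian family ##A(s)## is made up purely for illustration:

```python
import numpy as np

def A(s):
    # A made-up one-parameter family of Hermitian matrices A(s)
    return np.array([[2.0 + s,        1.0 + 2.0j * s],
                     [1.0 - 2.0j * s, -1.0 + s**2   ]])

def dA(s):
    # Exact derivative A'(s) of the family above
    return np.array([[1.0,   2.0j   ],
                     [-2.0j, 2.0 * s]])

s0, h = 0.3, 1e-6

# Eigen-decomposition at s0; eigh returns orthonormal eigenvectors
lam, vecs = np.linalg.eigh(A(s0))
v = vecs[:, 0]                       # normalized eigenvector for lam[0]

# Hellmann-Feynman prediction: dlambda/ds = v† A'(s) v
hf = np.real(v.conj() @ dA(s0) @ v)

# Central finite difference of the same (sorted, non-degenerate) eigenvalue
lam_p = np.linalg.eigh(A(s0 + h))[0][0]
lam_m = np.linalg.eigh(A(s0 - h))[0][0]
fd = (lam_p - lam_m) / (2 * h)

print(hf, fd)   # the two numbers agree to within finite-difference error
```

(Tracking the eigenvalue by sort order is only safe away from level crossings, which holds for this example at ##s_0 = 0.3##.)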
 


Very nice!

So this works not only for Hermitian matrices, but for real eigenvalues of any matrix!
 


So this works not only for Hermitian matrices, but for real eigenvalues of any matrix!

Not quite. In step (2), the first line requires that ##A## be Hermitian (otherwise, it would read ##\mathbf{v}^{\dagger} A \mathbf{v}' = (A^{\dagger} \mathbf{v})^{\dagger} \mathbf{v}'##, which does not give the required cancellation).
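A quick numerical illustration of why Hermiticity matters: the upper-triangular family below (made up for illustration) has constant real eigenvalues, so ##\lambda' = 0## for both of them, yet ##\mathbf{v}^{\dagger} A' \mathbf{v} \neq 0## for a normalized right eigenvector:

```python
import numpy as np

def A(s):
    # Non-Hermitian family with constant real eigenvalues 1 and 2,
    # so dlambda/ds = 0 for both eigenvalues.
    return np.array([[1.0, s  ],
                     [0.0, 2.0]])

dA = np.array([[0.0, 1.0],
               [0.0, 0.0]])      # A'(s), independent of s

s0 = 1.0
lam, vecs = np.linalg.eig(A(s0))
i = int(np.argmax(lam))          # pick the eigenvalue 2
v = vecs[:, i] / np.linalg.norm(vecs[:, i])   # normalized right eigenvector

hf = np.real(v.conj() @ dA @ v)
print(hf)   # ~0.5, but the true derivative dlambda/ds is 0
```

So real eigenvalues alone are not enough; the cancellation in step (2) genuinely uses ##A^{\dagger} = A##.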
 


Right .. I totally missed that! Thanks! =)
 