What Conditions Allow the Derivative Trick for Evaluating Fermionic Commutators?

Summary
The discussion centers on the application of a theorem regarding the evaluation of fermionic commutators, specifically using creation and annihilation operators. The theorem requires that certain conditions are met, but the user finds discrepancies when applying it to their Hamiltonian, leading to confusion about why the derivative trick seems to yield correct results despite not satisfying the theorem's conditions. Key insights reveal that the commutator's bilinearity and the Leibniz product rule are essential for understanding why the derivative trick works for a broader class of functions. The conversation also emphasizes the need for explicit calculations when dealing with fermionic operators due to their unique properties, such as anticommutation. Ultimately, the discussion seeks to clarify the general conditions under which this derivative trick can be effectively applied.
thetafilippo
I found a theorem that states that if ##A## and ##B## are two endomorphisms satisfying $$[A,[A,B]]=[B,[A,B]]=0,$$ then $$[A,F(B)]=[A,B]F'(B)=[A,B]\frac{\partial F(B)}{\partial B}.$$
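For reference, a minimal sketch of why this holds for polynomial ##F##: it uses only bilinearity, the Leibniz rule ##[A,BC]=[A,B]C+B[A,C]##, and the hypothesis ##[B,[A,B]]=0## (i.e., that ##[A,B]## commutes with ##B##),
$$[A,B^n]=\sum_{j=0}^{n-1}B^{j}[A,B]B^{n-1-j}=n[A,B]B^{n-1}=[A,B]\frac{\partial B^n}{\partial B},$$
and the statement for general ##F## follows by expanding ##F## in a power series and using linearity of the commutator.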

Now I'm trying to apply this result to the fermionic creation and annihilation operators $$B=C_k^+,\qquad A=C_k,$$ and the simple diagonal Hamiltonian $$F(\not C_k,C_k^+)=H=\sum_k \hbar \omega_k C_k^+C_k,$$ where the slash indicates that I regard ##H## as a function of ##C_k^+## only.

Now I check whether my operators satisfy the hypotheses of the theorem, and I get
$$[A,[A,B]]=[C_k,[C_k,C_k^+]]=-2C_k$$
$$[B,[A,B]]=[C_k^+,[C_k,C_k^+]]=+2C_k^+$$
Evidently
$$0\neq[A,[A,B]]\neq[B,[A,B]]\neq0,$$
so the hypotheses of the theorem are not satisfied.

However, thinking of the Hamiltonian ##H## as a function of the creation operator only and applying the theorem directly gives

$$[C_k,H]=[C_k,C_k^+]\frac{\partial F(\not C_k,C_k^+)}{\partial C_k^+}=\hbar\omega_kC_k$$,

which is the correct result for the commutator, as obtained by evaluating it directly without the theorem.
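As a quick sanity check of both statements (not part of the original argument), here is a small numerical verification for a single fermionic mode in its standard ##2\times 2## matrix representation, with ##\hbar\omega_k## set to 1; the helper functions are just for illustration:

```python
import numpy as np

# Single fermionic mode in the standard 2x2 representation, basis {|0>, |1>}
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # annihilation operator C_k
Cd = C.T.copy()                 # creation operator C_k^+

def comm(X, Y):                 # commutator [X, Y]
    return X @ Y - Y @ X

def acomm(X, Y):                # anticommutator {X, Y}
    return X @ Y + Y @ X

hbar_omega = 1.0                # set hbar*omega_k = 1 for this check
H = hbar_omega * Cd @ C         # single-mode diagonal Hamiltonian

# Canonical anticommutation relation {C, C^+} = 1
assert np.allclose(acomm(C, Cd), np.eye(2))

# The theorem's hypotheses fail: neither double commutator vanishes
assert np.allclose(comm(C, comm(C, Cd)), -2 * C)    # [C,[C,C^+]] = -2 C
assert np.allclose(comm(Cd, comm(C, Cd)), 2 * Cd)   # [C^+,[C,C^+]] = +2 C^+

# ...and yet the "derivative trick" answer is the correct commutator:
assert np.allclose(comm(C, H), hbar_omega * C)      # [C_k, H] = hbar*omega_k*C_k
print("all checks passed")
```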

So how can I interpret this fact? Why does this work? What am I missing between the theorem and this application?
In the calculation I treat ##H## as a function of the creation operator only; is that correct in this case?

I'd like to know the most general conditions that allow one to use this simple trick to evaluate commutators, or at least to find a theorem that governs this sort of thing. Could anyone help me understand?
 
So, fermion operators anti-commute. The theorem is stated in terms of commutators.

##\{C,C^\dagger\} = CC^\dagger + C^\dagger C = 1##

Therefore

##[C,C^\dagger] = CC^\dagger - C^\dagger C = 2CC^\dagger##

Apparently

##[C,[C,C^\dagger]] = -2C \ne 0##
 
I showed that in my post, I know. The questions are:
- Why does the derivative trick work in this case and give the correct result for the commutator ##[C_k,H]##?
- What are the most general conditions that allow one to use this simple trick to evaluate commutators? Or at least, is there a theorem that governs this sort of thing?
 
Really?

##[C_k,C_k^{\dagger}]\frac{\partial F(C_k,C_k^{\dagger})}{\partial C_k^{\dagger}} = 2\hbar\omega_k C_k C_k^{\dagger}C_k##

That doesn't seem to work for me.

Ah, I see one step missing. I'm still off by a factor of 2.

Try proving the theorem for
## [A,\{A,B\}] = [B,\{A,B\}] = 0##
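One identity that is useful for that exercise (a standard rearrangement, stated here for reference) is
$$[A,BC]=ABC-BCA=\{A,B\}C-B\{A,C\},$$
which plays the role of the Leibniz rule when products of fermionic operators are involved.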
 
thetafilippo said:
What are the most general conditions that allow one to use this simple trick to evaluate commutators? Or at least, is there a theorem that governs this sort of thing?
Start with commutators only, i.e., bosonic creation/annihilation operators. It turns out that the derivative "trick" works for a very large class of functions, i.e., $$[a, f(a^*)] ~\propto~ f'(a^*) ~.$$ The underlying "reason" why it works (afaict) is that the commutator is (bi)linear and satisfies the Leibniz product rule (which I mentioned in another thread recently). These are two key properties of ordinary derivatives. Moreover, the "operator" ##[a,\,\cdot\,]## acting on ##a^*## gives an ordinary number, i.e., it reduces the "power" of ##a^*##, which is also what a derivative does.

If you search through old threads about this subject you'll find some by me where I explain how to extend the result from simple polynomial functions to general analytic functions.
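In outline (and ignoring convergence questions), that extension is short once the monomial case ##[a,(a^*)^n]=n(a^*)^{n-1}## is known: if ##f(a^*)=\sum_n c_n (a^*)^n##, then by (bi)linearity
$$[a,f(a^*)]=\sum_n c_n\,[a,(a^*)^n]=\sum_n c_n\,n\,(a^*)^{n-1}=f'(a^*)~.$$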

Edit: here's the thread I was thinking of.

- Why does the derivative trick work in this case and give the correct result for the commutator ##[C_k,H]##?
For fermionic operators (which satisfy anticommutation relations) you've just got to work it out explicitly -- which should be relatively easy since the complexity of the function ##f## is now severely restricted by properties like ##a^2 = 0 = (a^*)^2##.
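For the single-mode Hamiltonian in the original question the explicit working is indeed short: using ##\{C_k,C_k^+\}=1## and ##C_k^2=0##,
$$[C_k,\hbar\omega_k C_k^+C_k]=\hbar\omega_k\left(C_kC_k^+C_k-C_k^+C_kC_k\right)=\hbar\omega_k\left(1-C_k^+C_k\right)C_k=\hbar\omega_k C_k,$$
which is why the result quoted in the original post comes out correct even though the theorem's hypotheses fail.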
 
