How can I find a solution valid for all cases?
weetabixharry

I have an equation:

tr\left\{\textbf{AB}\right\} = \sigma

where tr\left\{\right\} denotes the matrix trace. The square matrix \textbf{A} is independent of both the square matrix \textbf{B} and the real scalar \sigma.

I want to determine all possible values of \textbf{B} that will allow the above equation to hold for all \textbf{A}, given the only constraint:

tr\left\{\textbf{A}\right\} = 1

For example, I can see that \textbf{B}=\sigma \textbf{I} will always be valid (where \textbf{I} is the identity matrix). But can I guarantee that there are no other possible values for \textbf{B}?
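As a sanity check, the candidate $\textbf{B}=\sigma\textbf{I}$ can be verified numerically. A minimal sketch in Python with NumPy (not part of the original thread, which is pure math): generate random matrices, shift their diagonals so that $tr\{\textbf{A}\}=1$, and confirm $tr\{\textbf{AB}\}=\sigma$ every time.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 4, 2.5

B = sigma * np.eye(N)  # candidate solution B = sigma * I

for _ in range(100):
    A = rng.standard_normal((N, N))
    # Shift the diagonal so that tr(A) = 1, the only constraint on A.
    A += (1.0 - np.trace(A)) / N * np.eye(N)
    assert np.isclose(np.trace(A), 1.0)
    # tr(A B) = sigma * tr(A) = sigma for this choice of B.
    assert np.isclose(np.trace(A @ B), sigma)
```

Of course, passing random tests only shows that this $\textbf{B}$ works, not that it is the only one.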

I have been pondering this problem for some time and cannot see a way of approaching it. Any advice would be greatly appreciated!
 


Let A_{i,j} denote the matrix whose (i,j) entry is 1 and whose other entries are all zero. If the equation were true for all matrices, it would have to hold for each A_{i,j}. Is that going to be possible?
 


Stephen Tashi said:
Let A_{i,j} denote the matrix whose (i,j) entry is 1 and whose other entries are all zero. If the equation were true for all matrices, it would have to hold for each A_{i,j}. Is that going to be possible?

I think we can only consider the A_{i,i} matrices (i.e. those with a single diagonal entry equal to 1), since the others do not satisfy the trace constraint on A. However, I'll look into an approach like this. I have found a possible solution in a similar way, by considering the (i,j)th element. I'll check it over and post it here if I think it has any chance of being correct...
 


I hope someone will have the time to read through the following attempt and point out any errors in my reasoning:

The \left(i,i\right)^{th} element of \boldsymbol{AB} can be written as:

\left[ \boldsymbol{AB}\right] _{i,i}=\sum_{j=1}^{N}a_{ij}b_{ji}

Therefore the trace is:

{tr}\left\{ \boldsymbol{AB}\right\} =\sum_{i=1}^{N}\sum_{j=1}^{N}a_{ij}b_{ji}=\sigma
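The double-sum expression for the trace is easy to confirm numerically. A quick check in Python with NumPy (an illustration I've added, not part of the original post):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, N))

# The double sum  sum_i sum_j a_ij * b_ji  is exactly tr(A B).
double_sum = sum(A[i, j] * B[j, i] for i in range(N) for j in range(N))
assert np.isclose(double_sum, np.trace(A @ B))
```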

Our only constraint is:

{tr}\left\{ \boldsymbol{A}\right\} =\sum_{i=1}^{N}a_{ii}=1

Therefore, we can write:

\begin{eqnarray*}
{tr}\left\{ \boldsymbol{AB}\right\} &=&\sigma \,{tr}\left\{ \boldsymbol{A}\right\} \\
\sum_{i=1}^{N}\sum_{j=1}^{N}a_{ij}b_{ji} &=&\sigma \sum_{i=1}^{N}a_{ii}
\end{eqnarray*}

Clearly, the right-hand side depends only on the diagonal entries of \boldsymbol{A}, while the off-diagonal entries a_{ij} (i\neq j) can be varied freely without affecting the constraint {tr}\left\{ \boldsymbol{A}\right\} =1. Since the equation must hold for every such \boldsymbol{A}, the coefficient of each a_{ij} (i\neq j) on the left-hand side must vanish, i.e. b_{ji}=0 for all j\neq i.

Therefore \boldsymbol{B} must be diagonal and:

\sum_{i=1}^{N}a_{ii}b_{ii}=\sigma \sum_{i=1}^{N}a_{ii}

The only way (I think) \boldsymbol{B} can satisfy this for every admissible choice of the a_{ii} (e.g. taking a_{kk}=1 and all other entries zero, for each k in turn, which forces b_{kk}=\sigma) is if all the b_{ii} equal the same constant, so that they can be removed from the summation. Clearly, that constant is \sigma:

\boldsymbol{B}=\sigma \boldsymbol{I}
 


I didn't quite follow your argument about diagonal terms, but I think you have at least started on the right track. I also think you're making it a bit more complicated than it needs to be. I would begin like this: Let A be an arbitrary matrix such that Tr A=1. Let B be an arbitrary matrix such that Tr(AB)=σ. Then Tr(AB)=σTr(A).

Now start making specific choices of A, to see how they constrain B. For example, what does the equality Tr(AB)=σTr(A) say when A_{11}=1 and all other components of A are 0?
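This probing strategy can be sketched in code. A minimal Python/NumPy illustration (the helper `E` and the specific test matrices are my own construction, not from the thread): diagonal probes A = E(k,k) pin down the diagonal of B, and probes of the form A = E(0,0) + t·E(i,j) with i ≠ j, which still have trace 1, force the off-diagonal entries of B to zero.

```python
import numpy as np

N, sigma = 4, 2.5

def E(i, j, n=N):
    """Matrix with a single 1 in entry (i, j), zeros elsewhere."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

# If tr(A B) = sigma for every A with tr(A) = 1, then:
#   A = E(k, k) has trace 1 and tr(A B) = B[k, k], so B[k, k] = sigma;
#   A = E(0, 0) + t * E(i, j), i != j, has trace 1 and
#   tr(A B) = B[0, 0] + t * B[j, i], so B[j, i] = 0 for j != i.
# These constraints leave only B = sigma * I, which we check:
B = sigma * np.eye(N)

for k in range(N):
    assert np.isclose(np.trace(E(k, k) @ B), sigma)

for t in (0.0, 1.0, -3.7):
    A = E(0, 0) + t * E(1, 2)
    assert np.isclose(np.trace(A), 1.0)
    assert np.isclose(np.trace(A @ B), sigma)
```

The point of the one-parameter family in t is that the equality must hold identically in t, which is what kills each off-diagonal entry of B one at a time.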
 


Thanks for the advice. I think this idea of choosing various specific examples for \textbf{A} is a very good one and could simplify matters significantly. I'll see what I can come up with...
 
