Showing that the Entries of a Matrix Arise As Inner Products

  • Thread starter: Bashyboy
  • Tags: Matrix

Homework Help Overview

The problem involves a positive semi-definite matrix \( B \) with ones along the diagonal, and the task is to show the existence of a collection of unit vectors such that the entries of the matrix correspond to their inner products. The subject area pertains to linear algebra and inner product spaces.

Discussion Character

  • Exploratory, Assumption checking, Conceptual clarification

Approaches and Questions Raised

  • The original poster attempts to prove the theorem in low dimensions, starting with the 2x2 case and extending to 3x3, while expressing difficulty in finding a natural choice of unit vectors for higher dimensions. They also explore the use of unitary matrices to simplify the problem.
  • Some participants question the formulation of the inner product and the notation used, particularly regarding the transpose of one of the vectors.
  • Others express a desire for additional suggestions or hints to navigate the complexity of the problem.

Contextual Notes

Participants note the complexity increases with dimension, and there is a specific focus on ensuring the unit vectors maintain the required properties while satisfying the inner product conditions. The original poster expresses a need for clever tricks or theorems that might simplify the problem.

Bashyboy

Homework Statement


Let ##B \in M_n (\mathbb{C})## be such that ##B \ge 0## (i.e., it is a positive semi-definite matrix) and ##b_{ii} = 1## (ones along the diagonal). Show that there exists a collection of ##n## unit vectors ##\{e_1,...,e_n \} \subset \mathbb{C}^n## such that ##b_{ij} = \langle e_i, e_j \rangle##.

Homework Equations

The Attempt at a Solution


Note: this took a great deal of time to type, and I hope someone will be so kind as to reply! :)

Now, obviously any collection of ##n## unit vectors will satisfy the condition that ##\langle e_i , e_i \rangle = 1 = b_{ii}## for all ##i \in \{1,...,n\}##. Hence, the choice of unit vectors will only depend upon the off-diagonal terms.

I tried proving the theorem in low dimensions, hoping that I might glimpse some natural choice of unit vectors that will extend to the ##n \times n## case. For instance, in the 2x2 case, we have that

##\begin{bmatrix} 1 & b_{12} \\ \overline{b_{12}} & 1 \\ \end{bmatrix} \ge 0 \implies |b_{12}| \le 1,##

since positive semi-definiteness forces the determinant ##1 - |b_{12}|^2## to be nonnegative.

Letting ##b_{12} = |b_{12}| e^{i \theta_{12}}##, the choice of unit vectors would be ##e_1 = \begin{bmatrix} e^{i \theta_{12}} \\ 0 \end{bmatrix}## and ##e_2 = \begin{bmatrix} |b_{12}| \\ \sqrt{1 - |b_{12}|^2} \end{bmatrix}##, both of which are obviously unit vectors. Computing the inner products, we arrive at ##\langle e_1 , e_2 \rangle = |b_{12}| e^{i \theta_{12}} = b_{12}## and ##\langle e_2, e_1 \rangle = \overline{\langle e_1 , e_2 \rangle} = \overline{b_{12}}##, which finishes the proof in the 2x2 case.
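The 2x2 construction above can be checked numerically. The following is my own sketch (not from the thread), using a sample value for ##b_{12}## and NumPy; the function name `inner` is just a label for the thread's convention ##\langle x, y \rangle = y^* x##.

```python
import numpy as np

# A numeric check (my own sketch, not from the thread) of the 2x2
# construction above, for a sample entry b12 with |b12| <= 1.
b12 = 0.6 * np.exp(0.8j)            # sample b12 = |b12| e^{i theta}
r, theta = abs(b12), np.angle(b12)

e1 = np.array([np.exp(1j * theta), 0])
e2 = np.array([r, np.sqrt(1 - r ** 2)])

def inner(x, y):
    # the thread's convention: <x, y> = y* x
    return y.conj() @ x

assert np.isclose(np.linalg.norm(e1), 1)    # both are unit vectors
assert np.isclose(np.linalg.norm(e2), 1)
assert np.isclose(inner(e1, e2), b12)       # <e1, e2> = b12
assert np.isclose(inner(e2, e1), np.conj(b12))
```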

However, the 3x3 case becomes particularly unilluminating, with no seemingly natural choice that would help in the ##n \times n## case. In the 3x3 case, we need to find ##e_1##, ##e_2##, and ##e_3## such that ##\langle e_1, e_2 \rangle = b_{12}##, ##\langle e_1, e_3 \rangle = b_{13}##, etc. Because unitary matrices preserve the inner product, I know I can choose a unitary ##U## which will rotate all the unit vectors; in particular, I can rotate ##e_1## so that it becomes ##\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}##, which simplifies things by reducing the number of variables. Now, let

##e_2 = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \\ \end{bmatrix}##

##e_3 = \begin{bmatrix} z_4 \\ z_5 \\ z_6 \end{bmatrix}##,

demanding that ##||e_2|| = 1## and ##||e_3|| = 1##. So, we want to find ##z_1,...,z_6## such that

##\langle e_1 , e_2 \rangle = b_{12} \implies \overline{z_1} = b_{12}##, i.e., ##z_1 = \overline{b_{12}}##

##\langle e_1, e_3 \rangle = b_{13} \implies \overline{z_4} = b_{13}##, i.e., ##z_4 = \overline{b_{13}}##

##\langle e_2 , e_3 \rangle = b_{23} \implies \overline{z_4} z_1 + \overline{z_5} z_2 + \overline{z_6} z_3 = b_{23}##, or upon substitution, ##b_{13} \overline{b_{12}} + \overline{z_5} z_2 + \overline{z_6} z_3 = b_{23}##.

As one can easily perceive, this is a hideous mess, and it will only grow more hideous in larger dimensions. My question is: is there some clever trick or elegant theorem I could be employing? If so, would you mind providing some hints? The only clever trick I could come up with was the rotation by ##U##.
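The 3x3 system above can in fact be solved mechanically, one column at a time. The following is my own numeric completion (not from the thread), for a sample positive semi-definite ##B## with unit diagonal; under the convention ##\langle x, y \rangle = y^* x##, the first two constraints give ##z_1 = \overline{b_{12}}## and ##z_4 = \overline{b_{13}}##, and the free components are fixed by choosing ##z_3 = 0## with ##z_2, z_6 \ge 0## real.

```python
import numpy as np

# A numeric sketch (my own completion, not from the thread) of the 3x3
# system above, for a sample positive semi-definite B with unit diagonal.
b12, b13, b23 = 0.5, 0.3j, 0.2
B = np.array([[1, b12, b13],
              [np.conj(b12), 1, b23],
              [np.conj(b13), np.conj(b23), 1]])

def inner(x, y):
    # the thread's convention: <x, y> = y* x
    return y.conj() @ x

e1 = np.array([1, 0, 0], dtype=complex)

# <e1, e2> = b12 forces conj(z1) = b12; choose z3 = 0 and
# z2 real >= 0 from ||e2|| = 1
z1 = np.conj(b12)
z2 = np.sqrt(1 - abs(b12) ** 2)
e2 = np.array([z1, z2, 0])

# <e1, e3> = b13 forces conj(z4) = b13; with z3 = 0, the constraint
# <e2, e3> = b23 reads b13 conj(b12) + conj(z5) z2 = b23, fixing z5;
# z6 >= 0 then comes from ||e3|| = 1 (nonnegative because B >= 0)
z4 = np.conj(b13)
z5 = np.conj((b23 - b13 * np.conj(b12)) / z2)
z6 = np.sqrt(1 - abs(b13) ** 2 - abs(z5) ** 2)
e3 = np.array([z4, z5, z6])

E = np.column_stack([e1, e2, e3])
for i in range(3):
    for j in range(3):
        assert np.isclose(inner(E[:, i], E[:, j]), B[i, j])
```

In effect this builds a triangular, Cholesky-style factorization of ##B## column by column, which hints at how the pattern might extend to the ##n \times n## case; this remark is my own observation, not something stated in the thread.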
 
Consider the inner product ##\mathbf{x}B\mathbf{y}^T##.
 
Where ##\mathbf{x}## and ##\mathbf{y}## are just any vectors in ##\mathbb{C}^n##? Why are you taking the transpose of ##\mathbf{y}##? The inner product I am using is ##\langle x, y \rangle := y^* x##.
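The notational point being raised can be made concrete. The sketch below (my own, not from the thread) contrasts the two expressions for sample complex vectors: the thread's convention ##\langle x, y \rangle = y^* x## conjugates one argument, while the hint's ##\mathbf{x} B \mathbf{y}^T## uses a plain transpose with no conjugation, so the two generally disagree on complex vectors.

```python
import numpy as np

# Sketch (my own, not from the thread): the sesquilinear convention
# <x, y> = y* x versus the bilinear expression x B y^T, for sample data.
x = np.array([1 + 1j, 2 - 1j])
y = np.array([0.5j, 1.0])
B = np.array([[1, 0.5j],
              [-0.5j, 1]])        # sample Hermitian B with unit diagonal

# The thread's convention: <x, y> := y* x (np.vdot conjugates its
# first argument, so np.vdot(y, x) = conj(y) . x)
assert np.isclose(np.vdot(y, x), y.conj() @ x)

# The hint's expression x B y^T involves no conjugation at all:
bilinear = x @ B @ y
assert not np.isclose(bilinear, np.vdot(y, B @ x))  # differ in general
```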
 
If anyone else has any suggestions, I would appreciate them being shared.
 
