Showing that the Entries of a Matrix Arise As Inner Products

  • Thread starter: Bashyboy
  • Tags: Matrix

SUMMARY

The discussion focuses on proving that for a positive semi-definite matrix ##B \in M_n(\mathbb{C})## with diagonal entries equal to one, there exists a set of ##n## unit vectors ##\{e_1, \ldots, e_n\} \subset \mathbb{C}^n## such that the entries of ##B## can be expressed as inner products ##b_{ij} = \langle e_i, e_j \rangle##. The 2x2 case is proved directly, but the 3x3 case leads to an unwieldy system of equations, and the poster asks for an elegant construction or theorem that produces the unit vectors in higher dimensions.

PREREQUISITES
  • Understanding of positive semi-definite matrices in linear algebra.
  • Familiarity with inner product spaces and unit vectors.
  • Knowledge of unitary transformations and their properties.
  • Basic proficiency in complex numbers and their geometric interpretations.
NEXT STEPS
  • Research the properties of positive semi-definite matrices and their implications in linear algebra.
  • Study the Gram-Schmidt process for constructing orthonormal bases in inner product spaces.
  • Explore the application of unitary matrices in simplifying complex vector transformations.
  • Investigate advanced theorems related to inner products and matrix representations in higher dimensions.
USEFUL FOR

Mathematicians, students studying linear algebra, and researchers in fields requiring matrix theory and inner product spaces will benefit from this discussion.

Bashyboy

Homework Statement


Let ##B \in M_n (\mathbb{C})## be such that ##B \ge 0## (i.e., it is a positive semi-definite matrix) and ##b_{ii} = 1## (ones along the diagonal). Show that there exists a collection of ##n## unit vectors ##\{e_1,...,e_n \} \subset \mathbb{C}^n## such that ##b_{ij} = \langle e_i, e_j \rangle##.

Homework Equations

The Attempt at a Solution


Note, this took a great deal of time to type, and I hope one would be so kind as to reply! :)

Now, obviously any collection of ##n## unit vectors will satisfy the condition that ##\langle e_i , e_i \rangle = 1 = b_{ii}##, for all ##i \in \{1,...,n\}##. Hence, the choice of unit vectors will only depend upon the off-diagonal terms.

I tried proving the theorem in low dimensions, hoping that I might glimpse some natural choice of unit vectors that would extend to the ##n \times n## case. For instance, in the 2x2 case, we have that

##\begin{bmatrix} 1 & b_{12} \\ \overline{b_{12}} & 1 \\ \end{bmatrix} \ge 0 \implies |b_{12}| \le 1##

(positive semi-definiteness forces the determinant ##1 - |b_{12}|^2## to be nonnegative).

Letting ##b_{12} = |b_{12}| e^{i \theta_{12}}##, the choice of unit vectors would be ##e_1 = \begin{bmatrix} e^{i \theta_{12}} \\ 0 \end{bmatrix}## and ##e_2 = \begin{bmatrix} |b_{12}| \\ \sqrt{1 - |b_{12}|^2} \end{bmatrix}##, both of which are obviously unit vectors. Computing the inner products, we arrive at ##\langle e_1 , e_2 \rangle = |b_{12}| e^{i \theta_{12}} = b_{12}## and ##\langle e_2, e_1 \rangle = \overline{\langle e_1 , e_2 \rangle} = \overline{b_{12}}##, which finishes the proof in the 2x2 case.
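As a quick numerical sanity check of this 2x2 construction (a minimal sketch assuming NumPy; the particular value of ##b_{12}## below is an arbitrary illustrative choice with ##|b_{12}| \le 1##):

```python
import numpy as np

# Hypothetical off-diagonal entry; any value with |b12| <= 1 works.
b12 = 0.6 * np.exp(0.8j)
r, theta = abs(b12), np.angle(b12)

# The unit vectors proposed above.
e1 = np.array([np.exp(1j * theta), 0.0])
e2 = np.array([r, np.sqrt(1 - r**2)])

inner = lambda x, y: np.vdot(y, x)  # <x, y> := y* x, as in the thread

assert np.isclose(np.linalg.norm(e1), 1.0) and np.isclose(np.linalg.norm(e2), 1.0)
assert np.isclose(inner(e1, e2), b12)           # <e1, e2> = b12
assert np.isclose(inner(e2, e1), np.conj(b12))  # <e2, e1> = conj(b12)
```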

However, the 3x3 case becomes particularly unilluminating, with no seemingly natural choice that would help in the nxn case. In the 3x3 case, we need to find ##e_1##, ##e_2##, and ##e_3## such that ##\langle e_1, e_2 \rangle = b_{12}##, ##\langle e_1, e_3 \rangle = b_{13}##, etc. Because unitary matrices preserve the inner product, I know I can choose a unitary ##U## which rotates all the unit vectors; in particular, I can rotate ##e_1## so that it becomes ##\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}##, which simplifies things by reducing the number of variables. Now, let

##e_2 = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \\ \end{bmatrix}##

##e_3 = \begin{bmatrix} z_4 \\ z_5 \\ z_6 \end{bmatrix}##,

demanding that ##||e_2|| = 1## and ##||e_3|| = 1##. So, we want to find ##z_1,...,z_6## such that

##\langle e_1 , e_2 \rangle = b_{12} \implies z_1 = b_{12}##

##\langle e_1, e_3 \rangle = b_{13} \implies z_4 = b_{13}##

##\langle e_2 , e_3 \rangle = b_{23} \implies \overline{z_4} z_1 + \overline{z_5} z_2 + \overline{z_6} z_3 = b_{23}##, or, upon substitution, ##\overline{b_{13}} b_{12} + \overline{z_5} z_2 + \overline{z_6} z_3 = b_{23}##.

As one can easily perceive, this is a hideous mess, and it will only grow more hideous in larger dimensions. My question is: is there some clever trick or elegant theorem I could be employing? If so, would you mind providing some hints? The only clever trick I could come up with was the rotation with ##U##.
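As an aside, this triangular system (##z_1 = b_{12}##, ##z_4 = b_{13}##, then the remaining entries determined row by row) is, up to complex conjugation of the coordinates, exactly what a Cholesky factorization ##B = LL^*## solves: with the convention ##\langle x, y \rangle = y^* x##, the rows of ##L## satisfy ##\langle e_i, e_j \rangle = (LL^*)_{ij} = b_{ij}##, and the first row is ##(1, 0, 0)##, matching the rotated ##e_1##. A minimal numerical sketch of this connection, assuming NumPy and a strictly positive definite test matrix (the matrix below is a hypothetical example; `np.linalg.cholesky` fails on singular ##B##):

```python
import numpy as np

# Hypothetical Hermitian test matrix with unit diagonal; it is strictly
# diagonally dominant, hence positive definite, so cholesky succeeds.
B = np.array([[1.0,        0.5 + 0.2j, 0.1 - 0.3j],
              [0.5 - 0.2j, 1.0,        0.4 + 0.1j],
              [0.1 + 0.3j, 0.4 - 0.1j, 1.0       ]])

L = np.linalg.cholesky(B)           # B = L @ L.conj().T, L lower triangular
e = [L[i, :] for i in range(3)]     # rows of L as the candidate unit vectors

inner = lambda x, y: np.vdot(y, x)  # <x, y> := y* x, as in the thread

for i in range(3):
    assert np.isclose(np.linalg.norm(e[i]), 1.0)       # b_ii = 1 gives unit vectors
    for j in range(3):
        assert np.isclose(inner(e[i], e[j]), B[i, j])  # <e_i, e_j> = b_ij
```

Note that Cholesky requires ##B > 0## strictly, so this does not by itself settle the semi-definite case in the problem statement.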
 
Consider the inner product ##\mathbf{x}B\mathbf{y}^T##.
 
Where ##x## and ##y## are just any vectors in ##\mathbb{C}^n##? Why are you taking the transpose of ##y##? The inner product I am using is ##\langle x,y \rangle := y^* x##.
 
If anyone else has any suggestions, I would appreciate them being shared.
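For the general ##n \times n## case, including singular ##B##, one standard construction (not spelled out in this thread) uses the Hermitian square root: since ##B \ge 0##, eigendecompose ##B = V \Lambda V^*## and set ##A = V \Lambda^{1/2} V^*##, so that ##A = A^*## and ##AA^* = B##; the rows ##e_i## of ##A## then satisfy ##\langle e_i, e_j \rangle = (AA^*)_{ij} = b_{ij}## and ##\|e_i\|^2 = b_{ii} = 1##. A minimal sketch assuming NumPy (the function name `gram_vectors` and the rank-deficient test matrix are illustrative choices):

```python
import numpy as np

def gram_vectors(B):
    """Rows of the Hermitian square root of B: works for any B >= 0,
    including singular B, where a Cholesky factorization would fail."""
    w, V = np.linalg.eigh(B)           # B = V diag(w) V*, w real (B Hermitian)
    w = np.clip(w, 0.0, None)          # clip tiny negative rounding errors
    A = (V * np.sqrt(w)) @ V.conj().T  # Hermitian square root: A @ A = B
    return [A[i, :] for i in range(B.shape[0])]

# Hypothetical rank-one example: the all-ones matrix is PSD with unit diagonal.
B = np.ones((3, 3), dtype=complex)
e = gram_vectors(B)
for i in range(3):
    for j in range(3):
        # <x, y> := y* x; np.vdot conjugates its first argument
        assert np.isclose(np.vdot(e[j], e[i]), B[i, j])
```

On the all-ones example the construction returns three copies of ##\tfrac{1}{\sqrt{3}}(1,1,1)^T##, which is indeed a valid (degenerate) solution.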
 
