Are There Nontrivial Solutions to the Matrix Equation X C X^T = C?

In summary: since ##C = cc^T## has rank one, the equation ##XCX^T = C## has many nontrivial solutions. Writing ##XCX^T = (Xc)(Xc)^T## shows that exactly those ##X## with ##Xc = \pm c## work, and the discussion below gives an SVD-based way of constructing such matrices.
  • #1
vibe3
Hello, given a vector [itex]c \in R^n[/itex], I want to find solutions [itex]X \in R^{n\times n}[/itex] to the following equation:
[tex]
X C X^T = C
[/tex]
where [itex]C = c c^T[/itex]. Certainly [itex]X = I[/itex] is a solution, but I'm looking for any nontrivial solutions. We can also assume [itex]X[/itex] is invertible if that helps.

This equation seems related to the Sylvester equation. If nontrivial solutions exist, I would appreciate it if someone could point me toward any work that has been done on this type of equation.
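
A quick numerical sanity check that nontrivial solutions exist (a minimal sketch of my own, assuming NumPy): any ##X## with ##Xc = c## works, for example ##X = I + uv^T## with ##v## orthogonal to ##c##.
[code]
import numpy as np

rng = np.random.default_rng(0)
n = 5
c = rng.standard_normal(n)
C = np.outer(c, c)                  # C = c c^T, a rank-one matrix

# Build a nontrivial X with X c = c:  X = I + u v^T where v^T c = 0.
u = rng.standard_normal(n)
v = rng.standard_normal(n)
v -= (v @ c) / (c @ c) * c          # remove the component of v along c
X = np.eye(n) + np.outer(u, v)      # X c = c, hence X C X^T = (Xc)(Xc)^T = C
# (X is invertible whenever 1 + v^T u != 0, which holds generically.)

print(np.allclose(X @ C @ X.T, C))  # True
print(np.allclose(X, np.eye(n)))    # False: X is not the identity
[/code]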
 
  • #2
vibe3 said:
Hello, given a vector [itex]c \in R^n[/itex], I want to find solutions [itex]X \in R^{n\times n}[/itex] to the following equation:
[tex]
X C X^T = C
[/tex]
where [itex]C = c c^T[/itex]. Certainly [itex]X = I[/itex] is a solution, but I'm looking for any nontrivial solutions. We can also assume [itex]X[/itex] is invertible if that helps.

This equation seems related to the Sylvester equation. If nontrivial solutions exist, I would appreciate it if someone could point me toward any work that has been done on this type of equation.
Assume you have chosen a basis such that ##c=(1,0,\ldots,0)^T##. How many solutions do you get?
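
Spelling this hint out (a quick sketch of my own): with ##c = e_1## we have ##C = e_1e_1^T##, and the equation constrains only the first column of ##X##:
[tex]
X C X^T = (X e_1)(X e_1)^T = e_1 e_1^T \iff X e_1 = \pm e_1,
[/tex]
so any matrix whose first column is ##\pm e_1##, with the remaining columns arbitrary (subject to invertibility if that is required), is a solution.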
 
  • #3
Isn't C then a scalar? But then ## CXX^{T} ## is a matrix and C is a scalar. Am I missing something?
 
  • #4
WWGD said:
Isn't C then a scalar? But then ## CXX^{T} ## is a matrix and C is a scalar. Am I missing something?
Depends on how you write vectors, as a column or as a row: one gives a scalar and the other a rank one matrix. Either case has many solutions.
 
  • #5
fresh_42 said:
Depends on how you write vectors, as a column or as a row: one gives a scalar and the other a rank one matrix. Either case has many solutions.
But how can an ## n \times n ## matrix equal a ## 1 \times 1 ## unless n=1? EDIT: Do you mean we extend the scalar C into a matrix (C,0,0,..,0)?
 
  • #6
WWGD said:
But how can an ## n \times n ## matrix equal a ## 1 \times 1 ## unless n=1?
You mean if ##C## is a scalar? Then ##XCX^T=C=CI## by the usual identification of the scalar ##1## with the unit ##I## in an algebra with unit. Otherwise ##\begin{bmatrix}c_1\\ \vdots \\c_n\end{bmatrix} \cdot \begin{bmatrix}c_1 & \ldots & c_n \end{bmatrix}## is an ##n \times n## matrix of rank ##1##.
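
A short numerical illustration of the two readings (my own sketch, assuming NumPy): the inner product ##c^Tc## is a scalar, while the outer product ##cc^T## is an ##n\times n## matrix of rank one.
[code]
import numpy as np

c = np.array([1.0, 2.0, 3.0])
print(c @ c)                     # inner product c^T c: the scalar 14.0
C = np.outer(c, c)               # outer product c c^T: a 3x3 matrix
print(np.linalg.matrix_rank(C))  # 1
[/code]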
 
  • #7
I mean, which case do you assume then, that C is ## n \times n ## or a "scalar matrix"?
 
  • #8
WWGD said:
I mean, which case do you assume then, that C is ## n \times n ## or a "scalar matrix"?

C is an n-by-n matrix of rank 1
 
  • #9
vibe3 said:
C is an n-by-n matrix of rank 1
Thanks, this kind of reminds me of least-squares problems, which are solved by orthogonal projections.
 
  • #10
It's not clear to me what kind of "solution" the OP is looking for, since this problem is very much underdetermined. There are some interesting underlying relations here between the singular vectors of ##\mathbf X## and the structure of ##\mathbf C##, but that's about all I can add.

note: if ##\mathbf c = \mathbf 0##, any ##\mathbf X## is allowed.
- - - -
Now assume ##\mathbf c \neq \mathbf 0##. Thus ##\mathbf C = \mathbf{cc}^T## is a rank one matrix.

We can do singular value decomposition on ##\mathbf X## such that ##\mathbf X = \mathbf{U \Sigma V}^T##

For ##\mathbf{X C X}^T = \mathbf{X c} \mathbf c^T \mathbf X^T= \mathbf{cc}^T##, we know that ##\mathbf c## is given by one (or more) of the right singular vectors of ##\mathbf X## with a singular value of 1 ##^{**}##. Why? Using the trace operation we see

##trace\big(\mathbf{cc}^T\big) = \mathbf c ^T \mathbf c = trace\big(\mathbf{X c} \mathbf c^T \mathbf X^T\big) ##

##\mathbf c ^T \mathbf c = trace\big(\mathbf{X c} \mathbf c^T \mathbf X^T\big) = trace\big(\mathbf c^T \mathbf X^T \mathbf{Xc}\big) = \mathbf c^T \mathbf X^T \mathbf{Xc} = \mathbf c^T \mathbf V \mathbf \Sigma^T \mathbf U^T \mathbf {U \Sigma V}^T \mathbf c = \mathbf c^T \mathbf V \mathbf{\Sigma}^2 \mathbf{V}^T \mathbf c = \mathbf y^T \mathbf{\Sigma}^2 \mathbf{y} = \mathbf y^T \mathbf y##

where ##\mathbf{V}^T \mathbf c := \mathbf y##, and of course ##\big \Vert \mathbf c \big \Vert_2^2 = \big \Vert \mathbf{V}^T \mathbf c \big \Vert_2^2 = \big \Vert \mathbf y \big \Vert_2^2##

- - - - - - - -
##^{**}## To be more accurate: it is possible that the squared singular values by which the non-zero elements of ##\mathbf y## get scaled are not all one -- the exact constraint we need to observe, illustrated numerically just after this footnote, is

## \mathbf y^T \mathbf{\Sigma}^2 \mathbf{y} = \sigma_1 ^2 y_1^2 + \sigma_2^2 y_2^2 +... + \sigma_n ^2 y_n^2 = y_1^2 + y_2^2 +... + y_n^2 =\mathbf y^T \mathbf y##
- - - - - - - -
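
Here is a small numerical illustration of this footnote (my own example, assuming NumPy): the ##2\times 2## matrix below satisfies ##\mathbf{XCX}^T = \mathbf C## even though none of its singular values equals ##1##; the constraint above is met by compensation between singular values larger and smaller than one.
[code]
import numpy as np

c = np.array([1.0, 1.0]) / np.sqrt(2.0)
C = np.outer(c, c)

X = np.array([[2.0, -1.0],
              [1.0,  0.0]])                # X c = c, so X C X^T = C

U, s, Vt = np.linalg.svd(X)
y = Vt @ c                                 # y = V^T c in the notation above

print(np.allclose(X @ C @ X.T, C))         # True
print(s)                                   # approx [2.414, 0.414]: no singular value is 1
print(np.isclose((s**2) @ (y**2), y @ y))  # True: sum sigma_i^2 y_i^2 = sum y_i^2
[/code]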

There's also a lot of flexibility in determining ##\mathbf U##. If ##\mathbf X## is full rank, you can also tease out the constraint that, ##\mathbf c ^T \mathbf c = \mathbf c^T \mathbf U \mathbf{\Sigma}^{-2} \mathbf{U}^T \mathbf c = \mathbf w^T \mathbf{\Sigma}^{-2} \mathbf w = \mathbf w^T \mathbf w##, where ##\mathbf{U}^T \mathbf c := \mathbf w##.

Losing some generality, here's one particularly easy approach for creating a legal ##\mathbf X## (sketched in code below): write ##\mathbf C = \mathbf {PDP}^T##, where ##\mathbf D## is zero except that its top-left entry is ##trace\big(\mathbf C\big)##. From here, ensure that ##y_k = 0## for all ##k \geq 2## and that ##\sigma_1 = 1##, then set ##\mathbf U := \mathbf P##.
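
A minimal numerical sketch of this recipe (my own code, assuming NumPy; instead of eigendecomposing ##\mathbf C##, it builds ##\mathbf P## directly as an orthogonal matrix whose first column is ##\mathbf c/\Vert\mathbf c\Vert##, which is exactly what ##\mathbf C = \mathbf{PDP}^T## provides):
[code]
import numpy as np

rng = np.random.default_rng(1)
n = 4
c = rng.standard_normal(n)
C = np.outer(c, c)

# Orthogonal P whose first column is c/||c|| (via QR of [c | random columns]).
M = np.column_stack([c, rng.standard_normal((n, n - 1))])
P, _ = np.linalg.qr(M)
P *= np.sign(P[:, 0] @ c)                # fix the sign so that P[:, 0] = +c/||c||

# Recipe: sigma_1 = 1 (other singular values arbitrary and positive), U := P,
# and V chosen so that y = V^T c vanishes beyond its first entry; taking V := P
# does exactly that, since P^T c = ||c|| e_1.
sigma = np.concatenate(([1.0], rng.uniform(0.5, 2.0, n - 1)))
U, V = P, P
X = U @ np.diag(sigma) @ V.T

print(np.allclose(X @ C @ X.T, C))       # True
print(np.allclose(X, np.eye(n)))         # False for generic sigma: a nontrivial solution
[/code]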

Again, this is very much an underdetermined problem, so there are a lot of different ways to approach it.
 

1. What is a matrix equation?

A matrix equation is an equation that involves matrices, which are rectangular arrays of numbers or variables. These equations are used to represent and solve systems of linear equations.

2. How do you know if a matrix equation can be solved?

A matrix equation with a square coefficient matrix (i.e. one with the same number of rows and columns) has a unique solution exactly when that matrix has a non-zero determinant. If the determinant is zero, the equation has either no solution or infinitely many solutions, depending on the right-hand side.

3. What does it mean if a matrix equation has no solution?

If a matrix equation has no solution, it means that the system of equations it represents is inconsistent: the equations are contradictory and cannot all be satisfied by any choice of values.

4. Can a matrix equation have multiple solutions?

Yes, a matrix equation can have multiple solutions. For a linear system this happens when the system is consistent but the coefficient matrix is singular, in which case there are infinitely many sets of values that satisfy all of the equations.

5. How do you solve a matrix equation?

To solve a matrix equation, you can use methods such as Gaussian elimination, Cramer's rule, or matrix inversion. These methods manipulate the matrix into a simpler form (such as row echelon form) from which the variables can be read off.
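
For the standard case A x = b with a square, invertible A, here is a short illustration (my own sketch, assuming NumPy, whose solver is based on an LU/Gaussian-elimination factorization):
[code]
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)     # solves A x = b without explicitly forming A^{-1}
print(x)                      # [2. 3.]
print(np.allclose(A @ x, b))  # True
[/code]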
