Solutions of Matrix Equation A X B^T = C

TL;DR Summary
For given matrices ##A##, ##B## and ##C##, solve the equation $$AXB^T = C$$ for ##X## in terms of the LU decompositions of ##A## and ##B##. When are there no solutions?
I know that every ##m×n## matrix ##D## can be expressed in the form

$$P_1DP_2 = LU$$

where ##P_1## and ##P_2## are permutation matrices, ##L## is a unit lower triangular matrix, and ##U## has the form

$$\begin{bmatrix} U_1 & U_2 \\ 0 & 0 \end{bmatrix}$$

where ##U_1## is an invertible upper triangular matrix (i.e. ##U_1## has nonzero diagonal entries) with the same rank as ##D##.
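For concreteness, here is a small numerical sketch of such a factorization using SciPy. Note that `scipy.linalg.lu` only does row (partial) pivoting, so it returns the one-sided form ##A = PLU## rather than the two-sided ##P_1 A P_2 = LU## above, but it shows the shape of the factors:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

# SciPy convention: A = P @ L @ U, with P a permutation matrix,
# L unit lower triangular, U upper triangular.
P, L, U = lu(A)

assert np.allclose(P @ L @ U, A)
assert np.allclose(np.tril(L), L)       # L is lower triangular
assert np.allclose(np.diag(L), 1.0)     # ... with unit diagonal
assert np.allclose(np.triu(U), U)       # U is upper triangular
```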

Furthermore, some useful properties of permutation matrices include ##P_1^{-1}=P_1^T##.

Let the LU decompositions of ##A## and ##B## be

$$P_1 A P_2 = L_A U_A, \quad Q_1 B Q_2 = L_B U_B.$$

Then we can write the matrix equation that we are supposed to solve as follows:

$$P_1^T L_A U_A P_2^T X Q_2 U_B^T L_B^T Q_1 = C.$$

My plan was to try writing this equation in the form ##GY=H## for known ##G## and ##H##, and unknown ##Y## (without assuming that ##U_A## and ##U_B## are invertible). This idea leads to

$$U_A P_2^T X Q_2 U_B^T = L_A^{-1} P_1 C Q_1^T (L_B^T)^{-1} .$$

$$G=U_A, \quad H = L_A^{-1} P_1 C Q_1^T (L_B^T)^{-1}, \quad Y=P_2^T X Q_2 U_B^T .$$
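To check this reduction numerically, here is a sketch with made-up random matrices (using SciPy's one-sided convention ##A = P_1 L_A U_A##, so the permutations land slightly differently than in the two-sided form above). The point is that ##H## can be formed with triangular solves instead of explicit inverses:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X_true = rng.standard_normal((3, 3))
C = A @ X_true @ B.T

# SciPy convention: A = P1 @ L_A @ U_A (row pivoting only)
P1, L_A, U_A = lu(A)
Q1, L_B, U_B = lu(B)

# With this convention, L_A U_A X U_B^T L_B^T = P1^T C Q1, so
# H = L_A^{-1} (P1^T C Q1) (L_B^T)^{-1}, formed by two triangular solves.
H = solve_triangular(L_A, P1.T @ C @ Q1, lower=True, unit_diagonal=True)
H = solve_triangular(L_B, H.T, lower=True, unit_diagonal=True).T

# G Y = H with G = U_A and (here) Y = X U_B^T
assert np.allclose(U_A @ X_true @ U_B.T, H)
```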

Am I on the right track? Where do I go from here?

I guess it's ok.

1.) If it were me, I'd clear out the permutation matrices from the get-go (WLOG assume they aren't needed) and use the associativity of matrix multiplication, i.e. in effect consider

##P_1^T L_A U_A P_2^T X Q_2 U_B^T L_B^T Q_1 = C##
## L_A U_A \big(P_2^T X Q_2\big)U_B^T L_B^T = P_1 C Q_1^T,## that is, ##L_A U_A \, Y \, U_B^T L_B^T = Z## where ##Y = P_2^T X Q_2## and ##Z = P_1 C Q_1^T##

2.) I'm not sure if this is a homework problem. If it is, you are probably constrained in what you can do here algebraically -- and the problem would belong in the homework forum.

If it is not a homework problem, then since this thread is tagged A, I'd strongly suggest using the Kronecker product at this stage, and in particular the identity

##\text{vec}\big(\mathbf {AXB}^T\big) = \big(\mathbf B \otimes \mathbf A\big)\text{vec}\big(\mathbf {X}\big)=\text{vec}\big(\mathbf {C}\big)##

where the vec operator stacks a matrix's columns one at a time on top of each other.
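This identity is easy to verify numerically; in NumPy the column-stacking vec operator is just flattening in column-major (`'F'`) order. A quick sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))   # A: 2x3
X = rng.standard_normal((3, 4))   # X: 3x4
B = rng.standard_normal((5, 4))   # B: 5x4, so A @ X @ B.T is 2x5

# column-stacking vec operator
vec = lambda M: M.flatten(order="F")

lhs = vec(A @ X @ B.T)
rhs = np.kron(B, A) @ vec(X)      # (B ⊗ A) vec(X)
assert np.allclose(lhs, rhs)
```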

From here, if your field is ##\mathbb R## or ##\mathbb C##, you can use the SVD and how well Kronecker products interact with it. (Admittedly, this basically deprecates the LU factorization.)
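As a sketch of that route over ##\mathbb R##: a natural candidate is ##X = A^{+} C (B^T)^{+}## with Moore-Penrose pseudoinverses (computed via the SVD under the hood by `np.linalg.pinv`), and the equation is exactly solvable precisely when this ##X## reproduces ##C##:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 3))
X_true = rng.standard_normal((3, 3))
C = A @ X_true @ B.T                     # consistent by construction

# minimum-norm least-squares candidate via pseudoinverses (SVD-based)
X = np.linalg.pinv(A) @ C @ np.linalg.pinv(B.T)

# since C was consistent, this candidate solves the equation exactly
assert np.allclose(A @ X @ B.T, C)
```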

Alternatively, if you are willing to play around a bit with dimensions and zero-pad some of the matrices so that all the matrix multiplications are legal, I'd finish with the mixed-product property of Kronecker products, i.e. use the fact that
##\big(\mathbf A \otimes \mathbf B \big)\big(\mathbf M \otimes \mathbf N\big) = \big(\mathbf{AM} \big)\otimes \big(\mathbf{BN}\big)##

then use the fact that a lower triangular matrix ##\otimes## a lower triangular matrix gives a lower triangular matrix (and the same phenomenon for upper triangular matrices),
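Both facts (the mixed-product property and triangularity being preserved by ##\otimes##) are quick to check numerically, e.g.:

```python
import numpy as np

rng = np.random.default_rng(3)
L1 = np.tril(rng.standard_normal((3, 3)))
L2 = np.tril(rng.standard_normal((2, 2)))
U1 = np.triu(rng.standard_normal((3, 3)))
U2 = np.triu(rng.standard_normal((2, 2)))

# mixed-product property: (A ⊗ B)(M ⊗ N) = (AM) ⊗ (BN)
assert np.allclose(np.kron(L1, L2) @ np.kron(U1, U2),
                   np.kron(L1 @ U1, L2 @ U2))

# lower ⊗ lower is lower triangular; upper ⊗ upper is upper triangular
assert np.allclose(np.tril(np.kron(L1, L2)), np.kron(L1, L2))
assert np.allclose(np.triu(np.kron(U1, U2)), np.kron(U1, U2))
```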

i.e. the final equation should become
##\text{vec}\big(\mathbf {AXB}^T\big) = \big(\mathbf L_\text{Big} \mathbf U_\text{Big}\big)\text{vec}\big(\mathbf {X}\big)=\text{vec}\big(\mathbf {C}\big)##

which is how I'd like to read the problem off.
- - - -
Note the Kronecker product here is very natural, since it is a (tractable) form of tensor product between matrices: what I'm suggesting is, in effect, to linearize a bilinear map and convert your problem from fragments of different LU decompositions into one clean, big LU decomposition. For more info on Kronecker products (and the closely related Kronecker sum), see this post:
https://www.physicsforums.com/threa...vector-subspace-of-r-2-2.929266/#post-5868072

which includes an (indirect) link to a free chapter on them

1. What is a matrix equation?

A matrix equation is an equation that involves matrices, which are rectangular arrays of numbers or variables. In a matrix equation, the goal is to find the values of the variables that satisfy the equation.

2. How is a matrix equation solved?

A matrix equation is solved by manipulating the matrices with operations such as addition, subtraction, multiplication, and (where the matrices involved are invertible) inversion, in order to isolate the unknown. The process is analogous to solving scalar algebraic equations, except that matrix multiplication is not commutative and not every matrix has an inverse.

3. What is the purpose of solving a matrix equation A X B^T = C?

The purpose of solving a matrix equation A X B^T = C is to find the values of the variables that satisfy the equation. This can be useful in various applications, such as solving systems of linear equations, solving optimization problems, and performing transformations in geometry.

4. What are the requirements for a matrix equation A X B^T = C to have a solution?

First, the dimensions must be compatible: if ##A## is ##m \times n## and ##B## is ##q \times p##, then ##X## must be ##n \times p## and ##C## must be ##m \times q##. Even then, a solution need not exist: over ##\mathbb R## or ##\mathbb C##, the equation ##AXB^T = C## is solvable exactly when ##A A^{+} C (B^T)^{+} B^T = C##, where ##{}^{+}## denotes the Moore-Penrose pseudoinverse; equivalently, the column space of ##C## must lie in the column space of ##A## and the row space of ##C## in the row space of ##B^T##.

5. Can a matrix equation A X B^T = C have multiple solutions?

Yes. If ##A## and ##B## are square and invertible, the solution ##X = A^{-1} C (B^T)^{-1}## is unique. In general, though, the equation may have no solution, exactly one, or infinitely many: if ##N## is any matrix with ##A N B^T = 0## (and nontrivial such ##N## exist whenever ##A## or ##B## is rank-deficient), then ##X + N## is another solution whenever ##X## is one.
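To illustrate the non-uniqueness, here is a tiny made-up example with a rank-deficient ##A##:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])            # rank-deficient (rank 1)
B = np.eye(2)
X1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
# X2 differs from X1 only in the direction killed by A (its second row)
X2 = X1 + np.array([[0.0, 0.0],
                    [5.0, -7.0]])
C = A @ X1 @ B.T

# two distinct X's produce the same C
assert np.allclose(A @ X2 @ B.T, C)
assert not np.allclose(X1, X2)
```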
