MHB A and solution are known, find B matrix

  • Thread starter: TomSavage
  • Tags: Matrix
TomSavage
I have the matrix
$$A = \begin{pmatrix}
1 & 2 & -1\\
2 & -1 & 1
\end{pmatrix}$$
and I am asked whether there is any matrix $B$ such that
$$AB = \begin{pmatrix}
1 & -1\\
1 & 1
\end{pmatrix}$$

I assume that this is not possible, because if we follow the rule that $Ax=B$ implies $x = A^{-1}B$, and since the matrix $A$ is singular it cannot be inverted, this operation is impossible. Am I wrong in thinking this?
 
TomSavage said:

Hi TomSavage! Welcome to MHB! ;)

I'm afraid that's not quite true.

We can solve
$$\begin{pmatrix}
1 &2 &-1\\
2 &-1 &1
\end{pmatrix}\begin{pmatrix}
b_{11}\\b_{21}\\b_{31}
\end{pmatrix} = \begin{pmatrix}
1\\1
\end{pmatrix}$$
can't we?

It is underdetermined, so it has infinitely many solutions.
We can solve it with Gaussian elimination and pick for instance:
$$\begin{pmatrix}
b_{11}\\b_{21}\\b_{31}
\end{pmatrix} = \begin{pmatrix}
3/5\\1/5\\0
\end{pmatrix}$$

Similarly we can solve:
$$\begin{pmatrix}
1 &2 &-1\\
2 &-1 &1
\end{pmatrix}\begin{pmatrix}
b_{12}\\b_{22}\\b_{32}
\end{pmatrix} = \begin{pmatrix}
-1\\1
\end{pmatrix}$$

Combining the result with the first solution we get:
$$\begin{pmatrix}
1 &2 &-1\\
2 &-1 &1
\end{pmatrix}\begin{pmatrix}
3/5 &0 \\1/5 &0 \\0 & 1
\end{pmatrix} = \begin{pmatrix}
1 & -1\\1 & 1
\end{pmatrix}$$

There you go, a solution for $B$.
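If you want to double-check this numerically, here is a minimal NumPy sketch (my own check, not part of the original reply, assuming NumPy is installed) that verifies $AB$ really is the target matrix:

```python
import numpy as np

# A is the given 2x3 matrix; B is the 3x2 matrix built column by column above.
A = np.array([[1, 2, -1],
              [2, -1, 1]], dtype=float)
B = np.array([[3/5, 0],
              [1/5, 0],
              [0,   1]], dtype=float)

print(A @ B)
# [[ 1. -1.]
#  [ 1.  1.]]
```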

More generally, we can find a solution in the least-squares sense with:
$$B=A^+ \begin{pmatrix}
1 & -1\\1 & 1
\end{pmatrix}$$
where $A^+$ is the Moore-Penrose pseudoinverse.
The Wikipedia article on the pseudoinverse also explains how we can use it to find all solutions.
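As an illustration of that last remark (my own sketch, not from the original reply), NumPy's `np.linalg.pinv` computes $A^+$ directly. Since $A$ here has full row rank, $AA^+=I$, so $B=A^+C$ is not just a least-squares solution but an exact one:

```python
import numpy as np

A = np.array([[1, 2, -1],
              [2, -1, 1]], dtype=float)
C = np.array([[1, -1],
              [1, 1]], dtype=float)

# Moore-Penrose pseudoinverse; this B is the minimum-norm solution of A B = C.
B = np.linalg.pinv(A) @ C
print(B)
print(np.allclose(A @ B, C))  # True, because A has full row rank
```

Any other exact solution differs from this $B$ by a matrix whose columns lie in the null space of $A$.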
 
Klaas van Aarsen said:

Hey, I now understand mostly everything, but what I don't know is how you got the solutions for $b_{31}$ and $b_{32}$. Did you get the other four entries from Gauss-Jordan elimination and then just put in the remaining two so that they satisfy the equations, or is there a direct mathematical method to get them? And wouldn't putting the $3\times 1$ solution vectors into a matrix alongside the $2\times 3$ matrix mess things up?
 
TomSavage said:

Let me show you how to solve:
$$\begin{pmatrix}
1 &2 &-1\\
2 &-1 &1
\end{pmatrix}\begin{pmatrix}
b_{11}\\b_{21}\\b_{31}
\end{pmatrix} = \begin{pmatrix}
1\\1
\end{pmatrix}$$
with Gaussian elimination.

Take the first row, multiply it by $-2$, and add it to the second row.
Consequently we get a new system that has the same solutions as the previous system.
The result is:
$$\begin{pmatrix}
1 &2 &-1\\
0 &-5 &3
\end{pmatrix}\begin{pmatrix}
b_{11}\\b_{21}\\b_{31}
\end{pmatrix} = \begin{pmatrix}
1\\-1
\end{pmatrix}$$

Starting from the bottom, we pick $b_{31}=0$, and see what happens afterwards.
To solve the second equation, we then need $b_{21}=\frac 15$.
Then we solve the first equation, using the values we found so far, and we find $b_{11}=\frac 35$.

There you go. We found one of the infinitely many solutions: $(b_{11},b_{21},b_{31})=(\frac 35, \frac 15, 0)$.

Alternatively, we might have started with $b_{21}=0$.
Then we need $b_{31}=-\frac 13$ to solve the second equation.
And finally $b_{11}=\frac 23$ to solve the first equation, giving the solution $(b_{11},b_{21},b_{31})=(\frac 23, 0, -\frac 13)$.
Note that these two solutions together 'span' the solution space.
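If it helps, here is a small SymPy sketch (my own illustration; the variable names are just placeholders) that returns the whole one-parameter family of solutions for the first column at once, with $b_{31}$ as the free parameter:

```python
import sympy as sp

b11, b21, b31 = sp.symbols('b11 b21 b31')
A = sp.Matrix([[1, 2, -1],
               [2, -1, 1]])
rhs = sp.Matrix([1, 1])

# linsolve returns the full affine family of solutions of A*b = rhs,
# parametrized by the free symbol b31.
sol = sp.linsolve((A, rhs), b11, b21, b31)
print(sol)
# The solution set is {(3/5 - b31/5, 3*b31/5 + 1/5, b31)}:
# b31 = 0    gives (3/5, 1/5, 0)
# b31 = -1/3 gives (2/3, 0, -1/3)
```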
 