# Solve Matrix System: Help with I, A, b, C

debro5
I have this system :
$$\left( \begin{array}{cccc} A_1 & A_2 & A_3 & \cdots \end{array} \right) \left( \begin{array}{c} b_1 \\ b_2 \\ b_3 \\ \vdots \end{array} \right) = C$$

where the A's are matrices arranged in a row, b is a vector, and C is a matrix. If I know C and the A's, how can I find the b's?

$$\left( \begin{array}{c} b_1 \\ b_2 \\ b_3 \\ \vdots \end{array} \right) = \left( \begin{array}{cccc} A_1^{-1} & A_2^{-1} & A_3^{-1} & \cdots \end{array} \right) C$$

Surely not; this gives another matrix. The A's are square but not necessarily invertible...

You'll have to be a bit more specific about the types of your vectors/matrices.

A's and B are n x n real matrices...

There is no B in what you wrote. But there are (so far undefined) bi's and a C. And currently, that expression is gibberish. On the left hand side, your multiplicand is a 1x? array, and your multiplier is a 1x? array. (Why don't you tell us the number of entries?) But multiplication is only defined when the number of columns in the multiplicand equals the number of rows in the multiplier.

And unless the bi's happen to be nx1 arrays, multiplication of the individual elements of your two big arrays doesn't make sense.
Oh, I've looked at your source code, I think you meant:

$$\left( \begin{array}{c c c} A_1 & A_2 & \cdots \end{array} \right) \cdot \left( \begin{array}{c} b_1 \\ b_2 \\ \vdots \end{array} \right) = C$$

This is better, since this multiplication is defined (assuming each bi is an nx1 array of numbers, C is an nx1 array of numbers, and each ellipsis represents the same number of omitted entries). All you have to do is forget the partitions; this is an ordinary matrix * vector = vector problem. The solution (if one exists) won't be unique, though.

sorry, should have been C...

In fact, the A's represent the gradient of a magnetic field over a given array of current-carrying coils. Each coil carries a current b. The A's are numerically computed, and I already know the final C. So I have to find the b currents...
Hope that helps...

Yes, that is what I meant. But each A is a matrix. So this is not a simple matrix * vector = vector problem; it's more of a sum over i: (matrix i) * (scalar i) = matrix...

(see the addition to my previous post)

Incidentally, if you want to keep it in block form, you can find one block that's invertible and left-multiply by its inverse.

For example, if I have
$$\left( \begin{array}{c c c} A & B & C \end{array} \right) \cdot \left( \begin{array}{c} x \\ y \\ z \end{array} \right) = v$$

and A is invertible, then I can left multiply by its inverse:

$$\left( \begin{array}{c c c} I & A^{-1}B & A^{-1}C \end{array} \right) \cdot \left( \begin{array}{c} x \\ y \\ z \end{array} \right) = A^{-1}v$$

and I can read off the solutions in the same way I would for the ordinary case -- my pivot was the first column, so I can pick any value I want for y and z, and then x is uniquely determined.

If none of A, B, or C are invertible, then you have to do some tricky stuff to stay in the block form.
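The block-elimination step above can be checked numerically. Below is a minimal numpy sketch (with made-up random 2x2 blocks and 2-vectors, chosen only for illustration, not the poster's actual setup): given free choices of y and z, left-multiplying by A's inverse determines x.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 2x2 blocks A, B, C; a random Gaussian matrix is
# almost surely invertible, so A serves as the pivot block.
A, B, C = rng.standard_normal((3, 2, 2))
y = rng.standard_normal(2)   # free choice
z = rng.standard_normal(2)   # free choice
v = rng.standard_normal(2)   # right-hand side

# Left-multiplying (A B C) by A^{-1} gives (I  A^{-1}B  A^{-1}C),
# so x is determined by the free choices of y and z:
Ainv = np.linalg.inv(A)
x = Ainv @ v - Ainv @ B @ y - Ainv @ C @ z

# Check that (x, y, z) really solves the block equation:
print(np.allclose(A @ x + B @ y + C @ z, v))  # True
```

Any other (y, z) pair gives another valid solution, which matches the earlier remark that the solution won't be unique.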

This is a better formulation of the problem:

$$\sum\limits_{i = 1}^N {A_i } b_i \; = \;C$$

...each $$A_i$$ is a matrix...

The solution is not unique, but I'm trying to find the b's that minimize the size of the residual:

$$\left\| C - \sum\limits_{i = 1}^N A_i b_i \right\|$$

That's why you have to say what things are -- I would never have guessed that the bi's were scalars! If you were looking to actually solve that system, then you can turn it into a matrix * vector = vector problem with some rearranging. I'll demonstrate on a smaller example:

If you have the equation
$$x \left( \begin{array}{cc} 1 & 2 \\ 3 & 4 \end{array} \right) + y \left( \begin{array}{cc} 5 & 6 \\ 7 & 8 \end{array} \right) = \left( \begin{array}{cc} 9 & 10 \\ 11 & 12 \end{array} \right)$$

then the matrix structure is irrelevant: it's just an array of 4 numbers, and this can be unfolded into a system of four scalar equations. Refolding them into a matrix * vector = vector product, you have:

$$\left( \begin{array}{cc} 1 & 5 \\ 2 & 6 \\ 3 & 7 \\ 4 & 8 \end{array} \right) \left( \begin{array}{c} x \\ y \end{array} \right) = \left( \begin{array}{c} 9 \\ 10 \\ 11 \\ 12 \end{array} \right)$$

But you seem to suggest that you don't want to solve it, but approximately solve it. Well, you'll have to first come up with a metric on the solution space. I suspect you'll first need to convert the problem into a matrix * vector = vector problem, as I did above. A common way to measure how close a candidate solution is, is simply the length of the error vector:
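The unfolding above can be verified directly with numpy: stack each flattened matrix as a column, flatten the right-hand side the same way, and solve with np.linalg.lstsq (which also minimizes the error when no exact solution exists).

```python
import numpy as np

# The two coefficient matrices and the target from the example.
A1 = np.array([[1, 2], [3, 4]])
A2 = np.array([[5, 6], [7, 8]])
C  = np.array([[9, 10], [11, 12]])

# Stack each flattened matrix as a column of M, so M @ [x, y] = vec(C).
# (Row-major flattening works as long as all three use the same order.)
M = np.column_stack([A1.ravel(), A2.ravel()])
c = C.ravel()

# Least squares handles both the exact and the over-determined case.
coeffs, residuals, rank, _ = np.linalg.lstsq(M, c, rcond=None)
print(coeffs)  # approximately [-1.  2.], i.e. x = -1, y = 2
```

Indeed, -1 * A1 + 2 * A2 reproduces C exactly, so this small example happens to have an exact solution.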

If we're trying to solve Ax = b, and we have a candidate solution y, then the (squared) error is given by

$$e(y)^2 = ||Ay - b||^2 = (Ay - b)^T(Ay - b) = y^T A^T A y - b^T A y - y^T A^T b + b^T b = y^T A^T A y - 2 y^T A^T b + b^T b$$

To minimize, we set the derivative to zero:

$$0 = 2 A^T A y - 2 A^T b$$

which means that the "best" solution to Ax = b is the actual solution to

$$A^T A y = A^T b$$
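As a quick sanity check of the normal equations, here is a small numpy sketch on a made-up over-determined 3x2 system (not from the thread); solving A^T A y = A^T b agrees with what np.linalg.lstsq computes directly.

```python
import numpy as np

# Over-determined system A y ~ b with no exact solution.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

# Solve the normal equations  A^T A y = A^T b.
y_normal = np.linalg.solve(A.T @ A, A.T @ b)

# np.linalg.lstsq minimizes ||A y - b|| directly and agrees.
y_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(y_normal)                        # approximately [1/3, 1/3]
print(np.allclose(y_normal, y_lstsq))  # True
```

In practice lstsq (or a pseudoinverse) is preferred over forming A^T A explicitly, since squaring A worsens its conditioning.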

Yes, thank you. I didn't think of the unfolding procedure. This will help.
The part about the error I already knew, but that's okay. So to find the b's, I could now try to invert A, or find its pseudoinverse?

## 1. What is a matrix system?

A matrix system refers to a set of equations represented in matrix form. It is used to solve systems of linear equations by using matrix operations such as addition, subtraction, multiplication, and inversion.

## 2. How do I solve a matrix system?

To solve a matrix system, you can use various methods such as Gaussian elimination, Cramer's rule, or matrix inversion. These methods involve manipulating the matrix equations to isolate the variables and solve for their values.

## 3. What is the role of the matrix I in solving a matrix system?

The matrix I, also known as the identity matrix, is a square matrix with 1's on the main diagonal and 0's everywhere else. Multiplying any matrix by I leaves it unchanged (AI = IA = A). In solving a matrix system, the identity matrix appears when inverting the coefficient matrix: for example, row-reducing the augmented matrix (A | I) until the left half becomes I leaves A⁻¹ on the right.

## 4. How do I represent a matrix system in Python?

In Python, you can use the numpy library to represent a matrix system as a numpy array. You can also use the built-in functions in numpy to perform matrix operations and solve the system. For example, numpy.linalg.solve() can be used to solve a matrix system.
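For instance, here is a minimal example (with a made-up 2x2 system, for illustration only) using numpy.linalg.solve:

```python
import numpy as np

# Solve  x + 2y = 5,  3x + 4y = 11  as A @ [x, y] = b.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 11.0])

solution = np.linalg.solve(A, b)
print(solution)  # [1. 2.]  -> x = 1, y = 2
```

np.linalg.solve requires a square, invertible A; for rectangular or singular systems, np.linalg.lstsq returns the least-squares solution instead.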

## 5. Are there any real-life applications of solving matrix systems?

Yes, solving matrix systems has various real-life applications in fields such as engineering, physics, economics, and computer graphics. It is used to model and solve complex systems of equations in these fields. For example, in engineering, matrix systems are used to analyze electric circuits and solve for unknown currents and voltages.
