Matrices, Transposes, and Inverses. - Help?

  • Thread starter cAm
  • Start date
  • Tags
    Matrices
  • #1
cAm

Say you have matrices A and Y, and a matrix of coefficients C, not necessarily square, with A * C = Y. To solve this I would think to multiply both sides by the inverse of A, but since A is not necessarily square that won't work. I've seen a solution that I have some questions about: it multiplies both sides by ATranspose, then multiplies both sides by (ATranspose * A)^-1. I think I understand the multiplication by ATranspose, since it produces a square matrix. Then I guess I understand the multiplication by the inverse of it, but I think I read somewhere that it is particularly easy to take the inverse of ATranspose * A. I need the simplest method possible, because I have to code a program in C++ to solve this equation for me. I can figure out the code if I can find a step-by-step process for finding the inverse, but I am really confused about taking inverses in general, and about whether it's easier if you multiply by the transpose first.

I'd appreciate any help. If possible I need this fairly soon, but even if the deadline passes I still want to understand for myself how it's done.
 
  • #2
It is not easier. The reason this is done is that A may not have an inverse, either because A is square and singular or because A is not square. In many of those situations transpose(A)*A does have an inverse. Note that this process does not always solve the original equation, but gives the "best" answer we can get: it finds the C that minimizes ||A*C - Y||^2 (the least-squares solution). Say we wanted to solve
-a+b=-1
b=.001
a+b=1
in matrix form A*C=Y
A={{-1,1},{0,1},{1,1}}
Y={-1,.001,1}
we find inverse(transpose(A)*A)*transpose(A)*Y={1,.001/3}
This does not solve the original, but is a best guess given our information.
 
  • #3
Gaussian elimination
 
  • #4
neurocomp2003 said:
Gaussian elimination

bad deal for computer code, and bad deal for dealing with summation.


Basically what I have to do is turn THIS: http://mathworld.wolfram.com/LeastSquaresFittingPolynomial.html

into C++ code...

edit: and thanks lurflurf, that helps a little bit, though I'm still not sure how I can turn it all into organized logical steps.
 
  • #5
cAm said:
bad deal for computer code, and bad deal for dealing with summation.
Gaussian elimination is very straightforward to code; what is your trouble? What do you mean about summations?
 
  • #6
cAm said:
Say you have matrices A and Y, and a matrix of coefficients C, not necessarily square, with A * C = Y. To solve this I would think to multiply both sides by the inverse of A, but since A is not necessarily square that won't work. I've seen a solution that I have some questions about: it multiplies both sides by ATranspose, then multiplies both sides by (ATranspose * A)^-1. I think I understand the multiplication by ATranspose, since it produces a square matrix. Then I guess I understand the multiplication by the inverse of it, but I think I read somewhere that it is particularly easy to take the inverse of ATranspose * A. I need the simplest method possible, because I have to code a program in C++ to solve this equation for me. I can figure out the code if I can find a step-by-step process for finding the inverse, but I am really confused about taking inverses in general, and about whether it's easier if you multiply by the transpose first.

I'd appreciate any help. If possible I need this fairly soon, but even if the deadline passes I still want to understand for myself how it's done.
To determine (square) matrix inverses via a computer algorithm, try Cramer's Rule.
It just involves evaluating determinants of square matrices, although the method requires evaluating several different determinants for each matrix inversion.
The determinant-evaluation algorithm can be programmed once as a "function" (or "subroutine", etc.) and then this same routine called over and over again.
 
Last edited:
  • #7
lurflurf said:
Gaussian elimination is very straightforward to code; what is your trouble? What do you mean about summations?


If you look through the link I posted above, you'll see there are many sums from i = 1 to n, and I'm not really sure how to handle those using Gaussian elimination.

I'll look into that, geosonel, thanks.
 
  • #8
I thought Cramer's rule was bad compared to Gaussian... you still need the inverse of a square matrix... the idea (I can't remember the process, but it's on the website you showed) is to make a square matrix so you can take the inverse... let me use the y = Ax notation, not the y = Xa notation they have there...

So how do you make a square matrix from an m x n matrix A? Form A^T*A. Thus what they have there is
to multiply both sides by A^T:
A^T*y = A^T*A*x ... and then apply the inverse (that is, apply Gaussian elimination or whichever method you choose). Thus you will be calculating
x = (A^T*A)^-1*A^T*y to solve for x... and hence you still need the inverse of a square matrix with this method...
 
  • #9
neurocomp2003 said:
I thought Cramer's rule was bad compared to Gaussian... you still need the inverse of a square matrix... the idea (I can't remember the process, but it's on the website you showed) is to make a square matrix so you can take the inverse... let me use the y = Ax notation, not the y = Xa notation they have there...

So how do you make a square matrix from an m x n matrix A? Form A^T*A. Thus what they have there is
to multiply both sides by A^T:
A^T*y = A^T*A*x ... and then apply the inverse (that is, apply Gaussian elimination or whichever method you choose). Thus you will be calculating
x = (A^T*A)^-1*A^T*y to solve for x... and hence you still need the inverse of a square matrix with this method...
You mention several interesting topics:
(in all that follows, matrix A is assumed to be square and non-singular)
Question #1: how is Cramer's Rule, applied to the system Ax = y, related to the ACTUAL inverse matrix A^(-1)?
Answer #1: in fact, Cramer's Rule is one of two related methods for matrix inversion using only determinants and related constructs. One method (Cramer's) gives the solution "x" to Ax = y directly, without explicitly generating A^(-1). The other, the classical adjoint (adjugate) method, generates the inverse matrix A^(-1) using only the determinant of A and determinant-related constructs called "cofactors". These cofactors are signed determinants of certain submatrices of the original A matrix. Each method has its place in a mathematical toolkit, depending on what's required:
Cramer's Rule ---- directly gives the solution "x" to Ax = y
Classical adjoint (adjugate) method ---- gives the inverse matrix A^(-1)

Question #2: which is better, Cramer's Rule or Gaussian Elimination?
Answer #2: of course, one first has to ask what is meant by "better"... easier to program, faster computation, more accurate?
It's difficult to generalize; however, many might agree with the following:
Cramer's Rule is "better" for smaller matrices, say 4x4 and smaller.
Gaussian Elimination is "better" for larger matrices, say bigger than 4x4.
Of course, your mileage may vary.
 
Last edited:
  • #10
cAm said:
If you look through the link I posted above, you'll see there are many sums from i = 1 to n, and I'm not really sure how to handle those using Gaussian elimination.

I'll look into that, geosonel, thanks.
Those sums are linear equations, so Gaussian elimination applies. You can choose the sum equations or the matrix equations, as they are equivalent; you do not need both. How large a system do you intend to use? I would recommend against Cramer's rule unless the systems are very small. What degree polynomial are you fitting? Also, writing determinant code for large matrices is not easier than Gaussian elimination anyway.
All that is needed is to find
[tex](A^tA)^{-1}A^tY[/tex]
so get the inverse with some Gaussian-elimination-like algorithm.
The trick is to think the procedure out carefully: at each step, what should the program do next? Also, what types will you use: float? double? a custom type? Arrays or pointers? You could also sidestep the inverse and solve the system with something like LU decomposition.
 
  • #11
lurflurf said:
What degree polynomial are you fitting? Also writing determinant code for large matrices is not easier than gaussian elimination anyway.
Agreed, Cramer's Rule does not scale "nicely" to larger matrices.
However, for a 3x3, the following line of code, implemented in a callable function, is all that's needed:

Det(m(i,j)) = m11*(m22*m33 - m32*m23) - m21*(m12*m33 - m32*m13) + m31*(m12*m23 - m22*m13)

This function would be called 4 times (each time with a different array m(i,j)) to compute the solution "x" to Ax = y, where A is 3x3 and non-singular.
 
Last edited:
  • #12
geosonel said:
agreed, Cramer's Rule does not scale "nicely" for larger matrices.
however, for a 3x3, the following line of code implemented in a callable function is all that's needed:
If 4x4 and smaller matrices are all that is needed, one might as well just hard-code the general inverse formula, or use Cramer's rule; either way the problem is trivial at that size.
 
  • #13
well, I'm looking at 50+ degree polynomials... what method do you think would work best for those?

(yeah, this is a bit of a crazy project...)
 
  • #14
cAm said:
well, I'm looking at 50+ degree polynomials... what method do you think would work best for those?

(yeah, this is a bit of a crazy project...)
Gaussian Elimination or a variation (Gauss-Jordan, partial pivots, full pivots, etc.). Polynomials of degree 50-100 should not be bad. You are using a computer, not doing it by hand. Once you have your functions in order, the only limitations on the degree will be running time and memory usage. How will you get the data into the program? I hope not by keying it in.
Gaussian Elimination MathWorld entry
http://mathworld.wolfram.com/GaussianElimination.html
A comparison of methods
http://ceee.rice.edu/Books/CS/chapter5/cost6.html
 
  • #15
lurflurf said:
Gaussian Elimination or a variation (Gauss-Jordan, partial pivots, full pivots, etc.). Polynomials of degree 50-100 should not be bad. You are using a computer, not doing it by hand. Once you have your functions in order, the only limitations on the degree will be running time and memory usage. How will you get the data into the program? I hope not by keying it in.
Gaussian Elimination MathWorld entry
http://mathworld.wolfram.com/GaussianElimination.html
A comparison of methods
http://ceee.rice.edu/Books/CS/chapter5/cost6.html

Thanks.

And no, heh, no way in hell are we keying in the data. We have a point scanner scanning in points in 3-space, and our program finds the edges of the object and fits the curves of the edges via regression.

edit: that's great, it takes 2.9889 x 10^138 centuries to do a 100x100 matrix using Cramer's rule.
 
Last edited:

What is a matrix?

A matrix is a rectangular array of numbers, symbols, or expressions arranged in rows and columns. It is used to represent and perform mathematical operations on a set of data or equations.

What is a transpose of a matrix?

The transpose of a matrix is a new matrix in which the rows and columns of the original matrix are swapped. This means that the first row of the original matrix becomes the first column of the new matrix, the second row becomes the second column, and so on.

What is an inverse of a matrix?

The inverse of a matrix is a new matrix that when multiplied by the original matrix, produces an identity matrix. In other words, the inverse of a matrix "cancels out" the original matrix, similar to how dividing by a number cancels out its multiplication.

Why are matrices, transposes, and inverses important in science?

Matrices, transposes, and inverses are important in science because they provide a powerful tool for solving systems of equations, representing and manipulating data, and performing transformations in various fields such as physics, engineering, and computer science.

How can I find the transpose and inverse of a matrix?

To find the transpose of a matrix, simply swap the rows and columns of the original matrix. To find the inverse of a matrix, you can use various methods such as Gaussian elimination, determinants, or using matrix operations. There are also online calculators and software that can quickly find the transpose and inverse of a matrix for you.
