Inverse of Upper Triangular Matrix

In summary: the thread asks for a general formula for the inverse of an upper triangular matrix. The answer that emerges: factor the normal matrix A by Cholesky decomposition, invert the triangular factor column by column with forward substitution, then use C.A^-1.C' to form the normalised residuals (the w statistic), eliminate the worst offending observation from the network and re-solve.
  • #1
Physics_wiz
Does anyone know a formula to find the inverse of an upper triangular matrix of dimension n (with a reference preferred)?
 
  • #2
Isn't row reduction how you find inverses? As in, augment the matrix with the corresponding identity matrix and row reduce.
 
  • #3
Yes, I understand. However, I am trying to find a general formula for upper triangular matrices, something along the lines of the inverse formula for 2x2 matrices. I remember my linear algebra teacher telling us that formulas like that exist for higher-dimensional matrices.
 
  • #4
Do you remember the analytic formula for the inverse of a matrix from that linear algebra class? Note that the determinant of an upper triangular matrix is just the product of the diagonal components (which are the eigenvalues of the matrix). The cofactors are either 0 or signed determinants of smaller upper triangular matrices.
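A quick numeric check of both facts - a minimal numpy sketch (numpy is my choice of tool here, not something named in the thread):

Code:
import numpy as np

U = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])

print(np.prod(np.diag(U)))  # 48.0 - the product of the diagonal entries...
print(np.linalg.det(U))     # ...matches the determinant (up to rounding)
print(np.linalg.inv(U))     # and the inverse is again upper triangular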
 
  • #5
For those of you, like me, who just want the answer... here it is (Slider142, please read the item at the bottom re: cofactors and why that doesn't work quite so simply in this case, sorry):

My full problem was as follows:

Given a set of observations and a fixed start-point - in this case acoustic ranges, magnetic and true bearings and some angles, from a ship at sea to towed cables behind it - solve for the position in x and y of every other node in a network down the length of those cables. For those familiar with surveying, it's a classic triangulation network, using the variation of co-ordinates method of least squares.

Underlying this is the fact that this network is VERY large, needs to be solved millions of times during a survey and at a minimum, once every few seconds as a boat moves forward acquiring data in real time. There are more than 10,000 observations and more than 1500 nodes to solve.

First, I formed the normal matrix, A (for those interested, 'A' is actually C'WC, where C is the design matrix of observations and W is the weight matrix).

Then, having obtained 'A', I formed 'b' - a simple vector (again, for those interested, 'b' is actually C'Wb, where this second 'b' is just the vector of observed-minus-computed observations for the current estimate of the node positions).

What I need is x, in Ax=b, where x is the vector of corrections I need to make to the positions of the nodes from their currently estimated positions.

Having found x, I will then correct the node positions by this amount, go back to the beginning and re-iterate until such time as the node positions are not changing.
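Put together, the loop described above looks roughly like this dense, toy-scale sketch (build_C, build_W and misclosure are placeholder names of my own for the design matrix, weight matrix and observed-minus-computed vector - an illustration, not the author's code):

Code:
import numpy as np

def adjust_network(positions, build_C, build_W, misclosure, tol=1e-8, max_iter=50):
    """Variation-of-coordinates iteration: form the normal equations,
    solve for the corrections x, apply them, repeat until converged."""
    for _ in range(max_iter):
        C = build_C(positions)        # design matrix (one row per observation)
        W = build_W()                 # weight matrix
        l = misclosure(positions)     # observed minus computed
        A = C.T @ W @ C               # the normal matrix, A
        b = C.T @ W @ l               # the right-hand side, b
        x = np.linalg.solve(A, b)     # corrections to the node coordinates
        positions = positions + x
        if np.max(np.abs(x)) < tol:   # node positions no longer changing
            break
    return positions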

Now, and here's the rub - no observation is ever perfect, and eliminating 'bad' or unacceptably bad observations requires the inverse of A to be found and used.

Bugger!

(again, for those interested, I need C.A^-1.C' to work out the normalised residuals, or w statistic, eliminate the worst offender from the network and re-solve)
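For the curious, one common form of that test is Baarda-style data snooping for uncorrelated observations; the exact formula below is my assumption, since the post only says that C.A^-1.C' is needed (v is the vector of residuals, w_diag the diagonal of the weight matrix W, A_inv the inverse of the normal matrix):

Code:
import numpy as np

def w_statistics(C, w_diag, v, A_inv, sigma0=1.0):
    """Normalised residuals (w statistics), assuming uncorrelated observations.
    Qvv = W^-1 - C A^-1 C' is the cofactor matrix of the residuals; only its
    diagonal is needed here."""
    qvv_diag = 1.0 / w_diag - np.einsum('ij,jk,ik->i', C, A_inv, C)
    return v / (sigma0 * np.sqrt(qvv_diag))

# The observation with the largest |w| is the 'worst offender' to drop
# before re-solving the network.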

In summary:

I needed the least squares solution of a system Ax=b AND the inverse of the sparse, positive definite, symmetric matrix A, to enable data snooping using the w-test - which can only be done by multiplying by the inverse of A. I then need to eliminate the bad data and re-compute the network from the beginning. That's the problem. Within the solution I'll need to compute the inverse of a triangular matrix.

My solution
----------
Start with an initial approximation of the node positions, and set x to zero.

1. First, compute A - which is sparse, positive definite and symmetric - and store it. Computing and storing it is most efficiently done in CSR (Compressed Sparse Row) format - essentially storing only the non-zero values and their indices instead of the whole matrix.
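For anyone who hasn't met CSR before, here is a minimal sketch of what that storage looks like (scipy is my choice of library; the post doesn't name one):

Code:
import numpy as np
from scipy.sparse import csr_matrix

# Toy 4x4 sparse symmetric matrix - only the non-zeros get stored.
A_dense = np.array([[4.0, 1.0, 0.0, 0.0],
                    [1.0, 3.0, 0.0, 2.0],
                    [0.0, 0.0, 5.0, 0.0],
                    [0.0, 2.0, 0.0, 6.0]])
A = csr_matrix(A_dense)

# CSR keeps three flat arrays instead of the full n x n grid:
print(A.data)     # non-zero values, row by row:            [4. 1. 1. 3. 2. 5. 2. 6.]
print(A.indices)  # column index of each value:             [0 1 0 1 3 2 1 3]
print(A.indptr)   # where each row starts in the arrays above: [0 2 5 6 8]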

2. Compute b - store this as a vector.

3. Apply a matrix row re-ordering algorithm to minimise the work required in the next step, Cholesky decomposition. I used Banker's algorithm for this - generally recognised as being better than Reverse Cuthill-McKee (RCM).
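As a rough illustration of what the re-ordering step does, here is a sketch using RCM, simply because that is what ships with scipy - Banker's algorithm (the post's preference) would be applied the same way, as a permutation of rows and columns:

Code:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Toy sparse symmetric system standing in for the normal equations A x = b.
A = csr_matrix(np.array([[4.0, 0.0, 1.0],
                         [0.0, 3.0, 0.0],
                         [1.0, 0.0, 5.0]]))
b = np.array([1.0, 2.0, 3.0])

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_r = A[perm, :][:, perm]                    # permute rows and columns the same way
b_r = b[perm]

x_r = np.linalg.solve(A_r.toarray(), b_r)    # solve the reordered system (dense here)
x = np.empty_like(x_r)
x[perm] = x_r                                # undo the permutation on the result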

4. Decompose A, using Cholesky decomposition - but store the result as a simple vector of the values of a triangular matrix, including zeros. This is because the Cholesky factor does not preserve the sparsity structure of the original matrix (fill-in occurs), so a large amount of manipulation and data 'insertion' (which is time-consuming) can be avoided simply by storing it as a simple vector.
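For reference, the factorisation itself is the standard A = L.L' decomposition; a plain dense version (illustration only - the real thing works on the sparse, flat-vector storage described above) looks like this:

Code:
import numpy as np

def cholesky_lower(A):
    """Return lower triangular L with A = L @ L.T (dense, no pivoting)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

# The 'simple vector' storage the post describes would then just be the lower
# triangle written out row by row, e.g. L[np.tril_indices(L.shape[0])].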

5. Take the matrix stored as a vector in step 4 and put it into CSR format for the next step.

6. Use forward and backward substitution to solve for x, using CSR format throughout, and store the result, x, as a simple vector. It is important to use CSR format for this; otherwise it becomes very time-consuming.
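A minimal sketch of that two-pass solve on CSR storage (my own illustration, not the author's code; the triangular factors are assumed to store their diagonals explicitly):

Code:
import numpy as np
from scipy.sparse import csr_matrix

def forward_sub_csr(L, b):
    """Solve L x = b for lower triangular L in CSR format."""
    x = np.zeros_like(b, dtype=float)
    for i in range(L.shape[0]):
        s, diag = b[i], 1.0
        for k in range(L.indptr[i], L.indptr[i + 1]):
            j = L.indices[k]
            if j < i:
                s -= L.data[k] * x[j]
            elif j == i:
                diag = L.data[k]
        x[i] = s / diag
    return x

def backward_sub_csr(U, b):
    """Solve U x = b for upper triangular U in CSR format."""
    x = np.zeros_like(b, dtype=float)
    for i in range(U.shape[0] - 1, -1, -1):
        s, diag = b[i], 1.0
        for k in range(U.indptr[i], U.indptr[i + 1]):
            j = U.indices[k]
            if j > i:
                s -= U.data[k] * x[j]
            elif j == i:
                diag = U.data[k]
        x[i] = s / diag
    return x

# With the Cholesky factor L of A (A = L.L'), solve A x = b in two passes:
#   y = forward_sub_csr(L_csr, b)                   # L y = b
#   x = backward_sub_csr(csr_matrix(L_csr.T), y)    # L' x = y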

7. Having solved for x, apply the corrections to the node positions and go back to the beginning, using these updated positions for all nodes as the new estimate of position, and set x back to zero.

8. When no further useful changes in position are occurring - that is, the solution has converged - the real work starts... we now need to assess an estimate of the error in each observation: the w-statistic needs to be computed.

This is where the inverse is needed - we compute it using the latest value of the Cholesky factor determined above - let's call it L - which is triangular.

The inverse of A is built from the inverse of L (call it Li): since A = L.L', the inverse of A is Li'.Li (the transpose of Li times Li itself).

Here's where the inverse of a triangular matrix comes in, as L is triangular - but I simply don't have the time for a naive solution. I need the fastest available, because my L has over 1500 rows and may be up to 3000.

So, store L in CSR format and use the fact that ONE VECTOR (one column) of the inverse can be computed by FORWARD SUBSTITUTION as follows:

Let L.Li = I, where I is the identity matrix, taken one column at a time...

(Any matrix times its own inverse is the identity matrix...)

For one column of Li, use forward substitution to solve L.x = [1,0,0,0,0...0]' - store the resulting vector x as the 1st column of Li...

Then use the next vector, [0,1,0,0,0,0...0], and the next, [0,0,1,0,0,0,0...0], and so on, building up Li column by column.

The use of CSR format in this is vital to keep the computation time to a minimum.

This works because forward substitution, when using CSR format on a sparse matrix, is very fast.
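In code, the column-by-column idea looks like the sketch below. It leans on scipy's spsolve_triangular for the forward substitution, purely as a stand-in for the hand-rolled CSR routine described above:

Code:
import numpy as np
from scipy.sparse.linalg import spsolve_triangular

def triangular_inverse(L_csr):
    """Invert a lower triangular matrix (CSR format) one column at a time:
    solve L x = e_j for each column e_j of the identity and stack the results."""
    n = L_csr.shape[0]
    cols = []
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        cols.append(spsolve_triangular(L_csr, e, lower=True))  # forward substitution
    return np.column_stack(cols)   # Li = L^-1, itself lower triangular

One further saving a hand-rolled version can exploit: column j of Li is zero above row j, so the forward substitution for that column only needs to start at row j.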

Et voila, we have the inverse of a triangular matrix using the minimum possible flops (or at least I believe it to be the minimum possible... ANYONE who knows a faster way, PLEASE, PLEASE, PLEASE let me know!)

The next problem, for me, is to compute Li'.Li - i.e. a triangular matrix's transpose times the matrix itself - in the minimum possible time, i.e. the minimum number of flops.

This seemingly trivial final operation actually takes more time than the whole of the solution above when the matrix is large (which it always is).
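A sketch of that final product, again using scipy's sparse matrices as a stand-in for the hand-rolled CSR routines. The note about the diagonal is my own aside, for the case where only the diagonal of A^-1 is actually needed:

Code:
import numpy as np
from scipy.sparse import csr_matrix

def inverse_from_factor_inverse(Li):
    """Given Li = L^-1 (lower triangular, sparse) with A = L.L',
    return A^-1 = Li'.Li together with its diagonal."""
    Li = csr_matrix(Li)
    A_inv = (Li.T @ Li).tocsr()
    # (A^-1)_jj is just the squared norm of column j of Li, so the diagonal
    # alone can be had without forming the full product:
    diag = np.asarray(Li.multiply(Li).sum(axis=0)).ravel()
    return A_inv, diag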

One final thing, for those interested - if you are wondering why I didn't use Conjugate Gradients for the least squares solution... it's because although that IS faster than Cholesky decomposition, it is numerically unstable when computing the inverse as a sum of its components - rounding error really kills it. Otherwise it would be beautiful, and quick. If anyone knows a way of stabilising CG, please let me know!

Well, that's it - I genuinely hope to have been of use to someone out there... it's taken me quite a while to hammer out the method above. Unlike many of you, I'm not a student or a professor - just a guy trying to do a job in the real world. So if it is useful, or if you see something I missed or a way of improving it, please DO let me know.

Cheers,

Mike Li, from New Zealand.
finz@CyberXpress.co.nz

 
  • #6
Sorry Slider142 - I got carried away and forgot to say why cofactors don't work quite as simply as you suggest here...

When you form a cofactor by eliminating a row/column, some of the resulting matrices are no longer triangular - which means the determinant of a full matrix, not a simple triangular one, has to be found. For the others you're right: they're either 0 or the product of the diagonals.

That makes the solution a whole lot more complex... consequently it's quicker and easier to use forward substitution to find the inverse one column at a time from L.Li = I, where L is a triangular matrix, Li is one column of its inverse and I is the corresponding column of the identity matrix.

Cheers,

Mike Li.
 

What is an upper triangular matrix?

An upper triangular matrix is a special type of square matrix where all the elements below the main diagonal (from top left to bottom right) are zero.

What is the inverse of an upper triangular matrix?

The inverse of an upper triangular matrix is another upper triangular matrix that, when multiplied with the original matrix, results in an identity matrix. In other words, the inverse of an upper triangular matrix "undoes" the original matrix.

How do you find the inverse of an upper triangular matrix?

The inverse of an upper triangular matrix can be found using the Gauss-Jordan elimination method. This involves augmenting the matrix with an identity matrix of the same size and performing elementary row operations until the original matrix becomes the identity; the same operations turn the appended identity matrix into the inverse.
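A minimal sketch of that procedure (plain numpy, no pivoting - which is acceptable for a nonsingular upper triangular matrix, since its diagonal entries act as the pivots; a general matrix would need partial pivoting):

Code:
import numpy as np

def gauss_jordan_inverse(M):
    """Invert a matrix by row-reducing the augmented block [M | I]."""
    n = M.shape[0]
    aug = np.hstack([M.astype(float), np.eye(n)])
    for i in range(n):
        aug[i] /= aug[i, i]                   # scale the pivot row
        for r in range(n):
            if r != i:
                aug[r] -= aug[r, i] * aug[i]  # clear column i in every other row
    return aug[:, n:]                         # the right block is now the inverse

U = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(gauss_jordan_inverse(U))   # [[0.5, -0.1667], [0., 0.3333]] - again upper triangular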

Can an upper triangular matrix always be inverted?

Yes, an upper triangular matrix can always be inverted as long as it is a nonsingular matrix, meaning it has a nonzero determinant. If the determinant is zero, the matrix is singular and does not have an inverse.

Why is the inverse of an upper triangular matrix useful?

The inverse of an upper triangular matrix is useful because it can be used to solve systems of linear equations and perform other calculations. It is also helpful in finding the eigenvalues and eigenvectors of a matrix, which have various applications in fields such as physics and engineering.
