Multiplication bloats after factorization

SUMMARY

The discussion centers on the challenges faced when multiplying a factorized positive definite matrix A, represented as A = P*Q, with an arbitrary matrix B. Despite achieving a relative error of ε < 1e-16 in the factorization, the multiplication results in a significant relative error r > 0.1. The confusion arises from the difference between forward and backward error analysis, with backward error analysis being highlighted as a more effective approach for understanding the discrepancies in results.

PREREQUISITES
  • Understanding of positive definite matrices
  • Familiarity with matrix factorization techniques
  • Knowledge of error analysis in numerical methods
  • Proficiency in matrix norms and their properties
NEXT STEPS
  • Study backward error analysis in numerical linear algebra
  • Explore the implications of condition numbers on matrix multiplication
  • Investigate different matrix factorization methods beyond triangular forms
  • Learn about the properties and applications of positive definite matrices
USEFUL FOR

Mathematicians, data scientists, and engineers involved in numerical analysis, particularly those working with matrix computations and error analysis in algorithms.

yiorgos
Let a positive definite matrix A be factorized into P and Q, i.e. A = PQ, and let B be an arbitrary matrix.
I am calculating the relative error of the factorization through the norm:

$$\epsilon = \left\| \textbf{A}-\textbf{PQ} \right\| / \left\| \textbf{A} \right\|$$

which gives

$$\epsilon < 1\text{e}{-16}$$

so I assume factorization is correct.

But things go messy when I try to multiply the factorized form of A with B.
In particular, the relative error, r, of the product

$$r = \left\| \textbf{AB}-\textbf{PQB} \right\| / \left\| \textbf{AB} \right\|$$

now bloats, i.e. I get
$$r > 0.1.$$

Note that B is arbitrary; in particular, I have tried several different types: random, structured, the all-ones matrix, even the identity matrix.
I'm confused: how can the factorization be correct while the multiplication bloats?
Does it have anything to do with the condition number?

(Unfortunately I can't disclose the type of factorization but I can tell that P and Q are not triangular)
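A minimal NumPy sketch of how this can happen (a hypothetical stand-in, since the actual factorization is undisclosed): even when the factorization residual E = A − PQ satisfies ##\|E\| \le \epsilon \|A\|## with ##\epsilon## at machine-precision level, the product error obeys only ##\|AB - PQB\| = \|EB\| \le \epsilon \|A\| \|B\|##, so ##r \le \epsilon \,\|A\|\|B\| / \|AB\|##. The amplification factor ##\|A\|\|B\|/\|AB\|## blows up when B lies along A's small-eigenvalue directions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Hypothetical stand-in for A: positive definite with a wide eigenvalue spread.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.logspace(0, -12, n)
A = Q @ np.diag(d) @ Q.T

# Simulate a factorized product PQ that agrees with A to machine precision:
E = rng.standard_normal((n, n))
E *= 1e-16 * np.linalg.norm(A) / np.linalg.norm(E)   # ||E|| = 1e-16 * ||A||
PQ = A - E                                           # stands in for P @ Q

eps = np.linalg.norm(A - PQ) / np.linalg.norm(A)

# B aligned with A's smallest-eigenvalue direction makes ||AB|| tiny,
# so the same absolute error E @ B is large *relative* to A @ B.
q_min = Q[:, -1:]
B = q_min @ q_min.T
r = np.linalg.norm(A @ B - PQ @ B) / np.linalg.norm(A @ B)
amp = np.linalg.norm(A) * np.linalg.norm(B) / np.linalg.norm(A @ B)

print(f"factorization relative error eps  = {eps:.1e}")
print(f"product relative error r          = {r:.1e}")
print(f"amplification ||A||·||B||/||AB||  = {amp:.1e}")
```

Note that for B = I the two errors coincide, which is consistent with the correction in the next post.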
 
Correction: I mistakenly included the identity matrix in the previous post. Please ignore it in the list of matrices I have tried.
 
It's hard to know exactly what you are doing since you won't tell us all the facts, but I think the basic issue is the difference between "forward" and "backward" error analysis.

If you are trying to solve ##Ax = b## numerically, you can estimate the error two different ways.

Forward error analysis: try to estimate the error in ##x##, i.e. assume the exact solution is ##x + e##, so that ##A(x + e) = b##, and find an estimate for the vector ##e##.
Backward error analysis: consider you have the exact solution to the "wrong" equation, i.e. estimate the size of a matrix ##e## such that ##(A+e)x = b##.

Backward error analysis (first proposed by Wilkinson in the 1960s) is generally more useful than the more "obvious" forward analysis.
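A quick NumPy illustration of the distinction, using a generic ill-conditioned system (a Hilbert matrix, not the poster's undisclosed setup): the computed solution can be far from the true one (large forward error) while still solving a nearby system almost exactly (tiny backward error).

```python
import numpy as np

# Ill-conditioned Hilbert matrix: cond(A) is near 1/machine-epsilon at n = 12.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true
x_hat = np.linalg.solve(A, b)

# Forward error: how far is the computed x from the true x?
forward = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

# Backward error (residual-based): how much must A or b be perturbed
# so that x_hat is the exact solution of the perturbed system?
backward = np.linalg.norm(A @ x_hat - b) / (np.linalg.norm(A) * np.linalg.norm(x_hat))

print(f"forward error:  {forward:.1e}")   # large, inflated by cond(A)
print(f"backward error: {backward:.1e}")  # near machine precision
```

The gap between the two is governed by the condition number: forward error ≲ cond(A) × backward error, which is why backward analysis gives the cleaner statement about what the algorithm itself did.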
 
