Multiplication bloats after factorization

yiorgos
Let a positive definite matrix A be factorized into P and Q, so that A = PQ, and let B be an arbitrary matrix.
I am calculating the relative error of the factorization through the norm:

$$\epsilon = \left\| \textbf{A}-\textbf{PQ} \right\| / \left\| \textbf{A} \right\|$$

which gives

$$\epsilon < 10^{-16}$$

so I assume the factorization is correct.

But things get messy when I try to multiply the factorized form of A by B.
In particular, the relative error, r, of the product

$$r = \left\| \textbf{AB}-\textbf{PQB} \right\| / \left\| \textbf{AB} \right\|$$

now bloats, i.e. I get ##r > 0.1##.

Note that B is arbitrary; in particular, I have tried several different types: random, structured, the all-ones matrix, even the identity matrix.
I'm confused. How come the factorization is correct yet the multiplication bloats?
Does it have anything to do with the condition number?

(Unfortunately I can't disclose the type of factorization, but I can say that P and Q are not triangular.)
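Since the actual factorization isn't disclosed, here is a small hand-built sketch (my own example, not the poster's code) of how this can happen: the residual E = A - PQ is simulated directly, A is ill-conditioned, and B is chosen close to A's inverse, so a residual of size ##10^{-16}\|A\|## still produces a product error of order one.

```python
import numpy as np

# Hypothetical illustration: simulate a factorization whose residual
# E = A - PQ is tiny relative to ||A||, yet the product error blows up
# because kappa(A) ~ 1e16 and B points along A's worst direction.
A = np.diag([1.0, 1e-16])           # positive definite, kappa(A) ~ 1e16
E = 1e-16 * np.array([[0.0, 1.0],   # simulated factorization residual
                      [0.0, 0.0]])
PQ = A - E                          # stands in for the computed product P@Q
B = np.diag([1.0, 1e16])            # B = A^{-1}: the worst-case choice

eps = np.linalg.norm(A - PQ) / np.linalg.norm(A)              # ~1e-16
r = np.linalg.norm(A @ B - PQ @ B) / np.linalg.norm(A @ B)    # ~0.7

print(f"factorization rel. error: {eps:.1e}")
print(f"product rel. error:       {r:.2f}")
```

The point is the bound ##r \le \epsilon \,\|A\|\|B\| / \|AB\|##: when ##B \approx A^{-1}##, the factor ##\|A\|\|B\|/\|AB\|## is roughly the condition number of A, so ##\epsilon \sim 10^{-16}## times ##\kappa(A) \sim 10^{16}## gives ##r \sim 1##.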
 
Modification: I mistakenly included the identity matrix in the previous post. Please ignore it from the list of matrices I have tried.
 
It's hard to know exactly what you are doing since you won't tell us all the facts, but I think the basic issue is the difference between "forward" and "backward" error analysis.

If you are trying to solve ##Ax = b## numerically, you can estimate the error two different ways.

Forward error analysis: try to estimate the error in ##x##, i.e. assume the exact solution is ##A(x + e) = b## and find an estimate for the vector ##e##.
Backward error analysis: consider that you have the exact solution of the "wrong" equation, i.e. estimate the size of a matrix ##e## such that ##(A+e)x = b##.

Backward error analysis (first proposed by Wilkinson in the 1960s) is generally more useful than the more "obvious" forward analysis.
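The contrast above can be seen numerically in a short sketch (my own example, using a Hilbert matrix as a stock ill-conditioned test case): the computed solution of ##Ax = b## can have a huge forward error while its backward error stays at machine-epsilon level.

```python
import numpy as np

# Solve Ax = b with an ill-conditioned A and compare:
#   forward error  = how far x is from the known true solution,
#   backward error = how well x satisfies the system it was given.
n = 12
i = np.arange(n)
A = 1.0 / (1.0 + i[:, None] + i[None, :])   # Hilbert matrix, cond ~ 1e16
x_true = np.ones(n)
b = A @ x_true

x = np.linalg.solve(A, b)                   # LU with partial pivoting

forward = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
backward = np.linalg.norm(b - A @ x) / (np.linalg.norm(A) * np.linalg.norm(x))

print(f"forward error:  {forward:.1e}")     # large: many digits lost
print(f"backward error: {backward:.1e}")    # tiny: near machine epsilon
```

This is exactly Wilkinson's point: the solver is backward stable (it solved a nearby problem essentially exactly), and the large forward error is the fault of the problem's conditioning, not of the algorithm.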
 