Badly Scaled Matrix? Explained with Examples

  • Context: Undergrad
  • Thread starter: sunny110
  • Tags: Matrix

Discussion Overview

The discussion revolves around the concepts of "badly scaled" and "nearly singular" matrices, exploring their definitions, implications, and examples. Participants seek clarification on these terms and their relationship to numerical issues in matrix computations, including the distinction between badly scaled and ill-conditioned matrices.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants define "nearly singular" as a matrix with a determinant close to zero, leading to potential numerical instability and round-off errors.
  • Others explain that a "badly scaled" matrix has elements of vastly different magnitudes, which can cause loss of precision in computations.
  • A participant presents an example where a small determinant does not necessarily indicate a problematic matrix, using identity matrices scaled by small factors.
  • Discussion includes the idea that the mapping of a matrix on the unit sphere can indicate its near-singularity, with eccentricity of the resulting ellipsoid being a measure of this property.
  • Some participants express confusion about specific terms and request further clarification or references for deeper understanding.
  • There is a discussion about the relationship between badly scaled matrices and ill-conditioned matrices, with some suggesting that bad scaling can lead to ill conditioning, while others argue they are not equivalent.
  • Examples are provided to illustrate how different scaling choices can affect the conditioning of a matrix.
  • Participants question the numerical problems that may arise from these issues, seeking specific examples.

Areas of Agreement / Disagreement

Participants generally agree on the definitions of badly scaled and nearly singular matrices but disagree on the equivalence of badly scaled and ill-conditioned matrices. The discussion remains unresolved regarding the precise relationship between these concepts.

Contextual Notes

Some participants note that the definitions and implications of badly scaled and ill-conditioned matrices may depend on specific contexts, such as the choice of units or the numerical values involved.

sunny110
badly scaled Matrix?!

Hello,

The Scilab help states that if a matrix is badly scaled or nearly singular, a warning message will be displayed:

"matrix is close to singular or badly scaled." (http://help.scilab.org/docs/5.3.3/en_US/inv.html)


What do the terms "well scaled", "badly scaled", and "nearly singular" mean?

Can anyone please explain them with examples?

Thanks in advance.
 


"Nearly singular" means that the determinant is very close to 0. Just as trying to "divide" by a matrix whose determinant is 0 would be equivalent to "dividing by 0", giving "infinite" answers, trying to "divide" by a nearly singular matrix will give extremely large answers, perhaps larger than the software can handle. Even when it can handle them, the round-off errors can "swamp" other values and give incorrect results.

A matrix, or other problem, is "badly scaled" when some numbers in the problem are so much larger than the others that they cannot be stored in memory to the same accuracy, causing some information to be lost.
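This loss of information is easy to demonstrate (a small Python sketch, not from the thread; it relies only on how IEEE-754 double precision works):

```python
# IEEE-754 double precision carries roughly 16 significant decimal
# digits.  When two numbers differ by more than that, adding the
# smaller one changes nothing: its contribution is simply lost.
big = 1e16
small = 1.0
total = big + small
print(total == big)   # True: `small` has vanished entirely
```

The same effect inside a matrix computation is what makes badly scaled problems lose accuracy: the small entries stop participating in the arithmetic.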
 


Thank you, professor. You explained it very well. I have a small question:

What do you mean by "other problem"?
 


Matrices with small determinants are not always problematic. For example, if ##M_1 = I## (the identity matrix) and ##M_2 = 0.001 I## (the identity matrix times a small scalar), then the determinant of ##M_2## will be tiny, but the matrix is not hard to handle.

A better measure of near-singularity is to consider how the matrix maps the unit sphere. The image of the unit sphere will be an ellipsoid, and the more "eccentric" this ellipsoid (closer to flat in one or more dimensions), the closer the matrix is to being singular. This can actually be quantified, for example, by the singular value decomposition: the ratio of the largest singular value to the smallest gives a measure of this eccentricity. Higher = closer to singular.
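This ratio can be computed directly (a NumPy sketch; the matrix here is my own illustrative choice):

```python
import numpy as np

# Two nearly parallel columns: the matrix squashes the unit circle
# into a very thin ellipse, i.e. it is nearly singular.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-8]])

s = np.linalg.svd(A, compute_uv=False)  # singular values, largest first
eccentricity = s[0] / s[-1]             # the ratio described above
print(eccentricity)                     # a huge number
print(np.linalg.cond(A))                # NumPy's 2-norm condition number
                                        # is exactly this same ratio
```

`np.linalg.cond` with its default norm returns precisely this largest-to-smallest singular value ratio, so it is the standard one-call way to measure near-singularity.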
 


Hi jbunniii. Thanks for the reply. I don't understand the second paragraph. Could you explain it further, or recommend a book on this subject?
What does the phrase "the matrix maps the unit sphere" mean, please?
 


"Other problems" would be things like differential equations where the coefficients are wildly different.

jbunniii, that depends upon what you mean by "small". If you have the identity matrix times a number small enough that your computer does not have sufficient floating-point range to represent its reciprocal, you are going to have difficulty with it.
 


HallsofIvy said:
jbunniii, that depends upon what you mean by "small". If you have the identity matrix times a number small enough that your computer does not have sufficient floating-point range to represent its reciprocal, you are going to have difficulty with it.
Yes, that's true, but the determinant is not a great measure of this.

I don't think any computer would have a problem inverting ##0.1 I_n## (one tenth times the ##n \times n## identity matrix), and the problem does not become more numerically difficult as ##n## increases. On the other hand, ##\det(0.1 I_n) = 0.1^n##, so the determinant becomes arbitrarily small as you increase ##n##.
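This point can be checked numerically (a NumPy sketch, added here for illustration):

```python
import numpy as np

n = 100
A = 0.1 * np.eye(n)            # one tenth times the identity

det = np.linalg.det(A)         # 0.1**n = 1e-100: astronomically small
cond = np.linalg.cond(A)       # yet the condition number is 1, the
                               # best possible value: inversion is harmless
print(det, cond)
```

The determinant shrinks geometrically with ##n## while the condition number stays pinned at 1, which is why the determinant alone is a poor indicator of numerical trouble.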
 


sunny110 said:
Hi jbunniii. Thanks for the reply. I don't understand the second paragraph. Could you explain it further, or recommend a book on this subject?
What does the phrase "the matrix maps the unit sphere" mean, please?
Yes, think of an ##m \times n## matrix as a linear mapping from ##\mathbb{R}^n## to ##\mathbb{R}^m##. This mapping is fully characterized by what it does to the unit sphere ##S = \{x \in \mathbb{R}^n : \|x\| = 1\}##. The image of this sphere under any linear mapping is an ellipsoid. This ellipsoid may be "flat" in some dimensions if the matrix does not have full rank. And it may be "almost flat" in some dimensions if the matrix is numerically close to not having full rank.

The singular value decomposition breaks the matrix down into three components: an orthogonal rotation, followed by a stretch or shrink factor on each of the canonical axes, followed by another orthogonal rotation. The middle component is a diagonal matrix, consisting of the stretch/shrink factors (called the singular values), which can be used to identify how close the matrix comes to flattening one or more dimensions. The ratio of the largest to smallest singular value is a good way to quantify this.
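Both ideas in this post can be verified in a few lines (a NumPy sketch; the random test matrix is my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))     # any 2x2 matrix serves here

# SVD: rotation, axis-aligned stretch, rotation.
U, s, Vt = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(s) @ Vt))   # True: A is recovered

# Push many unit vectors through A: every image length lies between
# the smallest and largest singular value, because the image of the
# unit circle is an ellipse with semi-axes s[0] and s[-1].
theta = np.linspace(0.0, 2.0 * np.pi, 1000)
circle = np.vstack([np.cos(theta), np.sin(theta)])  # unit circle points
lengths = np.linalg.norm(A @ circle, axis=0)
print(s[-1], lengths.min(), lengths.max(), s[0])
```

If the smallest singular value were nearly zero, the ellipse would be nearly flat, which is exactly the "almost flat in some dimensions" picture above.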

You can look this up for more details on Wikipedia. Also, many books on numerical linear algebra cover this, for example the first few sections of Trefethen and Bau's Numerical Linear Algebra.
 
  • #10


Thank you very much, HallsofIvy and JBunniii.
But what is the difference between a "badly scaled matrix" and an "ill-conditioned matrix"?

Please see this page: books.google.com/books?id=8hrDV5EbrEsC&pg=PA55
 
  • #11


sunny110 said:
Thank you very much, HallsofIvy and JBunniii.
But what is the difference between a "badly scaled matrix" and an "ill-conditioned matrix"?

Please see this page: books.google.com/books?id=8hrDV5EbrEsC&pg=PA55

Hmm, if I'm understanding what the author is saying, I think he means that if you choose units (scaling) unwisely, you may end up with an ill-conditioned matrix.

For example, if I had a matrix where the first row contained distances and the second row contained times, the matrix might be numerically difficult to handle if I make an unwise choice of units. Perhaps the numbers are fine if I use kilometers for distance and hours for time, but if I instead chose nanometers for distance and years for time, I might end up with an ill conditioned matrix, because it would contain some huge numbers and some tiny ones.

So, whether a matrix is ill conditioned or not depends on the numerical values appearing in the matrix. And one reason a matrix might be ill conditioned is because it is badly scaled, as in the example above.
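This unit-choice effect is easy to reproduce (a NumPy sketch; the measurement values and the distance/time scenario are invented for illustration):

```python
import numpy as np

# The same (invented) physical data in two unit systems.
# Row 1: distances, row 2: times, for two observations.
km_hr = np.array([[120.0, 95.0],   # kilometres
                  [2.0,   1.5]])   # hours

# Re-express in nanometres and years: 1 km = 1e12 nm,
# 1 hour = 1/8766 years (approximately).
nm_yr = np.vstack([km_hr[0] * 1e12,
                   km_hr[1] / 8766.0])

print(np.linalg.cond(km_hr))  # a few thousand: manageable
print(np.linalg.cond(nm_yr))  # astronomically large: the same data,
                              # ruined by the choice of units
```

The underlying physical relationships have not changed at all; only the scaling has, and that alone drives the condition number through the roof.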
 
  • #12


I think that for matrices, these two terms are equivalent. Is this right?
But in this book we read, "It is also all too easy to turn a badly scaled problem into a genuinely ill-conditioned problem." So I have reached a contradiction.
 
  • #13


sunny110 said:
I think that for matrices, these two terms are equivalent. Is this right?
But in this book we read, "It is also all too easy to turn a badly scaled problem into a genuinely ill-conditioned problem." So I have reached a contradiction.

If my interpretation above is correct, then bad scaling is one possible cause of ill conditioning, but not the only possible cause. A matrix can be ill conditioned even if its units were chosen sensibly, and for that matter, not all matrices even have units associated with their data.
 
  • #14


jbunniii said:
If my interpretation above is correct, then bad scaling is one possible cause of ill conditioning, but not the only possible cause.

I agree. A matrix like ##\begin{bmatrix} 10^{100} & 0 \\ 0 & 10^{-100} \end{bmatrix}## might be called "badly scaled", but it's unlikely to cause any numerical problems. On the other hand, a matrix like ##\begin{bmatrix} 1 & 1.0000000001 \\ 1.0000000001 & 1 \end{bmatrix}## is not badly scaled, but it is ill-conditioned.
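These two matrices can be compared in code (a NumPy sketch; note that by the singular-value ratio the badly scaled diagonal matrix also has an enormous condition number, yet solving with it stays accurate because each equation involves only one unknown):

```python
import numpy as np

# Badly scaled: entries spanning 200 orders of magnitude.  Solving
# with it is still accurate in practice, because the matrix is
# diagonal: no cancellation between large and small entries occurs.
A = np.diag([1e100, 1e-100])
x = np.linalg.solve(A, np.array([3e100, 5e-100]))
print(x)                    # essentially [3, 5], no accuracy lost

# Well scaled but ill-conditioned: nearly parallel rows.
B = np.array([[1.0, 1.0 + 1e-10],
              [1.0 + 1e-10, 1.0]])
print(np.linalg.cond(B))    # roughly 2e10: about 10 of the ~16
                            # available decimal digits are at risk
```

This illustrates the post's point: a headline condition number does not tell the whole story for the diagonal case, while the innocuous-looking matrix B is the genuinely dangerous one.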
 
  • #15


AlephZero said:
[...]numerical problems[...]

A little off topic, but what numerical problems may occur? Can you name some of these problems, please?
 
