Hello,

The Scilab help states that if a matrix is badly scaled or nearly singular, a warning message will be displayed:

"matrix is close to singular or badly scaled." (http://help.scilab.org/docs/5.3.3/en_US/inv.html)

What do the terms "well scaled", "badly scaled", and "nearly singular" mean?

Can anyone please explain them to me, with examples?

HallsofIvy
Homework Helper

"Nearly singular" means that the determinant is very close to 0. Just as trying to "divide" by a matrix whose determinant is 0 would be equivalent to "dividing by 0", giving "infinite" answers, trying to "divide" by a nearly singular matrix will give extremely large answers, perhaps larger than the software can handle; even if not, it will cause round-off errors that "swamp" other values and give incorrect results.
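To make this concrete, here is a small Python sketch (not Scilab, and the function name is made up) that solves a 2x2 system by Cramer's rule. With a nearly singular matrix, changing the right-hand side by only 1e-7 swings the answer by an amount of order 1, because the tiny determinant amplifies the perturbation:

```python
def solve2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ (x, y) = (e, f) by Cramer's rule."""
    det = a * d - b * c  # near zero => "dividing by almost 0"
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Nearly singular: the second row is almost a copy of the first, so
# det = 1e-7. This right-hand side gives roughly x = y = 1.
print(solve2x2(1.0, 1.0, 1.0, 1.0000001, 2.0, 2.0000001))

# Perturb the right-hand side by only 1e-7 and the answer jumps to
# roughly (2, 0): an amplification factor of about 1e7.
print(solve2x2(1.0, 1.0, 1.0, 1.0000001, 2.0, 2.0))
```

The amplification factor here is essentially the condition number of the matrix.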

A matrix, or other problem, is "badly scaled" when some numbers in the problem are so much larger than the others that they cannot be kept in memory to the same accuracy, causing some information to be lost.

Thank you, professor. You explained it very well. I have a small question:

What do you mean by "other problem"?

jbunniii
Homework Helper
Gold Member

Matrices with small determinants are not always problematic. For example, if $M_1 = I$ (the identity matrix) and $M_2 = 0.001 I$ (the identity matrix times a small scalar), then the determinant of $M_2$ will be tiny, but the matrix is not hard to handle.

A better measure of near-singularity is to consider how the matrix maps the unit sphere. The image of the unit sphere will be an ellipsoid, and the more "eccentric" this ellipsoid is (closer to flat in one or more dimensions), the closer the matrix is to being singular. This can be quantified, for example, by the singular value decomposition: the ratio of the largest singular value to the smallest gives a measure of this eccentricity. Higher = closer to singular.
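As a toy illustration (Python rather than Scilab; the helper name is invented): for a diagonal matrix the singular values are just the absolute values of the diagonal entries, so the eccentricity ratio can be read off directly. A uniformly shrunk identity, like the $0.001 I$ above, has a perfectly "round" image, while a squashed diagonal matrix does not:

```python
def diag_condition(entries):
    """Ratio of largest to smallest singular value of diag(entries).
    For a diagonal matrix the singular values are the absolute values
    of the diagonal entries."""
    mags = [abs(e) for e in entries]
    return max(mags) / min(mags)

print(diag_condition([0.001, 0.001]))  # 1.0: tiny determinant, round image
print(diag_condition([1.0, 0.001]))    # 1000.0: image squashed, nearly singular
```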

What is the meaning of the phrase "the matrix maps the unit sphere", please?

HallsofIvy
Homework Helper

"Other problems" would be things like differential equations whose coefficients are wildly different in size.

jbunniii, that depends upon what you mean by "small". If you have the identity matrix times a number small enough that your computer would not have sufficient CPU space to contain its reciprocal, you are going to have difficulty with it.

jbunniii
Homework Helper
Gold Member

jbunniii, that depends upon what you mean by "small". If you have the identity matrix times a number small enough that your computer would not have sufficient CPU space to contain its reciprocal, you are going to have difficulty with it.
Yes, that's true, but the determinant is not a great measure of this.

I don't think any computer would have a problem inverting $0.1 I_n$ (one tenth times the $n \times n$ identity matrix), and the problem does not become more numerically difficult as $n$ increases. On the other hand, $\det(0.1 I_n) = 0.1^n$, so the determinant becomes arbitrarily small as you increase $n$.
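A quick sketch of that point in Python (function names invented): the determinant of $0.1 I_n$ collapses geometrically as $n$ grows, while inverting the matrix stays trivial, because each diagonal entry of the inverse is simply $1/0.1 = 10$:

```python
def det_scaled_identity(scale, n):
    """det(scale * I_n) = scale ** n for the n x n identity I_n."""
    return scale ** n

def inverse_diagonal_entry(scale):
    """Every diagonal entry of (scale * I_n)^-1 is simply 1/scale."""
    return 1.0 / scale

for n in (2, 10, 100):
    print(n, det_scaled_identity(0.1, n), inverse_diagonal_entry(0.1))
# The determinant falls from 1e-2 toward 1e-100, but the inverse's
# entries are always 10.0: no growing numerical difficulty.
```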

jbunniii
Homework Helper
Gold Member

What is the meaning of the phrase "the matrix maps the unit sphere", please?
Yes, think of an $m \times n$ matrix as a linear mapping from $\mathbb{R}^n$ to $\mathbb{R}^m$. This mapping is fully characterized by what it does to the unit sphere $S = \{x \in \mathbb{R}^n : ||x|| = 1\}$. The image of this sphere under any linear mapping is an ellipsoid. This ellipsoid may be "flat" in some dimensions if the matrix does not have full rank. And it may be "almost flat" in some dimensions if the matrix is numerically close to not having full rank.

The singular value decomposition breaks the matrix down into three components: an orthogonal rotation, followed by a stretch or shrink factor on each of the canonical axes, followed by another orthogonal rotation. The middle component is a diagonal matrix consisting of the stretch/shrink factors (called the singular values), which can be used to identify how close the matrix comes to flattening one or more dimensions. The ratio of the largest to smallest singular value is a good way to quantify this.
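If it helps, here is a minimal Python sketch (helper name made up) that computes the two singular values of a 2x2 matrix by hand, as the square roots of the eigenvalues of $A^T A$, which is one standard way to get them:

```python
import math

def singular_values_2x2(a, b, c, d):
    """Singular values of [[a, b], [c, d]], computed as the square
    roots of the eigenvalues of A^T A (a symmetric 2x2 matrix)."""
    # A^T A = [[p, q], [q, r]]
    p = a * a + c * c
    q = a * b + c * d
    r = b * b + d * d
    mean = (p + r) / 2.0
    disc = math.sqrt(((p - r) / 2.0) ** 2 + q * q)
    return math.sqrt(mean + disc), math.sqrt(max(mean - disc, 0.0))

# diag(3, 0.5) maps the unit circle to an ellipse with semi-axes 3 and
# 0.5; the singular values are exactly those axis lengths.
smax, smin = singular_values_2x2(3.0, 0.0, 0.0, 0.5)
print(smax, smin, smax / smin)  # 3.0 0.5 6.0

# A nearly singular matrix flattens the circle: the ratio blows up
# (here to roughly 4e4), so the image is a long thin ellipse.
smax, smin = singular_values_2x2(1.0, 1.0, 1.0, 1.0001)
print(smax / smin)
```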

You can look this up on Wikipedia for more details. Also, many books on numerical linear algebra cover this, for example the first few sections of Trefethen and Bau's Numerical Linear Algebra.


Thank you very much, HallsofIvy and jbunniii.
But what is the difference between a "badly scaled" matrix and an "ill-conditioned" matrix?

jbunniii
Homework Helper
Gold Member

Thank you very much, HallsofIvy and jbunniii.
But what is the difference between a "badly scaled" matrix and an "ill-conditioned" matrix?

Hmm, if I'm understanding what the author is saying, I think he means that if you choose units (scaling) unwisely, you may end up with an ill-conditioned matrix.

For example, if I had a matrix where the first row contained distances and the second row contained times, the matrix might be numerically difficult to handle if I made an unwise choice of units. Perhaps the numbers are fine if I use kilometers for distance and hours for time, but if I instead chose nanometers for distance and years for time, I might end up with an ill-conditioned matrix, because it would contain some huge numbers and some tiny ones.

So, whether a matrix is ill-conditioned or not depends on the numerical values appearing in it. And one reason a matrix might be ill-conditioned is that it is badly scaled, as in the example above.
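A toy numerical sketch of the units example (Python; the conversion factors are approximate and the names are invented). Measuring how far apart in magnitude the entries sit shows why the nanometers/years choice is dangerous in double precision, which carries only about 16 significant digits:

```python
# The same physical data in two unit systems.
km, hours = 1.0, 1.0
nm = km * 1e12          # 1 km = 10^12 nanometers
years = hours / 8766.0  # about 8766 hours per year

# Row 1: distances, row 2: times, for two measurements.
sensible = [[1.5 * km, 2.0 * km], [0.5 * hours, 1.0 * hours]]
unwise   = [[1.5 * nm, 2.0 * nm], [0.5 * years, 1.0 * years]]

def spread(matrix):
    """Ratio of the largest to the smallest magnitude entry."""
    mags = [abs(x) for row in matrix for x in row]
    return max(mags) / min(mags)

print(spread(sensible))  # 4.0: all entries are comparable
print(spread(unwise))    # about 3.5e16: beyond double precision's
                         # ~16 significant digits, so arithmetic mixing
                         # these entries loses the small ones entirely
```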


I think for matrices, these two terms are equivalent. Is that right?
But in this book we read, "It is also all too easy to turn a badly scaled problem into a genuinely ill-conditioned problem." I have reached a contradiction.

jbunniii
Homework Helper
Gold Member

I think for matrices, these two terms are equivalent. Is that right?
But in this book we read, "It is also all too easy to turn a badly scaled problem into a genuinely ill-conditioned problem." I have reached a contradiction.
If my interpretation above is correct, then bad scaling is one possible cause of ill-conditioning, but not the only possible cause. A matrix can be ill-conditioned even if its units were chosen sensibly, and for that matter, not all matrices even have units associated with their data.

AlephZero
Homework Helper

If my interpretation above is correct, then bad scaling is one possible cause of ill-conditioning, but not the only possible cause.
I agree. A matrix like ##\begin{bmatrix} 10^{100} & 0 \\ 0 & 10^{-100} \end{bmatrix}## might be called "badly scaled", but it's unlikely to cause any numerical problems. On the other hand, a matrix like ##\begin{bmatrix} 1 & 1.0000000001 \\ 1.0000000001 & 1 \end{bmatrix}## is not badly scaled, but it is ill-conditioned.
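For intuition, here's a rough Python sketch (helper names invented) of the two cases. The diagonal matrix's huge entry spread can be removed by simply rescaling the rows, and its inverse is computed exactly entry by entry; the second matrix's trouble is intrinsic, because computing its determinant cancels about ten significant digits:

```python
def cond_diag_2x2(a, d):
    """Condition number (singular value ratio) of diag(a, d)."""
    lo, hi = sorted([abs(a), abs(d)])
    return hi / lo

def cond_sym_2x2(a, b):
    """Condition number of the symmetric matrix [[a, b], [b, a]].
    Its eigenvalues are a + b and a - b, and for a symmetric matrix
    the singular values are the absolute values of the eigenvalues."""
    lo, hi = sorted([abs(a + b), abs(a - b)])
    return hi / lo

# "Badly scaled": huge spread, but the inverse of a diagonal matrix is
# just the reciprocals of the entries, computed exactly, and scaling
# the rows by 1e-100 and 1e100 turns it into the identity.
print(cond_diag_2x2(1e100, 1e-100))     # 1e200

# Not badly scaled, yet genuinely ill-conditioned: its determinant,
# 1 - 1.0000000001^2, is a difference of nearly equal numbers.
print(cond_sym_2x2(1.0, 1.0000000001))  # about 2e10
```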

[...]numerical problems[...]
A little off topic, but what numerical problems may occur? Can you name some of these problems, please?