Why the hate on determinants?

In summary: there is a misconception, echoed in some textbooks, that determinants are no longer useful in mathematics. This is not true: determinants have important applications in many areas, from solving linear equations and testing invertibility to measuring oriented volume, eigenvalue theory, and change of variables in multivariable calculus.
  • #1
Buffu
849
146
Why do most books on linear algebra have something like "Determinants are useless now"? I have seen this in Strang's, Friedberg's, and Axler's books.

Are determinants of no use in maths? Which tool has taken their place in algebra? And why did this happen?
 
Last edited by a moderator:
  • #2
Buffu said:
Why do most books on linear algebra have something like "Determinants are useless now"? I have seen this in Strang's, Friedberg's, and Axler's books.
Hard to tell without the context. And very likely not really a good idea to say something like this.
Are determinants of no use in maths? Which tool has taken their place in algebra? And why did this happen?
Au contraire! The opposite is true. Determinants are very important in various fields: solving linear equations, the shortest way to state and test that a matrix is regular (invertible), field theory, geometry, the theory of algebraic groups, multivariable calculus, and more if I went looking for it.
 
  • Like
Likes mathwonk, Buffu, FactChecker and 1 other person
  • #3
I have learned something new. I did not know that books were teaching the uselessness of determinants. I don't know why they would say that.
Are they saying that it is useless to learn the steps involved in calculating the determinant, or that the determinant has no use? One use of a determinant is to tell whether a set of equations is independent or not (check whether the determinant equals zero).

If the determinant is not equal to zero, then you can use Cramer's method to solve the system. This involves taking more than one determinant. Since Excel and other spreadsheets have an MDETERM() function to calculate determinants, you can reduce the process of solving a system to keying some coefficients into a spreadsheet.
With larger matrices I would not recommend trying to tackle a determinant by hand, but you should know how to go about doing it.

Here is how Cramer's method works http://www.purplemath.com/modules/cramers.htm
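As a concrete illustration (my own toy example, not taken from the linked page), here is a minimal Python sketch of Cramer's rule, using NumPy only to evaluate the determinants:

```python
import numpy as np

# Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with its i-th
# column replaced by the right-hand side b. Fine for small systems only.
def cramer(A, b):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("determinant is zero: the system is singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                      # replace column i with b
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print(cramer(A, b))                       # approximately [ 2.  3. -1.]
```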
 
  • Like
Likes mathwonk and Buffu
  • #4
fresh_42 said:
Au contraire! The opposite is true. Determinants are very important in various fields: solving linear equations, the shortest way to state and test that a matrix is regular (invertible), field theory, geometry, the theory of algebraic groups, multivariable calculus, and more if I went looking for it.
Another example use is in finding the eigenvalues of a matrix.
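For instance (a small sketch of my own, not tied to any particular textbook), the eigenvalues of a small matrix can be read off as the roots of the characteristic polynomial ##\det(A - \lambda I)##:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly(A) returns the coefficients of the characteristic polynomial of
# the matrix A; its roots are the eigenvalues (here 3 and 1).
coeffs = np.poly(A)
print(np.roots(coeffs))        # [3. 1.]
print(np.linalg.eigvals(A))    # same eigenvalues, found without the polynomial
```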

Again, the context of why Strang et al. said that would be helpful.
 
  • #6
Determinants have a good interpretation as the effect of the matrix on an oriented n-dimensional volume. Saying that determinants are useless is like saying that areas and volumes are useless. I wouldn't give it too much thought. A person would be making a great mistake to omit learning about determinants.
 
Last edited:
  • Like
Likes atyy, WWGD, Buffu and 3 others
  • #7
I advise you to ignore these comments and learn determinants as well as possible. The most fundamental invariant of any linear operator is its characteristic polynomial, and the determinant is the constant term of this polynomial. Nobody can change this fact of nature. Even those who rail against determinants know them very well, and are just (in my opinion) showing off how clever they are at thinking of an alternative approach to some aspects of the theory. Determinants measure volume; who would argue that volume is not important? Take a look at the change of variables formula for integrals in several variables. The only troublesome aspect of determinants is that they are hard to compute, and hard to define. So what? They are a basic fact of life in mathematics. They underlie the super important tool of differential forms and the "wedge product". They define the beautiful Plücker embedding of the space of lines in projective 3-space into the quadric hypersurface in P^5. There may be clever ways to get around them in certain settings, but anyone who does not know them pretty well is ignorant of something essential.

I see now that FactChecker said it better and shorter.
 
  • Like
Likes scottdave, Buffu, ConfusedMonkey and 1 other person
  • #8
Buffu said:
Why do most books on linear algebra have something like "Determinants are useless now"? I have seen this in Strang's, Friedberg's, and Axler's books.

Are determinants of no use in maths? Which tool has taken their place in algebra? And why did this happen?

I assume you mean Dr. Gilbert Strang? Where did you see this in Dr. Strang's book? I can't speak for Friedberg or Axler, but I have reviewed Dr. Strang's lectures and have also read his book, which covers determinants in depth, and nowhere does he imply that determinants are useless and/or no longer needed. Can you cite a particular lecture or a passage from his book where he says that determinants are useless? I see nothing of the kind.
 
  • Like
Likes Buffu
  • #9
scottdave said:
I have not read this book, so I was curious about it. I did some searching and came across some opinions on Quora that you may find interesting. https://www.quora.com/What-do-mathematicians-think-of-Axlers-Linear-Algebra-Done-Right
Axler wrote a paper about banning determinants, available on the net:
http://www.axler.net/DwD.pdf
It's interesting and he has some good points. But the main weakness is that he just proves theorems. He doesn't say a word about how to calculate eigenvalues without determinants. It is not at all clear how to do this, and I doubt that it is even possible to do in any feasible way.
 
  • Like
Likes scottdave, Buffu and Demystifier
  • #10
Erland said:
He doesn't say a word about how to calculate eigenvalues without determinants. It is not at all clear how to do this, and I doubt that it is even possible to do in any feasible way.

But no one uses determinants to compute eigenvalues for any matrix of size >=5. Look up the QR method.
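For reference, here is a bare-bones sketch of the unshifted QR iteration (my own example matrix; production solvers add a Hessenberg reduction and shifts, but the idea is the same, and no determinant is ever formed):

```python
import numpy as np

# Unshifted QR iteration: repeatedly factor A = QR and form RQ. For many
# matrices (e.g. symmetric with distinct eigenvalues) the iterates approach
# upper triangular form, with the eigenvalues appearing on the diagonal.
def qr_eigenvalues(A, iters=500):
    A = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.diag(A)

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
print(qr_eigenvalues(A))       # close to np.linalg.eigvals(A)
```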
 
  • Like
Likes suremarc and Demystifier
  • #11
Volume is useless now? http://mathinsight.org/relationship_determinants_area_volume

https://math.oregonstate.edu/home/p...ulusQuestStudyGuides/vcalc/change/change.html (Jacobian is a determinant)

https://en.wikipedia.org/wiki/Slater_determinant

https://arxiv.org/abs/0812.2691 (the Pfaffian is a square root of a determinant)

https://www.uni-muenster.de/Physik.TP/~munsteg/arnold.html
On teaching mathematics by V.I. Arnold
"The determinant of a matrix is an (oriented) volume of the parallelepiped whose edges are its columns. If the students are told this secret (which is carefully hidden in the purified algebraic education), then the whole theory of determinants becomes a clear chapter of the theory of poly-linear forms. If determinants are defined otherwise, then any sensible person will forever hate all the determinants, Jacobians and the implicit function theorem."
 
  • Like
Likes mathwonk, scottdave and Buffu
  • #13
The debate on determinants is not whether they should be taught, but when in the sequence of instruction they should be taught.

For example, if students are drilled in solving systems of equations by determinants, they can develop a mental resistance to learning about the concepts of linear independence, Gaussian elimination, elementary matrices, etc.

It is important to use determinants to define theoretical concepts like characteristic polynomials. But it would be a misconception to think that numerical computations involving large matrices implement the theoretical concepts by computing the value of a determinant exactly as the definition of a determinant specifies. So numerical analysis texts emphasize that the definition of a determinant is not the practical way to compute determinants of large matrices, and that using determinants is not usually the practical way to solve a system involving a large number of linear equations. I wouldn't call those practical statements a "hate" for determinants.
 
  • Like
Likes mathwonk, Cryo, lavinia and 4 others
  • #14
This is not a new thing, by the way.

In the 1960's, a prof at U Texas made an hour-long video called "Who Killed Determinants?". I couldn't find a free copy of the video on YouTube, though, so I may never know.

http://search.library.utoronto.ca/details?214517
 
  • Like
Likes Buffu
  • #15
Axler seems to imply that there is definitely a need and a time and place for determinants. However, certain proofs, ideas, and computations can be accomplished in more elegant ways without determinants than with. I tend to agree with this philosophy. However, determinants are by no means useless nor are they obsolete. Indeed they do have a well deserved place in this most fascinating thing we call Mathematics.
 
  • Like
Likes Buffu
  • #16
From a theoretical point of view, determinants are very useful, since they condense key information about a matrix into a single scalar, from which one can tell whether an inverse exists, whether a system of equations is independent, etc. The determinant not only occurs in linear algebra, but also in other fields, like analysis (calculus). An example that comes to mind is the Jacobian determinant, which is used as a scaling factor in coordinate transformations when calculating complicated integrals in more than one variable. Another determinant from analysis is the Wronskian, which is used, for example, in the theory of differential equations (to see whether a system of differential equations has a unique solution).

The determinant is also a useful tool to memorise how to compute the cross product of 2 vectors, or the curl of a vector field (as an application of the cross product).

I would not bother when an author says something like this. Just ignore it and move on.
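As a small check of the Jacobian example (my own sketch, using SymPy), here is the familiar scaling factor r for polar coordinates:

```python
from sympy import symbols, Matrix, cos, sin, simplify

r, theta = symbols('r theta', positive=True)

# The map (r, theta) -> (x, y) and the determinant of its Jacobian.
F = Matrix([r * cos(theta), r * sin(theta)])
J = F.jacobian(Matrix([r, theta]))
print(simplify(J.det()))   # r  -- the factor in dx dy = r dr dtheta
```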
 
  • Like
Likes mathwonk and Buffu
  • #17
Here is another example of where the concept of a determinant is absolutely fundamental: the context of wedge products. Briefly, there is a construction that changes an n-dimensional space into a one-dimensional space, whose elements are roughly parallelepipeds with n sides, two such being equivalent if they have the same oriented volume. Then a linear map between two n-dimensional spaces induces a linear map between the corresponding one-dimensional spaces, which must be multiplication by a scalar. That scalar is the determinant of the original transformation, and measures the change in volume induced by that transformation. By analogy one may call the induced one-dimensional space the determinant, or more precisely the nth wedge product, of the original n-dimensional space.

In geometry, the most basic construction on a smooth n-manifold is its tangent bundle, a family of n-dimensional spaces. The induced determinant bundle, the nth wedge product of the dual of the tangent bundle, i.e. the bundle of n-forms, is a family of one-dimensional spaces, called the canonical line bundle of the manifold, and is the most fundamental invariant of the manifold. The zero locus of any section of this bundle is a canonical divisor, whose class is called the canonical class of the manifold. To illustrate its importance, the degree of this class on a surface of genus g is 2g-2, so it determines the genus. So the idea of a determinant is so basic that it persists throughout geometry, and I believe the best place to begin learning about it is at the elementary level; i.e., if you know about elementary determinants you have some chance of grasping their deeper generalizations.

Here is a wikipedia article on the canonical bundle, using the term "determinant" in this way:

https://en.wikipedia.org/wiki/Canonical_bundle

here is a nice elementary book on geometry of forms by David Bachman.

https://www.amazon.com/dp/0817683038/?tag=pfamazon01-20
 
Last edited:
  • Like
Likes Skins and Buffu
  • #18
As an example of the usefulness of the canonical bundle in making calculations, one can compute the genus of a projective algebraic plane curve, say over the complex numbers. Then the genus is actually the topological genus of the real surface underlying the complex curve. In general the (degree of the) canonical class on a curve of genus g equals 2g-2, i.e. the negative of the euler characteristic.

By computing an actual 2 form one sees that the canonical class of the plane itself is O(-3), i.e. a standard 2 form has a triple pole along a line, and no zeroes. Then there is a wonderful formula for how canonical divisors restrict, the adjunction formula. This says that for a curve of degree d in the plane, the canonical class on the curve equals O(d-3).

Hence for a line we get -2, agreeing with the fact that the 1-form dz has no zeroes in the affine plane and one double pole at infinity, under the substitution z = 1/w. For a plane conic we get O(-1), but this means the divisor is cut out on the conic by intersecting with a curve of degree 1, i.e. a line. Since a line meets a conic twice, we again get degree -2 on the conic itself, which agrees with the fact that a conic is isomorphic via projection to a line, and both are homeomorphic to the sphere, with genus zero. On a plane cubic we get O(0), or the trivial class, agreeing with the fact that a smooth cubic is homeomorphic to a torus of genus one, with Euler characteristic zero. As an exercise, check that a (smooth) plane curve of degree 4 has genus 3.
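(A quick check of that exercise, using the same formulas as above: for a smooth plane quartic, d = 4, so the canonical class is O(d-3) = O(1), cut out by lines; a line meets a degree-4 curve in 4 points, so 2g-2 = 4 and g = 3.)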

We can also compute the genus of a curve cut by two surfaces in P^3, complex projective 3 space, using the higher adjunction formula, that the curve cut by surfaces of degree d and e, has canonical class of degree equal to: de(d+e-4). Thus two quadratic surfaces cut a curve of degree 4 and class zero, hence again a torus. Indeed projecting such a curve to the plane from one point of itself lowers the degree by one and gives an isomorphism with a plane cubic.

Intersecting two surfaces of degrees 2 and 3 gives (exercise) a curve of genus 4.

If we pass two cubic surfaces through a quartic space curve C of genus g = 1, the full intersection has degree 9, hence consists of that quartic plus some other curve C’ of degree 5. We can even compute the genus of that residual curve C’ by the formula (obtained by subtracting two adjunction formulas):

2(g’-g) = (5-4)(3+3-4) = 2, so the other curve C’ has genus g’ = g+1 = 2, and is hence a space quintic of genus 2. Note that projecting this curve to the plane from a point of itself gives a plane quartic which should have genus 3. Since it has only genus 2, the plane projection must cross itself once, lowering the genus. This means that every point of the space quintic, that we might choose to project from, must lie on a trisecant, something not obvious to me at the moment. It also suggests that the canonical class on C' of degree 2g'-2 = 2, is swept out residually on the space quintic by the pencil of planes through such a trisecant. I.e. each such plane cuts the quintic in 3 fixed points on the trisecant, and further in two moving points. I don't see why this follows from adjunction either at the moment, but it is true for general reasons.

These calculations are all manifestations of the concept of differential n - form, i.e. the "determinant" of the space of one forms dual to tangent vectors. These computations were known to European geometers close to 150 years ago, but seem simpler today through the systematic use of the canonical bundle of n - forms. In particular I found this last "residual genus" formula mysteriously presented in a wonderful old book by Semple and Roth on classical algebraic geometry, and was able to derive it myself (just yesterday) this way.
 
Last edited:
  • Like
Likes Skins and Buffu
  • #19
They may be very useful. But high-school level textbooks in India fill this chapter with the most tedious calculations, including finding inverse matrices by hand, so the 'hatred' for it is understandable.
 
  • #20
I took the linear algebra MOOC course (LAFF - Linear Algebra, Foundations to Frontiers through edX and University of Texas) in Fall of 2018.
I believe they talked briefly about determinants. I believe one use of them is in computing eigenvalues. Another is Cramer's method for solving a linear system, and as an extension of that, to determine whether all of the equations in a linear system are independent.

Determinants can be quite costly (computation-wise), especially for large systems. So they showed other methods to achieve the desired results.
 
  • #21
Buffu said:
Why do most books on linear algebra have something like "Determinants are useless now"? I have seen this in Strang's, Friedberg's, and Axler's books.

Are determinants of no use in maths? Which tool has taken their place in algebra? And why did this happen?

I don't know what that quote means, but maybe they're talking about computer algorithms that accomplish many of the goals of matrix analysis (computing eigenvalues, inverses, etc.) without first computing the determinant?
 
  • #22
stevendaryl said:
I don't know what that quote means, but maybe they're talking about computer algorithms that accomplish many of the goals of matrix analysis (computing eigenvalues, inverses, etc.) without first computing the determinant?
Maybe it has to do with the fact that the running time grows quickly with n when computing determinants, so other techniques are used for large matrices?
 
  • #23
I like Axler's book very much. I like the emphasis on vector spaces and linear independence. Never actually reached the chapter on determinants in his book :-)

Here is how you define an eigenvalue without determinants. Given an operator (matrix) ##\mathbf{T}##, an eigenvalue ##\lambda## is a scalar such that the following kernel (null space) is nontrivial:

##\ker\left(\mathbf{T}-\lambda \mathbf{Id}\right)\neq\{\mathbf{0}\}##

where ##\mathbf{Id}## is the identity. The eigenvectors are the nonzero vectors in this kernel. The nice thing about this definition is that it is easy to extend to generalized eigenvectors.

I think Axler's point was that determinants are like black boxes: it is hard to understand why they are so useful. Also, as some other people have already commented, computing determinants for large matrices may not be a good way to solve common problems in linear algebra.
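For what it's worth, here is a rough sketch (my own illustration, not code from Axler's paper) of how eigenvalues can be computed without any determinant: the vectors ##\mathbf v, A\mathbf v, \dots, A^n\mathbf v## must be linearly dependent, and the roots of the resulting polynomial are eigenvalues of ##A##. It assumes a "generic" starting vector; starting from an eigenvector only reveals that one eigenvalue.

```python
import numpy as np

# Determinant-free eigenvalues: build v, Av, ..., A^n v, solve for the
# linear dependence among them, and take the roots of the resulting
# monic polynomial. (Sketch only; assumes the starting vector v is not
# an eigenvector or other special vector.)
def eigs_without_det(A, v):
    A = np.asarray(A, dtype=float)
    v = np.asarray(v, dtype=float)
    n = A.shape[0]
    powers = [v]
    for _ in range(n):
        powers.append(A @ powers[-1])         # v, Av, A^2 v, ..., A^n v
    K = np.column_stack(powers[:-1])          # n x n matrix of the first n vectors
    # Solve K c = -A^n v, so that A^n v + c_{n-1} A^{n-1} v + ... + c_0 v = 0.
    c, *_ = np.linalg.lstsq(K, -powers[-1], rcond=None)
    poly = np.concatenate(([1.0], c[::-1]))   # monic coefficients, highest first
    return np.roots(poly)

A = [[2.0, 1.0],
     [0.0, 3.0]]
print(eigs_without_det(A, [0.0, 1.0]))        # roots 3 and 2
```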
 
  • #24
Do people hate on addition, given that the task of adding up a million numbers is hard by hand? 2x2 determinants are not so hard, just as adding two-digit numbers is fairly easy. Some of us even think basic subtraction is hard (have you balanced your checkbook lately?), but we don't dismiss it. The point is, as a scientist, you may not ignore any aspect of nature, whether or not it seems easy to deal with.
 
  • Like
Likes Mark44, lavinia, member 587159 and 1 other person
  • #25
Erland said:
Axler wrote a paper about banning deteminants, available on the net:
http://www.axler.net/DwD.pdf
It's interesting and he has some good points. But the main weakness is that he just proves theorems. He doesn't say a word about how to calculate eigenvalues without determinants. It is not at all clear how to do this, and I doubt that it is even possible to do in any feasible way.

I read some entries by Prof. Robert Israel some years ago in a computer-algebra group. He said that in practice (at least for totally numerical examples) eigenvalues are typically not found by finding roots of the characteristic polynomial, but rather, by other methods. He went on to say that some posted methods for finding polynomial roots proceed by converting the polynomial into the characteristic equation of some matrix, then using other eigenvalue algorithms to find the roots.
 
  • Like
Likes member 587159
  • #26
Just my 2 cents.
First, about the importance of the determinant:
  1. Determinants are extremely important in mathematics. For a real matrix, the determinant is the oriented volume of the parallelepiped spanned by its columns (or rows); I do not think anybody would deny the importance of volume and orientation. Because the determinant is the volume, it appears in the change of variables formula for multiple integrals. A rigorous definition of orientation is usually not presented in standard multivariable calculus, but you will need it in differential geometry/analysis on manifolds, and there is no way to avoid determinants there.
  2. In linear algebra itself, Cramer's rule and the cofactor formula for the inverse have tremendous theoretical value. For example, if you have a polynomial matrix (i.e., a matrix whose entries are polynomials) with determinant 1, the cofactor formula for the inverse tells you that the inverse matrix is also a polynomial one.
  3. I often use the cofactor formula to invert 2x2 matrices.
Now about where determinants should not be used:
  1. Using Cramer's rule for solving a system of linear equations is not "practical" in dimensions >2 (more computations than the standard row reduction). This is especially true if one uses the cofactor expansion or the formal definition to compute the determinant: in this case your computer will probably choke somewhere between dimensions 10 and 20.
  2. Of course, one can compute the determinant using row reduction, so the computational cost of using Cramer's rule in this case will be comparable with the cost of the standard methods (but still higher).
  3. Using the characteristic polynomial ##\det (A -\lambda I)## to find eigenvalues is usually not practical in dimensions >2. While some textbooks have exercises with 3x3, 4x4, and maybe even 5x5 matrices, these are hand-picked examples where the characteristic polynomial can be easily factored.
  4. For numerical computation of eigenvalues, different sorts of iterative methods are usually used; applied in a correct situation such methods are quite effective.
Why people (including Axler) complain about using determinants in spectral theory (finding eigenvalues and eigenvectors):
  1. Very often, after taking a traditional linear algebra class, students have the idea that eigenvalues are defined as the roots of characteristic polynomials. They will then try to apply this idea to situations where the characteristic polynomial does not exist (linear transformations on an infinite-dimensional space, etc.).
  2. Determinants do not usually exist for infinite matrices (operators in infinite-dimensional spaces); they exist for some special classes of operators only. So, if one wants to extend a result to an infinite-dimensional situation, the determinants should be avoided.
  3. Sometimes it is much easier to find eigenvalues using the definition than by going the characteristic polynomial route. For example, if one has the transformation ##A\mapsto A^T## on the space of ##n\times n## matrices, then the eigenvalues and eigenvectors (eigenspaces) are pretty easy to find using the definitions and basic facts about eigenvalues/eigenvectors. Theoretically, it is possible to go the characteristic polynomial route, but you will need to use ##n^2\times n^2## matrices (a small numerical sketch of this example appears at the end of this post). Just for fun I worked out the ##n=2## case, and it is more work than using the definition for general n.
Should the determinants be used/taught in a standard linear algebra class? My answer is "yes"!
  1. Determinants are very important in other areas of mathematics (volume, orientation in differential geometry, for example)
  2. Determinants are a tool of a great theoretical value in linear algebra
  3. For finding eigenvalues, the characteristic polynomial provides a quick way to work with "toy" 2x2 problems. The method proposed by Axler requires the extra steps of finding the vectors ##A^k \mathbf v## and solving a system of linear equations. Sometimes you will get lucky, and the polynomial you get using Axler's method will be of lower degree than the characteristic polynomial. But very often you will just get a multiple of the characteristic polynomial. In particular, in the 2x2 case, using Axler's method you will get a polynomial of degree <2 if and only if you pick your vector ##\mathbf v## to be an eigenvector.
  4. In more advanced linear algebra courses, introducing characteristic polynomials provides a lot of interesting connections. For example the equality of the multiplicity of a root of the characteristic polynomial (a purely algebraic construct) and the dimension of the corresponding generalized eigenspace.
Finally, it should be taught (and seriously emphasized) that determinants/characteristic polynomials are not used (and should not be used) in numerical calculations (solving systems of linear equations, finding eigenvalues) in dimensions >2 or >3.
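To illustrate the ##A\mapsto A^T## example above (my own sketch): write the transpose map as an ##n^2\times n^2## matrix and compute its eigenvalues numerically; they come out as ##\pm 1##, with multiplicities ##n(n+1)/2## (symmetric matrices) and ##n(n-1)/2## (antisymmetric ones), which is immediate from the definition but tedious via the characteristic polynomial.

```python
import numpy as np

# The transpose map A -> A^T on n x n matrices, written as an n^2 x n^2
# matrix acting on vec(A) (row-major flattening).
def transpose_map_matrix(n):
    T = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            # the basis matrix E_{ij} (flattened index i*n + j) maps to E_{ji}
            T[j * n + i, i * n + j] = 1.0
    return T

n = 3
vals = np.linalg.eigvals(transpose_map_matrix(n))
print(np.round(np.sort(vals.real), 6))
# three -1's and six +1's: multiplicities n(n-1)/2 and n(n+1)/2
```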
 
  • Like
Likes member 587159 and FactChecker
  • #27
When I took a numerical-methods class long ago, its recommended method for solving a system of linear equations was LU decomposition. That and Gaussian elimination have runtime O(n^3) for a size-n matrix.

For calculating a determinant, an efficient method is Gaussian elimination, at O(n^3). So if one uses Cramer's rule, that's O(n^4) for a system of linear equations, and O(n^5) for a matrix inversion. So Cramer's rule isn't very good in practice. But it is good in theory, for indicating whether a matrix can be inverted.

I recently wanted to find determinants of integer matrices, and to do so without (1) using floating-point arithmetic or (2) going through permutations of columns with rows fixed, an O(n!) method. I thought of a form of Gaussian elimination with pivoting, inspired by Euclid's algorithm for GCDs: I pivoted on the smallest nonzero element in a column, then subtracted multiples of its row from all the other rows, and continued until only one element in the column was nonzero.
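Something like the following Python sketch (my own reconstruction from the description above, not the poster's actual code) captures the idea; all arithmetic stays in integers, and row swaps are tracked to keep the sign right.

```python
import numpy as np   # only used for the cross-check at the end

# Integer-only determinant via Euclid-style elimination: in each column,
# pivot on the smallest nonzero entry, subtract integer multiples of the
# pivot row from the rows below, and repeat until one entry remains.
def int_det(matrix):
    a = [list(row) for row in matrix]           # work on a copy
    n = len(a)
    sign = 1
    for col in range(n):
        while True:
            nz = [r for r in range(col, n) if a[r][col] != 0]
            if not nz:
                return 0                         # singular matrix
            p = min(nz, key=lambda r: abs(a[r][col]))
            if p != col:
                a[col], a[p] = a[p], a[col]      # row swap flips the sign
                sign = -sign
            done = True
            for r in range(col + 1, n):
                if a[r][col] != 0:
                    q = a[r][col] // a[col][col]            # integer quotient
                    a[r] = [x - q * y for x, y in zip(a[r], a[col])]
                    if a[r][col] != 0:
                        done = False             # a remainder is left; repeat
            if done:
                break
    det = sign
    for i in range(n):
        det *= a[i][i]
    return det

M = [[2, 3, 1], [4, 1, 5], [6, 2, 2]]
print(int_det(M), round(np.linalg.det(M)))      # 52 52
```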
 

Why do some people hate determinants?

Some people believe that determinants are too abstract and difficult to understand, leading to frustration and dislike.

What is the purpose of determinants in mathematics?

Determinants are used to solve systems of linear equations and to calculate the area, volume, and other properties of geometric shapes.

Why are determinants important in linear algebra?

Determinants play a crucial role in determining whether a system of linear equations has a unique solution, infinite solutions, or no solution at all.

Can determinants be negative?

Yes, determinants can be negative, as they represent a signed volume or area in geometric calculations.

Are there any real-life applications of determinants?

Yes, determinants are used in various fields such as physics, economics, and computer science to model and solve real-world problems.
