Why Was the Determinant Defined?

Tosh5457
Hi, I know how to calculate matrix determinants, but I never figured out why they're so useful in so many problems, like variable substitution in integrals or finding eigenvalues.

I don't have an intuitive idea of what a determinant is... I doubt it appeared in algebra just by coincidence, so what was the reason for defining the determinant? And why does it apply to so many different problems?

Thanks :smile:
 
Hi Tosh5457! :smile:
Tosh5457 said:
I don't have an intuitive idea of what a determinant is...

Basically, it's the factor by which the volume (or area) changes.

So that's why you need a determinant when you change variables from dx dy dz to something more exotic, and why a zero determinant means the volume collapses to zero, so your solution subspace is "flattened". :wink:
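
If it helps to see this concretely, here is a minimal numerical sketch of the "area scale factor" idea (my own illustration, not from the thread, in Python/NumPy; the matrix A is just an arbitrary example):

```python
import numpy as np

# Sketch (not from the thread): a 2x2 matrix A sends the unit square to the
# parallelogram spanned by its columns, and the area of that parallelogram
# is |det(A)| -- "the factor by which the area changes".
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

u, v = A[:, 0], A[:, 1]                 # images of the unit vectors e1, e2
area = abs(u[0] * v[1] - u[1] * v[0])   # 2D cross product = parallelogram area

print(area)                   # 5.5
print(abs(np.linalg.det(A)))  # 5.5 (up to floating-point rounding)
```

The same idea, applied locally to the Jacobian matrix of a coordinate change, is exactly the factor that shows up in integral variable substitution.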
 
Check wikipedia: http://en.wikipedia.org/wiki/Determinant

Historically, determinants were considered without reference to matrices: originally, a determinant was defined as a property of a system of linear equations. The determinant "determines" whether the system has a unique solution (which occurs precisely when the determinant is non-zero). In this sense, determinants were first used in the Chinese mathematics textbook The Nine Chapters on the Mathematical Art (九章算術), compiled around the 3rd century BC. In Europe, two-by-two determinants were considered by Cardano at the end of the 16th century and larger ones by Leibniz.

In Europe, Cramer (1750) added to the theory, treating the subject in relation to sets of equations. The recurrent law was first announced by Bézout (1764).

It was Vandermonde (1771) who first recognized determinants as independent functions. Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order. Lagrange was the first to apply determinants to questions of elimination theory; he proved many special cases of general identities.

Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word determinant (Laplace had used resultant), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.

The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of m columns and n rows, which for the special case of m = n reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy-Binet formula.) In this he used the word determinant in its present sense, summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's. With him begins the theory in its generality.

The next important figure was Jacobi (from 1827). He made early use of the functional determinant which Sylvester later called the Jacobian, and in his memoirs in Crelle for 1841 he specially treated this subject, as well as the class of alternating functions which Sylvester called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work.

The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the text-books on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.

Axler (1995) attacked the determinant's place in linear algebra. He saw it as something to be derived from the core principles of linear algebra, not something used to derive those core principles.

So apparently, determinants first appeared when solving linear equations. For example, it's not hard to check that the system

$$\left\{\begin{array}{c}ax+by=0\\ cx+dy=0\end{array}\right.$$

has exactly one solution if and only if ##ad-bc\neq 0##. More fancy stuff can be done for ##3\times 3## systems, and these values were the determinants. But the value of a determinant for an ##n\times n## system and its link with systems of equations was not found until relatively recently (that is, around 300 years ago).
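
As a quick sanity check of that claim (a sketch of my own in Python/NumPy, not part of the original post, with arbitrary example coefficients), one can compare the determinant of the coefficient matrix with its rank:

```python
import numpy as np

# Sketch: the homogeneous system above has only the trivial solution
# x = y = 0 exactly when ad - bc != 0.
a, b, c, d = 2.0, 1.0, 4.0, 3.0             # ad - bc = 2, nonzero
A = np.array([[a, b], [c, d]])
print(np.linalg.det(A), np.linalg.matrix_rank(A))   # ~2.0, rank 2 -> only x = y = 0

a, b, c, d = 2.0, 1.0, 4.0, 2.0             # ad - bc = 0
A = np.array([[a, b], [c, d]])
print(np.linalg.det(A), np.linalg.matrix_rank(A))   # 0.0, rank 1 -> a whole line of solutions
```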
 
P.S. It's better to give a link than to copy large swaths of text wholesale. (If you're worried about future edits, you can even use the "permanent link" in Wikipedia's toolbar to get a link to that specific revision of the page.) Fortunately, I think Wikipedia does permit this use of its content, as long as it is properly attributed, e.g. with the link you gave.
 
Ah, sorry :frown:
 
So it appeared in order to solve systems of linear equations... Funny how something that simple explains a lot to me, thanks!
 