Determinant is independent of row/column

The discussion focuses on the proof that the value of a determinant calculated using the Laplace expansion is independent of the row or column chosen for the expansion. A user expresses difficulty in understanding the proof provided on Wikipedia and seeks a clearer, simpler version. Another participant suggests that proving the statement oneself may be more straightforward than following existing proofs. The proof involves using induction and detailed calculations of minors and cofactors, ultimately demonstrating that the coefficients of the terms in the expansions are equal regardless of the row or column selected. The conversation highlights the technical nature of the proof while encouraging self-discovery in understanding the concept.
Bipolarity
I am curious about the proof of the fact that the value of a determinant computed using the Laplace (or cofactor) expansion is independent of the row (or column) along which the expansion is performed.

Is this a very difficult proof? My textbook omits it entirely. I was curious whether someone could provide a link to the proof, as I am interested in reading it. Wikipedia has a proof (http://en.wikipedia.org/wiki/Laplace_expansion), but it was too complicated for me to understand.

Does anyone know a simpler form of the proof, i.e. one that is longer but clearer in its statements, for a less insightful reader?

BiP
 
By the nature of the Laplace expansion, a proof is necessarily going to be ugly and technical.

HINT: it will be much easier to prove this yourself than to follow someone else's proof.

Let me use the same notation as in the Wikipedia article. So take a matrix ##B##. Let me show that expansion along the first row yields the same result as expansion along the second row; the more general statement is left to you. We prove this by induction on the size of the matrix. For the ##1\times 1## case, the statement is trivial.

So assume that ##B## is ##n\times n##. Expansion along the first row yields
##b_{1,1}C_{1,1}+\dots+b_{1,n}C_{1,n}=b_{1,1}M_{1,1}-b_{1,2}M_{1,2}+\dots+(-1)^{n+1}b_{1,n}M_{1,n}##

Expansion along the second row yields
##b_{2,1}C_{2,1}+\dots+b_{2,n}C_{2,n}=-b_{2,1}M_{2,1}+b_{2,2}M_{2,2}+\dots+(-1)^{n+2}b_{2,n}M_{2,n}##
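For what it's worth, the claim itself is easy to check numerically before wading into the proof. The sketch below (with my own hypothetical helpers `minor` and `det_along_row`, where sub-determinants are always expanded along the first row) expands a fixed ##3\times 3## matrix along each of its rows:

```python
def minor(b, i, j):
    """Matrix obtained from b by deleting row i and column j (0-based)."""
    return [[row[c] for c in range(len(row)) if c != j]
            for r, row in enumerate(b) if r != i]

def det_along_row(b, i):
    """Laplace expansion of det(b) along row i (0-based); sub-determinants
    are always expanded along their first row."""
    n = len(b)
    if n == 1:
        return b[0][0]
    return sum((-1) ** (i + j) * b[i][j] * det_along_row(minor(b, i, j), 0)
               for j in range(n))

B = [[2, 7, 1], [8, 2, 8], [1, 8, 2]]
print([det_along_row(B, i) for i in range(3)])  # prints [-114, -114, -114]
```

Column expansions can be checked the same way by first transposing the matrix.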

We wish to calculate ##M_{1,1}##. By definition, this is the determinant of the matrix that results if we remove the first row and the first column from ##B##. By the induction hypothesis, we can calculate this determinant by taking the Laplace expansion along its first row. So we can write
##M_{1,1}=b_{2,2}D_{1,2}^{1,2} - b_{2,3}D_{1,2}^{1,3}+\dots+(-1)^{2+n}b_{2,n}D_{1,2}^{1,n}##
where ##D_{a,b}^{c,d}## is the determinant of the matrix resulting from ##B## if we remove rows ##a## and ##b## and columns ##c## and ##d##.
In general:
##M_{1,k}=(-1)^{\delta(1,k)} b_{2,1}D_{1,2}^{1,k} +(-1)^{\delta(2,k)} b_{2,2}D_{1,2}^{2,k}+\dots + (-1)^{\delta(n,k)}b_{2,n}D_{1,2}^{n,k}##

We used the following notation: ##D_{1,2}^{k,k}=0## by convention, and ##\delta(l,k)## is the number of elements in ##\{1,\dots,l-1\}\setminus\{k\}##.
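As a sanity check on this general formula for ##M_{1,k}## (a sketch only; `det`, `drop`, and `delta` are hypothetical helper names of mine, with `det` computed independently via the Leibniz permutation formula), one can verify it numerically on a fixed ##4\times 4## matrix:

```python
from itertools import permutations

def det(m):
    """Determinant via the Leibniz permutation formula (independent of Laplace)."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):            # sign of perm = parity of its inversions
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for r in range(n):
            prod *= m[r][perm[r]]
        total += sign * prod
    return total

def drop(m, rows, cols):
    """Submatrix of m with the given (0-based) rows and columns removed."""
    return [[m[r][c] for c in range(len(m)) if c not in cols]
            for r in range(len(m)) if r not in rows]

def delta(l, k):
    """Number of elements of {1, ..., l-1} with k removed (1-based, as in the proof)."""
    return (l - 1) - (1 if k < l else 0)

B = [[3, 1, 4, 1], [5, 9, 2, 6], [5, 3, 5, 8], [9, 7, 9, 3]]
n = len(B)
for k in range(1, n + 1):
    M_1k = det(drop(B, {0}, {k - 1}))  # minor: delete row 1, column k
    rhs = sum((-1) ** delta(l, k) * B[1][l - 1] * det(drop(B, {0, 1}, {l - 1, k - 1}))
              for l in range(1, n + 1) if l != k)
    assert M_1k == rhs  # the displayed formula for M_{1,k} holds
```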

To calculate ##M_{2,k}##, we expand the corresponding submatrix along its first row, which consists of the entries of the first row of ##B## (with ##b_{1,k}## removed). We get
##M_{2,k}=(-1)^{\delta(1,k)} b_{1,1}D_{1,2}^{1,k} + (-1)^{\delta(2,k)}b_{1,2}D_{1,2}^{2,k}+\dots+(-1)^{\delta(n,k)}b_{1,n}D_{1,2}^{n,k}##

We substitute these expressions for ##M_{1,k}## and ##M_{2,k}## into the original sums.

By definition we know that ##D_{1,2}^{j,k}=D_{1,2}^{k,j}##, since removing columns ##j## and ##k## does not depend on the order. We wish to prove that the combined coefficient of each such term is the same in both expansions.
The coefficient of ##D_{1,2}^{j,k}## in the first sum is:
##(-1)^{k+1}b_{1,k}(-1)^{\delta(j,k)}b_{2,j}##
The coefficient of ##D_{1,2}^{k,j}## in the first sum is:
##(-1)^{j+1}b_{1,j}(-1)^{\delta(k,j)}b_{2,k}##
So together, we have
##(-1)^{k+\delta(j,k)+1}b_{1,k}b_{2,j}+ (-1)^{j+\delta(k,j)+1}b_{1,j}b_{2,k}##

We do the same for the terms in the second sum. The coefficient of ##D_{1,2}^{j,k}## in the second sum is:
##(-1)^{k+2}b_{2,k}(-1)^{\delta(j,k)}b_{1,j}##
The coefficient of ##D_{1,2}^{k,j}## in the second sum is:
##(-1)^{j+2}b_{2,j}(-1)^{\delta(k,j)}b_{1,k}##
So together we have
##(-1)^{k+\delta(j,k)+2}b_{1,j}b_{2,k} + (-1)^{j+\delta(k,j)+2}b_{1,k}b_{2,j}##

For both sums to be equal, it suffices (matching the coefficients of ##b_{1,k}b_{2,j}##, and likewise of ##b_{1,j}b_{2,k}##) to show that
##(-1)^{j+\delta(k,j)+2}=(-1)^{k+\delta(j,k)+1}##
Assume first that ##k<j##. Then ##\delta(k,j)## is the number of elements in ##\{1,\dots,k-1\}\setminus \{j\}##, and this is ##k-1##. So the left-hand side becomes
##(-1)^{j+k+1}##
Still with ##k<j##, ##\delta(j,k)## is the number of elements in ##\{1,\dots,j-1\}\setminus \{k\}##, and this is ##j-2##. So the right-hand side becomes
##(-1)^{k+j-1}##
The exponents differ by ##2##, so the left-hand side equals the right-hand side. The case ##j<k## follows by swapping the roles of ##j## and ##k##.
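The parity bookkeeping at the end is easy to get wrong, so here is a quick numerical check of this sign identity (a sketch; `delta` is my own name for ##\delta##):

```python
def delta(l, k):
    """Number of elements of {1, ..., l-1} with k removed (1-based, as above)."""
    return (l - 1) - (1 if k < l else 0)

# The identity (-1)^(j + delta(k,j) + 2) == (-1)^(k + delta(j,k) + 1),
# checked exhaustively for all pairs j != k up to 12:
assert all((-1) ** (j + delta(k, j) + 2) == (-1) ** (k + delta(j, k) + 1)
           for j in range(1, 13) for k in range(1, 13) if j != k)
```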
 