What is the connection between cofactors, determinants, and matrix inverses?

In summary: Determinants are, in some sense, a generalization of the scalar triple product to an arbitrary number of dimensions. They are linear in each argument, equal zero when the parallelotope spanned by the vectors flattens, and measure the signed volume and orientation of a collection of vectors.
  • #1
gfd43tg
Gold Member
I have been reviewing linear algebra for my FE exam, and I was thinking about cofactors. What are these strange things? It totally mystifies me that you can make a cofactor matrix from a matrix A (where does the alternating +/- come from??), transpose it to get the adjoint, find the determinant (I still don't understand what this thing is, just something I know how to calculate), then divide the adjoint by that determinant to find the inverse of the original matrix A. How in the heck do all these things relate to each other? What is the underlying meaning here other than to say that you can calculate the matrix inverse? It looks like magic to me!
 
  • #2
You can start with an invertible ##n\times n## matrix ##A=(a_{ij})_{i,j}## and try to solve the equation ##A \cdot X = 1## with another invertible matrix ##X##. This gives you ##n^2## linear equations in ##n^2## unknown entries. In the end you will have the formula you mentioned. Instead, it is probably smarter to start with small ##n## and see how it goes. Then make a proof by induction, and you will end up again with the formula.
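(A quick numeric sketch of this idea, not from the thread: ##A \cdot X = I## really is ##n^2## linear equations in the ##n^2## entries of ##X##. Using the column-major identity ##\mathrm{vec}(AX) = (I \otimes A)\,\mathrm{vec}(X)##, one generic linear solve recovers the inverse.)

```python
import numpy as np

# A small 2x2 example with det A = 1, chosen for a clean inverse.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
n = A.shape[0]

# Build the (n^2 x n^2) system (I kron A) vec(X) = vec(I),
# using column-major (Fortran-order) vectorization.
big = np.kron(np.eye(n), A)
rhs = np.eye(n).flatten(order="F")

X = np.linalg.solve(big, rhs).reshape((n, n), order="F")

assert np.allclose(A @ X, np.eye(n))  # X is indeed A^{-1}
```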
 
  • #3
One preliminary note--as every linear algebra student should know, the determinant of a matrix is equal to the signed volume of the parallelotope formed by its columns. When the columns are linearly dependent, the parallelotope "flattens" and has zero volume. (It helps to imagine it in two or three dimensions.)

Consider a set of ##n## vectors in ##n##-dimensional space. We will take these vectors ##\mathbf{v}_i## to be the columns of our (square) matrix.
Now, let ##\mathbf{v}## be arbitrary. Then define a map ##f: \mathbb{R}^n\rightarrow\mathbb{R}## via the equation ##f(\mathbf{v})=\det [\mathbf{v\ v}_2\ \ldots\ \mathbf{v}_n]##. Because ##\det## is linear in the columns of a given matrix, ##f## is a linear functional on ##\mathbb{R}^n## with kernel ##\mathrm{span}(\{\mathbf{v}_2,\ldots,\mathbf{v}_n\})##. Also, ##f(\mathbf{v}_1)## is the determinant of the original matrix.

Finally, remember that, after choosing a basis, every linear map from ##\mathbb{R}^m## to ##\mathbb{R}^n## has a matrix associated to it. In fact, the matrix associated to ##f## is a row vector whose elements are precisely the cofactors along the first column.
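(A numeric check of this claim, added for illustration with a hypothetical 3x3 matrix: the functional ##f## obtained by swapping in an arbitrary first column is exactly the row vector of first-column cofactors.)

```python
import numpy as np

M = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [3.0, 6.0, 10.0]])

def f(v):
    # f(v) = det of M with its first column replaced by v
    N = M.copy()
    N[:, 0] = v
    return np.linalg.det(N)

# Cofactors along the first column: (-1)^(i+j) * minor, with j = 0 here,
# so the sign is (-1)^i for 0-based row index i.
cof = np.array([(-1) ** i * np.linalg.det(np.delete(np.delete(M, i, 0), 0, 1))
                for i in range(3)])

v = np.array([1.0, -2.0, 0.5])
assert np.isclose(f(v), cof @ v)          # f acts as the cofactor row
assert np.isclose(f(M[:, 0]), np.linalg.det(M))  # f(v_1) = det M
```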

This interpretation is rather abstract, and fits within the context of multilinear algebra. The early pioneers of linear algebra probably came up with cofactors as mere tools for computing determinants--but in any case, it turns out they have a very elegant interpretation in modern mathematics.
 
  • #4
suremarc, I have no idea what the heck you wrote. That is way over my head!
 
  • #5
Maylis said:
suremarc, I have no idea what the heck you wrote. That is way over my head!
Err, oops, I must have gotten carried away. Which part is confusing you?
 
  • #6
suremarc said:
Err, oops, I must have gotten carried away. Which part is confusing you?
Basically none of it. But I do find it interesting about the part you said about the determinant being equal to the volume of a parallelepiped. Maybe you can expand on that (please no crazy abstract math if possible). Just forget everything from the second paragraph on.

Also, it's weird that the volume of a parallelepiped has ANYTHING to do with an inverse of a matrix.
 
  • #7
Maylis said:
Basically none of it. But I do find it interesting about the part you said about the determinant being equal to the volume of a parallelepiped. Maybe you can expand on that (please no crazy abstract math if possible). Just forget everything from the second paragraph on.
Hmm. I had figured that most of the things I mentioned are taught in linear algebra. Maybe the curriculum for engineering majors is different.

Consider a parallelepiped formed by 3 vectors, like this one:
[Figure: a parallelepiped spanned by the vectors ##\mathbf{a}##, ##\mathbf{b}##, and ##\mathbf{c}##]

Its volume is, up to a change of sign, equal to ##\mathbf{a\cdot(b\times c)}##, also known as the scalar triple product. Scaling any of ##\mathbf{a,b,}## or ##\mathbf{c}## scales the volume by the same factor.
The formula ##V=\mathbf{a\cdot(b\times c)}## mostly works, but sometimes we get negative values. This happens because our formula also depends on the orientation of ##\mathbf{a,b,}## and ##\mathbf{c}##, as per the nature of the dot and cross products. Loosely speaking, the scalar triple product tells us about the volume and the orientation of a triple of vectors.
In addition, allowing the volume to be signed gives us linearity: ##V(\mathbf{a_1+a_2, b, c})=V(\mathbf{a_1, b, c})+V(\mathbf{a_2, b, c})##. The same is not true in general when we take ##V## to be the absolute value.

The determinant works just the same way--linear in each argument, and equaling zero when the parallelotope flattens. (Hint: the 3x3 determinant is precisely the scalar triple product :wink:) Determinants are, in some sense, a generalization of the scalar triple product to an arbitrary number of dimensions.
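(A quick numeric sketch of that hint, not from the thread: the 3x3 determinant of the matrix with columns ##\mathbf{a,b,c}## equals ##\mathbf{a\cdot(b\times c)}##, and swapping two vectors flips the sign.)

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([2.0, 0.0, 1.0])

# Scalar triple product a . (b x c)
triple = a @ np.cross(b, c)

# Determinant of the matrix whose columns are a, b, c
det = np.linalg.det(np.column_stack([a, b, c]))
assert np.isclose(triple, det)

# Swapping two vectors reverses orientation, hence the sign flips.
assert np.isclose(a @ np.cross(c, b), -triple)
```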
 
  • #8
So then ##a## is whatever element you choose, and ##b \times c## is the corresponding cofactor? I learned about spans and kernels when I took linear algebra in 2012, but I am not in the mood to learn it again right now.
 
  • #9
Look at things this way: given an ##n\times n## matrix ##A##, with real coefficients for example, its determinant is the determinant of the column vectors in the canonical basis ##{\cal B}## of ##M_{n,1}(\mathbb{R})##, which you could write ## \det A = \det_{\cal B} (C_1,...,C_n)##.

With the multilinearity and alternating property of the determinant of a family of ##n## vectors in a vector space of dimension ##n##, you can write, for any fixed column ##j##:

##\det A = \sum_{i = 1}^n a_{i,j} \det_{\cal B} (C_1,...,C_{j-1}, e_i, C_{j+1},..,C_n)##.

And the determinant in the sum is what you call a cofactor with respect to position ##(i,j)##. Now let's evaluate this cofactor:

##\begin{align*}
\det_{\cal B} (C_1,...,C_{j-1}, e_i, C_{j+1},..,C_n) &= (-1)^{n-j} \det_{\cal B} (C_1,...,C_{j-1}, C_{j+1},..,C_n,e_i) \\
&= (-1)^{n-j} \det (B_{i,j}) \quad\quad\quad (*) \\
&= (-1)^{n-j} (-1)^{n-i} \Delta_{i,j} \\
&= (-1)^{i+j}\Delta_{i,j}
\end{align*}##

##(*)##: ##B_{i,j}## is the transpose of the matrix whose columns are ##(C_1,...,C_{j-1}, C_{j+1},..,C_n,e_i)##. The last row of ##B_{i,j}## contains a single nonzero entry, equal to 1.

At this point, ##\Delta_{i,j}## is the determinant of ##B_{i,j}## up to ##(n-i)## successive column transpositions, after which the ##n##-th column is zero except for its last entry, which is 1. From this you get the formula for the columnwise expansion of the determinant.
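(For concreteness, here is a minimal recursive sketch of that columnwise expansion, not from the thread: ##\det A = \sum_i a_{i,j} (-1)^{i+j} \Delta_{i,j}##, expanding along the first column.)

```python
import numpy as np

def det_expand(A):
    """Determinant via cofactor expansion along the first column."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for i in range(n):
        # Minor: delete row i and column 0; sign is (-1)^i for 0-based i.
        minor = np.delete(np.delete(A, i, axis=0), 0, axis=1)
        total += (-1) ** i * A[i, 0] * det_expand(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(det_expand(A), np.linalg.det(A))
```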

______

For the inverse, you can notice that if ##A## is invertible, the family ##{\cal C} = (C_1,...,C_n)## forms a basis of ##M_{n,1}(\mathbb{R})##. Given a vector ##U = {}^T (u_1,..,u_n)## given in canonical basis ##{\cal B}##, you have

## \begin{align*}
\det_{\cal C} (C_1,...,C_{j-1},U,C_{j+1},...,C_n) =& \det_{\cal C}({\cal B}) \det_{\cal B}(C_1,...,C_{j-1},U,C_{j+1},...,C_n) \\
= & \frac{1}{\det_{\cal B}({\cal C})} \sum_{i=1}^n u_i (\text{cof}(A))_{i,j} \\
= & \frac{1}{\det (A) } ({}^T \text{cof}(A) U)_j \\
\end{align*}##

Replace ##U## with ##C_i={}^T(a_{1i},...,a_{ni})##: ## \frac{1}{\det (A) } ({}^T \text{cof}(A) C_i)_j = \delta_{ij}##,
and then ## \frac{1}{\det (A) } {}^T \text{cof}(A)\, A = I_n##.
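(A numeric check of this final formula, added for illustration: building the cofactor matrix entrywise and verifying ##\frac{1}{\det A}\,{}^T\text{cof}(A)\, A = I_n##.)

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors: C[i, j] = (-1)^(i+j) * det(minor(i, j))."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])

adj = cofactor_matrix(A).T          # the adjugate (transposed cofactors)
A_inv = adj / np.linalg.det(A)

assert np.allclose(A_inv @ A, np.eye(3))
```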
 

1. What are cofactors and determinants?

Cofactors and determinants are mathematical concepts used in linear algebra to solve systems of equations and find the inverse of a matrix. A cofactor is the signed minor attached to an entry of a square matrix: the determinant of the submatrix obtained by deleting that entry's row and column, multiplied by ##(-1)^{i+j}##. A determinant is a scalar value that represents the "size" or "volume" of the parallelotope spanned by a matrix's columns.

2. How do cofactors and determinants relate to each other?

Cofactors and determinants are closely related, as the determinant of a matrix can be calculated using cofactors. Specifically, the determinant can be found by multiplying each element in a row or column by its corresponding cofactor and then summing these products (the Laplace expansion).

3. What is the purpose of finding cofactors and determinants?

Finding the cofactors and determinants of a matrix allows us to solve systems of equations and find the inverse of a matrix, which is useful in many applications such as solving physical problems, analyzing data, and designing algorithms.

4. How are cofactors and determinants used in real life?

Cofactors and determinants have many real-life applications, including but not limited to: calculating the volume of a shape in three-dimensional space, solving electrical circuit problems, analyzing economic data, and finding the best fit line for a set of data points.

5. Are there any shortcuts or tricks for finding cofactors and determinants?

Yes, there are several shortcuts and tricks for finding cofactors and determinants, such as using the Laplace expansion method or the Sarrus rule for finding determinants of 3x3 matrices. However, these methods may not always be applicable and it is important to understand the underlying concepts of cofactors and determinants in order to use these shortcuts effectively.
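(The rule of Sarrus mentioned above can be written out explicitly for a 3x3 matrix, as a small illustrative sketch: three "down-right" diagonal products minus three "down-left" diagonal products.)

```python
import numpy as np

def sarrus(m):
    """Rule of Sarrus: determinant of a 3x3 matrix m (list of rows)."""
    return (m[0][0] * m[1][1] * m[2][2]
            + m[0][1] * m[1][2] * m[2][0]
            + m[0][2] * m[1][0] * m[2][1]
            - m[0][2] * m[1][1] * m[2][0]
            - m[0][0] * m[1][2] * m[2][1]
            - m[0][1] * m[1][0] * m[2][2])

M = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 10.0]]
assert np.isclose(sarrus(M), np.linalg.det(np.array(M)))
```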
