
Cofactors & Determinants

  1. Feb 20, 2016 #1

    Maylis

    User Avatar
    Gold Member

    I have been reviewing linear algebra for my FE exam, and I was thinking about cofactors. What are these strange things? It totally mystifies me that you can make a cofactor matrix from a matrix A (where does the alternating +/- come from??), transpose it to get the adjoint, find the determinant (I still don't understand what this thing is, just something I know how to calculate), then divide the adjoint by that determinant to find the inverse of the original matrix A. How in the heck do all these things relate to each other? What is the underlying meaning here, other than to say that you can calculate the matrix inverse? It looks like magic to me!!
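    For reference, here is the recipe I know how to calculate, spelled out as a minimal NumPy sketch (the matrix A below is just an arbitrary invertible example):

    Python:
    import numpy as np

    def cofactor_matrix(A):
        # Entry (i, j) is (-1)**(i+j) times the determinant of A
        # with row i and column j deleted.
        n = A.shape[0]
        C = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    adjugate = cofactor_matrix(A).T              # transpose of the cofactor matrix
    A_inv = adjugate / np.linalg.det(A)          # adjoint divided by the determinant
    print(np.allclose(A_inv, np.linalg.inv(A)))  # True -- but WHY does it work?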
     
  3. Feb 20, 2016 #2

    fresh_42

    Staff: Mentor

    You can start with an invertible ##n\times n## matrix ##A=(a_{ij})_{i,j}## and try to solve the equation ##A \cdot X = 1## with another invertible matrix ##X##. This gives you ##n^2## linear equations in ##n^2## unknown entries. In the end you will have the formula you mentioned. Instead, it is probably smarter to start with small ##n## and see how it goes. Then make a proof by induction, and you will end up with the formula again.
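    If it helps to see that idea concretely, here is a small Python/NumPy sketch: the ##n^2## equations in the ##n^2## unknown entries of ##X## can be packed into one big linear system with a Kronecker product (the matrix ##A## below is an arbitrary example):

    Python:
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    n = A.shape[0]

    # With row-major vectorization, vec(A @ X) = kron(A, I) @ vec(X),
    # so A @ X = I becomes the n**2 x n**2 linear system below.
    big = np.kron(A, np.eye(n))
    x = np.linalg.solve(big, np.eye(n).flatten())
    X = x.reshape(n, n)

    print(np.allclose(A @ X, np.eye(n)))  # True: X is the inverse of A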
     
  4. Feb 20, 2016 #3
    One preliminary note--as every linear algebra student should know, the determinant of a matrix is equal to the signed volume of the parallelotope formed by its columns. When the columns are linearly dependent, the parallelotope "flattens" and has zero volume. (It helps to imagine it in two or three dimensions.)

    Consider a set of ##n## vectors in ##n##-dimensional space. We will take these vectors ##\mathbf{v}_i## to be the columns of our (square) matrix.
    Now, let ##\mathbf{v}## be arbitrary. Then define a map ##f: \mathbb{R}^n\rightarrow\mathbb{R}## via the equation ##f(\mathbf{v})=\det [\mathbf{v\ v}_2\ \ldots\ \mathbf{v}_n]##. Because ##\det## is linear in each column of a matrix, ##f## is a linear functional on ##\mathbb{R}^n## with kernel ##\mathrm{span}(\{\mathbf{v}_2,\ldots,\mathbf{v}_n\})##. Also, ##f(\mathbf{v}_1)## is the determinant of the original matrix.

    Finally, remember that, after choosing a basis, every linear map from ##\mathbb{R}^m## to ##\mathbb{R}^n## has a matrix associated to it. In fact, the matrix associated to ##f## is a row vector whose elements are precisely the cofactors along the first column.
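    (A quick numerical check of that last claim, if it helps -- a Python/NumPy sketch with an arbitrary random ##4\times 4## matrix ##M##:)

    Python:
    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))

    def f(v):
        # Determinant of M with its first column replaced by v.
        N = M.copy()
        N[:, 0] = v
        return np.linalg.det(N)

    # The matrix of f is the row vector of its values on the standard basis...
    row = np.array([f(e) for e in np.eye(4)])

    # ...which should be the cofactors along the first column.
    cof = np.array([(-1) ** i * np.linalg.det(
        np.delete(np.delete(M, i, axis=0), 0, axis=1)) for i in range(4)])

    print(np.allclose(row, cof))                     # True
    print(np.isclose(f(M[:, 0]), np.linalg.det(M)))  # True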

    This interpretation is rather abstract, and fits within the context of multilinear algebra. The early pioneers of linear algebra probably came up with cofactors as mere tools for computing determinants--but in any case, it turns out they have a very elegant interpretation in modern mathematics.
     
  5. Feb 20, 2016 #4

    Maylis

    User Avatar
    Gold Member

    suremarc, I have no idea what the heck you wrote. That is way over my head!
     
  6. Feb 20, 2016 #5
    Err, oops, I must have gotten carried away. Which part is confusing you?
     
  7. Feb 20, 2016 #6

    Maylis

    User Avatar
    Gold Member

    Basically none of it. But I do find the part you mentioned about the determinant being equal to the volume of a parallelepiped interesting. Maybe you can expand on that (please no crazy abstract math if possible). Just forget everything from the second paragraph on.

    Also, it's weird that the volume of a parallelepiped has ANYTHING to do with an inverse of a matrix.
     
  8. Feb 20, 2016 #7
    Hmm. I had figured that most of the things I mentioned are taught in linear algebra. Maybe the curriculum for engineering majors is different.

    Consider a parallelepiped formed by 3 vectors, like this one:
    [Figure: a parallelepiped spanned by the vectors ##\mathbf{a}##, ##\mathbf{b}##, and ##\mathbf{c}##]
    Its volume is, up to a change of sign, equal to ##\mathbf{a\cdot(b\times c)}##, also known as the scalar triple product. Scaling any of ##\mathbf{a,b,}## or ##\mathbf{c}## scales the volume by the same amount.
    The formula ##V=\mathbf{a\cdot(b\times c)}## mostly works, but sometimes we get negative values. This happens because our formula also depends on the orientation of ##\mathbf{a,b,}## and ##\mathbf{c}##, as per the nature of the dot and cross products. Loosely speaking, the scalar triple product tells us about the volume and the orientation of a triple of vectors.
    In addition, allowing the volume to be signed gives us linearity: ##V(\mathbf{a_1+a_2, b, c})=V(\mathbf{a_1, b, c})+V(\mathbf{a_2, b, c})##. The same is not true in general when we take ##V## to be the absolute value.

    The determinant works just the same way--linear in each argument, and equaling zero when the parallelotope flattens. (Hint: the 3x3 determinant is precisely the scalar triple product :wink:) Determinants are, in some sense, a generalization of the scalar triple product to an arbitrary number of dimensions.
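    If you want to check the 3x3 case yourself, here is a quick Python/NumPy sanity check (the three vectors are arbitrary):

    Python:
    import numpy as np

    a = np.array([1.0, 2.0, 0.0])
    b = np.array([0.0, 1.0, 3.0])
    c = np.array([2.0, 0.0, 1.0])

    # Scalar triple product vs. the determinant of the matrix
    # whose columns are a, b, c.
    triple = np.dot(a, np.cross(b, c))
    det = np.linalg.det(np.column_stack([a, b, c]))

    print(np.isclose(triple, det))  # True; the sign tracks orientation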
     
  9. Feb 20, 2016 #8

    Maylis

    User Avatar
    Gold Member

    So then ##a## is whatever element you choose, and ##b \times c## is the cofactor? I learned about spans and kernels when I took linear algebra in 2012, but I am not in the mood to learn it again right now.
     
  10. Feb 21, 2016 #9
    Look at things this way: given an ##n\times n## matrix ##A##, with real coefficients for example, its determinant is the determinant of its column vectors in the canonical basis ##{\cal B}## of ##M_{n,1}(\mathbb{R})##, which you could write ## \det A = \det_{\cal B} (C_1,...,C_n)##.

    With the multilinearity and alternating property of the determinant of a family of ##n## vectors in a vector space of dimension ##n##, you can write:

    ##\det A = \sum_{i = 1}^n a_{i,j} \det_{\cal B} (C_1,...,C_{j-1}, e_i, C_{j+1},..,C_n)##.

    And the determinant in the sum is what you call a cofactor with respect to position ##(i,j)##. Now let's evaluate this cofactor:

    ##\begin{align*}
    \det_{\cal B} (C_1,...,C_{j-1}, e_i, C_{j+1},..,C_n) &= (-1)^{n-j} \det_{\cal B} (C_1,...,C_{j-1}, C_{j+1},..,C_n,e_i) \\
    &= (-1)^{n-j} \det (B_{i,j}) \quad\quad\quad (*) \\
    &= (-1)^{n-j} (-1)^{n-i} \Delta_{i,j} \\
    &= (-1)^{i+j}\Delta_{i,j}
    \end{align*}##

    ##(*)##: ##B_{i,j}## is the transpose of the matrix whose columns are ## (C_1,...,C_{j-1}, C_{j+1},..,C_n,e_i) ##. Note that the last row of ##B_{i,j}## has a single nonzero entry: a 1 in column ##i##.

    After ##(n-i)## successive column transpositions (which produce the sign ##(-1)^{n-i}## above), the last row of ##B_{i,j}## becomes ##(0,...,0,1)##, and expanding along it leaves the ##(n-1)\times(n-1)## minor ##\Delta_{i,j}##: the determinant of ##A## with row ##i## and column ##j## deleted. So you get the formula for the columnwise (Laplace) expansion of the determinant.
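    (A numerical sanity check of this columnwise expansion, as a Python/NumPy sketch with an arbitrary random matrix:)

    Python:
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    j = 2  # expand along the third column (0-indexed)

    # det A = sum_i a_{ij} * (-1)**(i+j) * Delta_{ij},
    # where Delta_{ij} is the minor with row i and column j deleted.
    expansion = sum(
        A[i, j] * (-1) ** (i + j)
        * np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))
        for i in range(4))

    print(np.isclose(expansion, np.linalg.det(A)))  # True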

    ______

    For the inverse, you can notice that if ##A## is invertible, the family ##{\cal C} = (C_1,...,C_n)## forms a basis of ##M_{n,1}(\mathbb{R})##. For a vector ##U = {}^T (u_1,...,u_n)## written in the canonical basis ##{\cal B}##, you have

    ## \begin{align*}
    \det_{\cal C} (C_1,...,C_{j-1},U,C_{j+1},...,C_n) &= \det_{\cal C}({\cal B}) \det_{\cal B}(C_1,...,C_{j-1},U,C_{j+1},...,C_n) \\
    &= \frac{1}{\det_{\cal B}({\cal C})} \sum_{i=1}^n u_i (\text{cof}(A))_{i,j} \\
    &= \frac{1}{\det (A)} ({}^T \text{cof}(A)\, U)_j
    \end{align*}##


    Now replace ##U## with ##C_i={}^T(a_{1i},...,a_{ni})##. Since ##\det_{\cal C} (C_1,...,C_{j-1},C_i,C_{j+1},...,C_n) = \delta_{ij}## (a repeated column gives 0, and for ##i = j## it is the determinant of the basis ##{\cal C}## with respect to itself, which is 1), you get ## \frac{1}{\det (A)} ({}^T \text{cof}(A)\, C_i)_j = \delta_{ij}##,
    and then ## \frac{1}{\det (A)} {}^T \text{cof}(A)\, A = I_n##.
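    (That key step -- the determinant of ##A## with column ##j## replaced by column ##i## equals ##\delta_{ij}\det A## -- is easy to check numerically; a Python/NumPy sketch with an arbitrary random matrix:)

    Python:
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3))
    d = np.linalg.det(A)

    for i in range(3):
        for j in range(3):
            B = A.copy()
            B[:, j] = A[:, i]  # replace column j with column i
            # Repeated column => 0; i == j leaves A unchanged => det A.
            expected = d if i == j else 0.0
            assert np.isclose(np.linalg.det(B), expected)

    print("column-replacement identity verified")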
     