What is the determinant of a block matrix?

Summary
The discussion focuses on understanding the determinant of the block matrix \Sigma_{(j)} = \left(\begin{array}{cc}\sigma_{jj} & \sigma_{(j)}'\\\sigma_{(j)} & \Sigma_{(2)}\end{array}\right). The key result is that the determinant can be expressed as |\Sigma_{(j)}| = |\Sigma_{(2)}|(\sigma_{jj} - \sigma_{(j)}'\Sigma_{(2)}^{-1}\sigma_{(j)}). Participants suggest that working through examples with smaller matrices can help clarify how the formula arises. One proposed method left-multiplies the matrix by a block-diagonal transformation to simplify the determinant calculation. In the end the determinant turns out to be a straightforward result when approached correctly, despite initially appearing complex.
maverick280857
Hi,

I've been trying to get my head around this. \Sigma_{(j)} is a p x p matrix given by

\Sigma_{(j)} = \left(\begin{array}{cc}\sigma_{jj} & \boldsymbol{\sigma_{(j)}'}\\\boldsymbol{\sigma_{(j)}} & \boldsymbol{\Sigma_{(2)}}\end{array}\right)

where \sigma_{jj} is a scalar, \boldsymbol{\sigma_{(j)}} is a (p-1)x1 column vector, and \boldsymbol{\Sigma_{(2)}} is a (p-1)x(p-1) matrix.

The result I can't understand is

|\Sigma_{(j)}| = |\Sigma_{(2)}|(\sigma_{jj} - \boldsymbol{\sigma_{(j)}'\Sigma_{(2)}^{-1}\sigma_{(j)}})

where |.| denotes the determinant. How does one get this? It seems to be consistent, but I don't 'see' why it is obvious. I searched the internet for results on determinants of block matrices, but all I found was material for \left(\begin{array}{cc}A & B\\C & D\end{array}\right) with A, B, C, D all n x n matrices, in which case the determinant is \det(AD - BC) (a formula that holds when C and D commute).

Any inputs would be appreciated.

Thanks in advance!
 
Someone?
 
Have you worked any examples? The best way to understand how something (a proof, a theorem, a process) works is to repeat it yourself. Try it for a 3x3 matrix, then a 4x4, and see if you can identify the specific machinery that permits this formula.
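As a quick way to test a candidate formula while working such examples, here is a small numerical sanity check (a sketch using NumPy; the dimension p = 4 and the random construction are arbitrary choices for illustration, not from the thread):

import numpy as np

rng = np.random.default_rng(0)
p = 4  # sigma_jj is the (1,1) scalar, Sigma_2 is the trailing (p-1) x (p-1) block

# build a random symmetric positive-definite p x p matrix and partition it
A = rng.standard_normal((p, p))
Sigma_j = A @ A.T + p * np.eye(p)

sigma_jj = Sigma_j[0, 0]      # scalar block
sigma_vec = Sigma_j[1:, 0]    # (p-1)-vector; equals the first row's tail by symmetry
Sigma_2 = Sigma_j[1:, 1:]     # (p-1) x (p-1) block

lhs = np.linalg.det(Sigma_j)
rhs = np.linalg.det(Sigma_2) * (sigma_jj - sigma_vec @ np.linalg.solve(Sigma_2, sigma_vec))
print(lhs, rhs, np.isclose(lhs, rhs))  # the two values should agree to rounding error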
 
Hmm, I can think of one way you could prove this, but it might not be the best or most 'obvious' way. Still, better than nothing.

Left-multiply your matrix by

\left(\begin{array}{cc}1/\sigma_{jj} & \boldsymbol{0}\\\boldsymbol{0} & \boldsymbol{\Sigma_{(2)}^{-1}}\end{array}\right)

and see what you get. You can then work out the determinant using the determinant-of-products rule, \det(AB) = \det(A)\det(B).
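To sketch how that suggestion plays out (assuming, as the target formula already does, that \sigma_{jj} \neq 0 and \boldsymbol{\Sigma_{(2)}} is invertible), the block product is

\left(\begin{array}{cc}1/\sigma_{jj} & \boldsymbol{0}\\\boldsymbol{0} & \boldsymbol{\Sigma_{(2)}^{-1}}\end{array}\right)\Sigma_{(j)} = \left(\begin{array}{cc}1 & \boldsymbol{\sigma_{(j)}'}/\sigma_{jj}\\\boldsymbol{\Sigma_{(2)}^{-1}\sigma_{(j)}} & \boldsymbol{I}\end{array}\right)

The block-diagonal factor has determinant 1/(\sigma_{jj}|\Sigma_{(2)}|), so the product rule gives

\frac{|\Sigma_{(j)}|}{\sigma_{jj}|\Sigma_{(2)}|} = \left|\begin{array}{cc}1 & \boldsymbol{\sigma_{(j)}'}/\sigma_{jj}\\\boldsymbol{\Sigma_{(2)}^{-1}\sigma_{(j)}} & \boldsymbol{I}\end{array}\right| = 1 - \frac{\boldsymbol{\sigma_{(j)}'\Sigma_{(2)}^{-1}\sigma_{(j)}}}{\sigma_{jj}}

where the last equality follows by subtracting \boldsymbol{\Sigma_{(2)}^{-1}\sigma_{(j)}} times the first row from the remaining rows and applying the matrix determinant lemma \det(\boldsymbol{I} - \boldsymbol{u}\boldsymbol{v}') = 1 - \boldsymbol{v}'\boldsymbol{u}. Rearranging recovers the quoted result.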
 
Thanks to everyone who replied. It turns out that the result is rather simple:

|\Sigma_{(j)}| = \sigma_{jj}|\Sigma_{(2)}| - \boldsymbol{\sigma_{(j)}'}\,\mathrm{adj}(\boldsymbol{\Sigma_{(2)}})\boldsymbol{\sigma_{(j)}}

(noting that the (1,2) 'element' is actually a row vector, and using the usual minor-cofactor expansion of the determinant)

The final step is to write the adjugate as the product of the (scalar) determinant and the inverse, \mathrm{adj}(\boldsymbol{\Sigma_{(2)}}) = |\Sigma_{(2)}|\boldsymbol{\Sigma_{(2)}^{-1}}, and factor the determinant out. I admit, though, that this is more of a verification working backwards from the stated result than a forward derivation.
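For completeness, a forward route (a standard block-factorization argument, not necessarily the one the textbook intends) is to write, assuming \boldsymbol{\Sigma_{(2)}} is invertible,

\Sigma_{(j)} = \left(\begin{array}{cc}1 & \boldsymbol{\sigma_{(j)}'\Sigma_{(2)}^{-1}}\\\boldsymbol{0} & \boldsymbol{I}\end{array}\right)\left(\begin{array}{cc}\sigma_{jj} - \boldsymbol{\sigma_{(j)}'\Sigma_{(2)}^{-1}\sigma_{(j)}} & \boldsymbol{0}\\\boldsymbol{\sigma_{(j)}} & \boldsymbol{\Sigma_{(2)}}\end{array}\right)

The first factor is unit upper triangular, so its determinant is 1; the second is block lower triangular, so its determinant is the product of the determinants of its diagonal blocks. Taking determinants gives |\Sigma_{(j)}| = |\Sigma_{(2)}|(\sigma_{jj} - \boldsymbol{\sigma_{(j)}'\Sigma_{(2)}^{-1}\sigma_{(j)}}) directly; the scalar factor is the Schur complement of \boldsymbol{\Sigma_{(2)}} in \Sigma_{(j)}.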
 