What is the Log Concavity of Determinants for Positive Definite Matrices?

In summary, log concavity of determinants refers to the fact that ##A \mapsto \ln \det A## is a concave function on the convex set of positive definite matrices. Equivalently, ##\det(tA + (1-t)B) \geq \det(A)^t \det(B)^{1-t}## for positive definite ##A, B## and ##0 \leq t \leq 1##. This property lets convex optimization techniques be brought to bear on determinant problems and has practical applications in fields such as economics, engineering, and statistics. It is closely tied to positive definiteness and can be verified, for example, by checking that the Hessian of ##\ln \det## is negative semidefinite.
  • #1
Euge
Let ##A, B\in M_n(\mathbb{R})## be positive definite matrices. Prove that for ##0\le t\le 1##, $$\det(tA + (1 - t)B) \ge \det(A)^t\det(B)^{1-t}$$
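A quick numerical sanity check of the claimed inequality, as a minimal NumPy sketch (the matrix size, random seed, and tolerance are arbitrary choices):

```python
# Sanity check: det(tA + (1-t)B) >= det(A)^t det(B)^(1-t)
# for random symmetric positive definite A, B on a grid of t values.
import numpy as np

rng = np.random.default_rng(0)
n = 5

def random_spd(n):
    """Random symmetric positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)  # the shift keeps eigenvalues well away from 0

A, B = random_spd(n), random_spd(n)
for t in np.linspace(0.0, 1.0, 11):
    lhs = np.linalg.det(t * A + (1 - t) * B)
    rhs = np.linalg.det(A) ** t * np.linalg.det(B) ** (1 - t)
    assert lhs >= rhs - 1e-9, (t, lhs, rhs)
print("inequality holds at all sampled t")
```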
 
  • #2
The case ##t=0## is considered separately; it is immediate, as both sides equal ##\det B##:

$$
\det B \geq (\det A)^0 \det B .
$$

We now consider the case ##0 < t \leq 1##. I start with

$$
\det (t A + (1-t) B) \geq (\det A)^t (\det B)^{1-t}
$$

and work backwards to an inequality that can be seen to hold. Taking the logarithm of both sides gives

$$
\ln \det ( t A + (1-t) B) \geq t \ln \det A + (1-t) \ln \det B
$$

where we have used that ##\det A > 0## and ##\det B > 0##. As ##\det A \not= 0## the matrix ##A## has an inverse and the above inequality becomes

$$
\ln \det [tA ( \mathbb{1} + t^{-1} (1-t) A^{-1} B)] \geq t \ln \det A + (1-t) \ln \det B
$$

or

$$
\ln \det (t A) + \ln \det ( \mathbb{1} + t^{-1} (1-t) A^{-1} B) \geq t \ln \det A + (1-t) \ln \det B
$$

or

$$
nt + \ln \det ( \mathbb{1} + t^{-1} (1-t) A^{-1} B) \geq (1-t) (\ln \det B - \ln \det A)
$$

or

$$
nt + \ln \det ( \mathbb{1} + t^{-1} (1-t) A^{-1} B) \geq (1-t) \ln \det A^{-1} B
$$

The matrix ##A^{-1} B## is not necessarily symmetric and so cannot in general be diagonalised, but there is a similarity transformation ##S## that puts it into Jordan normal form ##J = S^{-1} A^{-1} B S##. Thus

$$
nt + \ln \det [S^{-1} ( \mathbb{1} + t^{-1} (1-t) A^{-1} B) S] \geq (1-t) \ln \det (S^{-1} A^{-1} B S)
$$

becomes

$$
nt + \ln \det ( \mathbb{1} + t^{-1} (1-t) J) \geq (1-t) \ln \det J
$$

which, in terms of the eigenvalues, is

$$
nt + \ln \prod_{i=1}^n ( 1 + t^{-1} (1-t) \lambda_i^{-1} \mu_i) \geq (1-t) \ln (\prod_{i=1}^n \lambda_i^{-1} \mu_i)
$$

where ##\lambda_i## are the eigenvalues of ##A## and ##\mu_i## are the eigenvalues of ##B##. These eigenvalues are positive and so the logarithm makes sense. The above inequality can be rewritten as

$$
nt + \sum_{i=1}^n ( 1 + t^{-1} (1-t) \lambda_i^{-1} \mu_i) \geq (1-t) \sum_{i=1}^n \lambda_i^{-1} \mu_i
$$

or

\begin{align*}
nt + \sum_{i=1}^n 1 & \geq [(1-t) - t^{-1} (1-t)] \sum_{i=1}^n \lambda_i^{-1} \mu_i
\nonumber \\
& = - (1-t)^2 t^{-1} \sum_{i=1}^n \lambda_i^{-1} \mu_i
\end{align*}

which obviously holds, as the RHS is non-positive for ##t \in (0,1]## because the ##\lambda_i^{-1} \mu_i## (##i = 1 , \dots , n##) are positive.
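One step above deserves scrutiny, however: unless ##A## and ##B## commute, the eigenvalues of ##A^{-1} B## are not the products ##\lambda_i^{-1} \mu_i## of the separate eigenvalues; presumably this is the error acknowledged in the next post. A minimal NumPy sketch (random matrices, sizes arbitrary) makes the discrepancy visible:

```python
# Check whether the eigenvalues of A^{-1}B equal products lambda_i^{-1} mu_i
# of the separate eigenvalues -- in general they do not, because A and B
# need not commute.
import numpy as np

rng = np.random.default_rng(1)
n = 3
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = M1 @ M1.T + n * np.eye(n)   # random symmetric positive definite
B = M2 @ M2.T + n * np.eye(n)

lam = np.linalg.eigvalsh(A)                    # eigenvalues of A (ascending)
mu = np.linalg.eigvalsh(B)                     # eigenvalues of B (ascending)
nu = np.linalg.eigvals(np.linalg.solve(A, B))  # eigenvalues of A^{-1} B

print(np.sort(nu.real))   # actual eigenvalues of A^{-1} B (real, positive)
print(np.sort(mu / lam))  # one pairing of lambda_i^{-1} mu_i; differs in general
```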
 
  • #3
I spotted an error in the argument of my previous post, but I think I know how to fix it.

EDIT: on 9th Sept I added further explanation.

In my previous post I demonstrated that

$$
\det (t A + (1-t) B) \geq (\det A)^t (\det B)^{1-t}
$$

reduces to

$$
nt + \ln \det ( \mathbb{1} + t^{-1} (1-t) A^{-1} B) \geq (1-t) \ln \det A^{-1} B \qquad (*)
$$

A positive definite matrix has a positive definite square-root matrix. Proof:

The matrix ##B## is positive definite, hence symmetric, so there is a similarity transformation ##S^{-1} B S = D##, where ##D## is diagonal and ##S## is the matrix with the ##n## eigenvectors of ##B## as its columns. The entries of ##D## are the eigenvalues of ##B## and so are positive. Because ##B## is symmetric, the eigenvectors can be chosen orthonormal, making ##S## orthogonal (##S^{-1} = S^T##). We may therefore define a positive definite square-root matrix ##B^{1/2} = S D^{1/2} S^T##; it has the same eigenvectors as ##B##, but with eigenvalues the square roots of those of ##B##.
Q.E.D.

Now, the matrix ##A^{-1} B## can be written as ##B^{-1/2} (B^{1/2} A^{-1} B^{1/2}) B^{1/2}##. As such the matrix ##A^{-1} B## has the same eigenvalues as the positive definite matrix ##B^{1/2} A^{-1} B^{1/2}## because:

$$
\det (\mathbb{1} \lambda - A^{-1} B) = \det (\mathbb{1} \lambda - B^{-1/2} (B^{1/2} A^{-1} B^{1/2}) B^{1/2}) = \det (\mathbb{1} \lambda - B^{1/2} A^{-1} B^{1/2})
$$

As such the matrix ##A^{-1} B## has positive eigenvalues, let us denote them as ##\nu_i##.
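Both steps, the square-root construction and the positivity of the ##\nu_i##, are easy to confirm numerically; a minimal NumPy sketch (random matrices, sizes arbitrary):

```python
# Verify: (1) B^{1/2} = S D^{1/2} S^T squares to B, and
# (2) A^{-1}B has the same (positive) eigenvalues as B^{1/2} A^{-1} B^{1/2}.
import numpy as np

rng = np.random.default_rng(2)
n = 4
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = M1 @ M1.T + n * np.eye(n)   # random symmetric positive definite
B = M2 @ M2.T + n * np.eye(n)

d, S = np.linalg.eigh(B)                  # B = S diag(d) S^T, S orthogonal
B_half = S @ np.diag(np.sqrt(d)) @ S.T    # positive definite square root
assert np.allclose(B_half @ B_half, B)

nu = np.sort(np.linalg.eigvals(np.linalg.solve(A, B)).real)        # eig(A^{-1}B)
nu_sym = np.sort(np.linalg.eigvalsh(B_half @ np.linalg.solve(A, B_half)))
assert np.allclose(nu, nu_sym) and np.all(nu > 0)
print("nu_i =", nu)
```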

The matrix ##A^{-1} B## is not necessarily symmetric and so cannot in general be diagonalised, but there is a similarity transformation that puts it into Jordan normal form ##J##, so that ##(*)## becomes

$$
nt + \ln \det ( \mathbb{1} + t^{-1} (1-t) J) \geq (1-t) \ln \det J
$$

where the diagonal elements of ##J## are the ##\nu_i > 0## (##i = 1 , \dots , n##). This reduces to

$$
nt + \ln \prod_{i=1}^n ( 1 + t^{-1} (1-t) \nu_i) \geq (1-t) \ln (\prod_{i=1}^n \nu_i)
$$

These eigenvalues are positive and so the logarithm makes sense. The above inequality can be rewritten as

$$
nt + \sum_{i=1}^n ( 1 + t^{-1} (1-t) \nu_i) \geq (1-t) \sum_{i=1}^n \nu_i
$$

or

\begin{align*}
nt + \sum_{i=1}^n 1 & \geq [(1-t) - t^{-1} (1-t)] \sum_{i=1}^n \nu_i
\nonumber \\
& = - (1-t)^2 t^{-1} \sum_{i=1}^n \nu_i
\end{align*}

which obviously holds, as the RHS is non-positive for ##t \in (0,1]## because the ##\nu_i## (##i = 1 , \dots , n##) are positive.
 
  • #4
I made a mistake in the previous attempts (inattention!), but I think this time the proof is complete.

I start with

$$
\det (t A + (1-t) B) \geq (\det A)^t (\det B)^{1-t}
$$

and work backwards to an inequality that can be proven to hold. Taking the logarithm of both sides gives

$$
\ln \det ( t A + (1-t) B) \geq t \ln \det A + (1-t) \ln \det B
$$

where we have used that ##\det A > 0## and ##\det B > 0##. As ##\det A \not= 0## the matrix ##A## has an inverse and the above inequality becomes

$$
\ln \det [A ( t\mathbb{1} + (1-t) A^{-1} B)] \geq t \ln \det A + (1-t) \ln \det B
$$

or

$$
\ln \det A + \ln \det ( t \mathbb{1} + (1-t) A^{-1} B) \geq t \ln \det A + (1-t) \ln \det B
$$

or

$$
\ln \det ( t \mathbb{1} + (1-t) A^{-1} B) \geq (1-t) (\ln \det B - \ln \det A)
$$

or

$$
\ln \det ( t \mathbb{1} + (1-t) A^{-1} B) \geq (1-t) \ln \det A^{-1} B \qquad (*)
$$

A positive definite matrix has a positive definite square-root matrix. Proof:

The matrix ##B## is positive definite, hence symmetric, so there is a similarity transformation ##S^{-1} B S = D##, where ##D## is diagonal and ##S## is the matrix with the ##n## eigenvectors of ##B## as its columns. The entries of ##D## are the eigenvalues of ##B## and so are positive. Because ##B## is symmetric, the eigenvectors can be chosen orthonormal, making ##S## orthogonal (##S^{-1} = S^T##). We may therefore define a positive definite square-root matrix ##B^{1/2} = S D^{1/2} S^T##; it has the same eigenvectors as ##B##, but with eigenvalues the square roots of those of ##B##.
Q.E.D.

Now, the matrix ##A^{-1} B## can be written as ##B^{-1/2} (B^{1/2} A^{-1} B^{1/2}) B^{1/2}##. As such the matrix ##A^{-1} B## has the same eigenvalues as the positive definite matrix ##B^{1/2} A^{-1} B^{1/2}## because:

$$
\det (\mathbb{1} \lambda - A^{-1} B) = \det (\mathbb{1} \lambda - B^{-1/2} (B^{1/2} A^{-1} B^{1/2}) B^{1/2}) = \det (\mathbb{1} \lambda - B^{1/2} A^{-1} B^{1/2})
$$

As such the matrix ##A^{-1} B## has positive eigenvalues, let us denote them as ##\nu_i##.

The matrix ##A^{-1} B## is not necessarily symmetric and so cannot in general be diagonalised, but there is a similarity transformation that puts it into Jordan normal form ##J##, so that ##(*)## becomes

$$
\ln \det ( t \mathbb{1} + (1-t) J) \geq (1-t) \ln \det J
$$

where the diagonal elements of ##J## are the ##\nu_i > 0## (##i = 1 , \dots , n##). This reduces to

$$
\ln \prod_{i=1}^n ( t + (1-t) \nu_i) \geq (1-t) \ln (\prod_{i=1}^n \nu_i)
$$

These eigenvalues are positive and so the logarithm makes sense. The above inequality can be rewritten as

$$
\sum_{i=1}^n \ln ( t + (1-t) \nu_i) \geq (1-t) \sum_{i=1}^n \ln \nu_i .
$$

In order to prove this inequality we introduce the function:

$$
f(t) = \sum_{i=1}^n \ln ( t + (1-t) \nu_i) - (1-t) \sum_{i=1}^n \ln \nu_i .
$$

First note that ##f(0) = f(1) = 0##. The first derivative is

$$
f'(t) = \sum_{i=1}^n \frac{1- \nu_i}{\nu_i + t (1- \nu_i)} + \sum_{i=1}^n \ln \nu_i
$$

and so the second derivative is

$$
f'' (t) = - \sum_{i=1}^n \frac{(1- \nu_i)^2}{(\nu_i + t (1- \nu_i))^2} \leq 0 ,
$$

with equality only if every ##\nu_i = 1##. As ##f(t)## is twice-differentiable and ##f''(t)## is non-positive, the function is concave on ##[0,1]##. Combining this with ##f(0) = f(1) = 0##, we have

$$
f (t) = f ((1-t) \cdot 0 + t \cdot 1) \geq (1-t) f(0) + t f(1) = 0
$$

for ##t \in [0,1]##.
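As a numerical cross-check of this concavity argument, here is a minimal NumPy sketch (the positive ##\nu_i## are drawn at random, an arbitrary choice):

```python
# Check f(t) = sum_i ln(t + (1-t) nu_i) - (1-t) sum_i ln(nu_i):
# f(0) = f(1) = 0, f'' <= 0, hence f >= 0 on [0, 1].
import numpy as np

rng = np.random.default_rng(3)
nu = rng.uniform(0.1, 5.0, size=6)   # positive eigenvalues nu_i

def f(t):
    return np.sum(np.log(t + (1 - t) * nu)) - (1 - t) * np.sum(np.log(nu))

def f2(t):  # second derivative, matching the closed form above
    return -np.sum((1 - nu) ** 2 / (t + (1 - t) * nu) ** 2)

ts = np.linspace(0.0, 1.0, 101)
assert abs(f(0.0)) < 1e-12 and abs(f(1.0)) < 1e-12
assert all(f2(t) <= 0 for t in ts)
assert all(f(t) >= -1e-12 for t in ts)
print("f(0)=f(1)=0, f''<=0, and f>=0 on [0,1] confirmed numerically")
```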
 
  • #5
For every ##t\in [0,1]##, ##tA + (1 - t)B## is real positive definite, since ##\mathbf{x}\cdot (tA + (1 - t)B)\mathbf{x} = t\,\mathbf{x}\cdot A\mathbf{x} + (1-t)\,\mathbf{x}\cdot B\mathbf{x} > 0## for ##\mathbf{x} \neq \mathbf{0}##. Hence
$$
\frac{\pi^{n/2}}{\sqrt{\det(tA + (1 - t)B)}} = \int_{\mathbf{R}^n} e^{-\mathbf{x}\cdot (tA + (1 - t)B)\mathbf{x}}\, d\mathbf{x} = \int_{\mathbf{R}^n} (e^{-\mathbf{x}\cdot A\mathbf{x}})^t (e^{-\mathbf{x}\cdot B\mathbf{x}})^{1-t}\, d\mathbf{x}
$$
By Hölder's inequality applied with conjugate exponents ##1/t## and ##1/(1-t)## (for ##0 < t < 1##; the endpoint cases ##t = 0## and ##t = 1## are trivial), the latter integral is dominated by
$$
\left(\int_{\mathbf{R}^n} e^{-\mathbf{x}\cdot A\mathbf{x}}\, d\mathbf{x}\right)^t \left(\int_{\mathbf{R}^n} e^{-\mathbf{x}\cdot B\mathbf{x}}\, d\mathbf{x}\right)^{1-t}
$$
The latter expression equals
$$
\left(\frac{\pi^{n/2}}{\sqrt{\det A}}\right)^t \left(\frac{\pi^{n/2}}{\sqrt{\det B}}\right)^{1-t} = \frac{\pi^{n/2}}{\sqrt{(\det A)^t (\det B)^{1-t}}}
$$
Thus
$$
\frac{\pi^{n/2}}{\sqrt{\det(tA + (1 - t)B)}} \le \frac{\pi^{n/2}}{\sqrt{(\det A)^t (\det B)^{1-t}}}
$$
which is equivalent to the desired inequality.
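The Gaussian integral identity underlying this argument is easy to confirm numerically for small ##n##; a minimal SciPy sketch for ##n = 2## (the finite box ##[-10, 10]^2## stands in for ##\mathbf{R}^2##, adequate here because the integrand decays rapidly):

```python
# Verify int_{R^2} exp(-x.Ax) dx = pi^{n/2} / sqrt(det A) for a 2x2 SPD A.
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # symmetric positive definite

integrand = lambda y, x: np.exp(-np.array([x, y]) @ A @ np.array([x, y]))
val, err = dblquad(integrand, -10, 10, lambda x: -10, lambda x: 10)

exact = np.pi ** (2 / 2) / np.sqrt(np.linalg.det(A))
print(val, exact)  # agree to quadrature accuracy
```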
 
  • #6
Wow, that's a ridiculously slick proof.
 
  • #7

1. What is the definition of log concavity of determinants?

Log concavity of determinants refers to the fact that ##A \mapsto \ln \det A## is a concave function on the set of positive definite matrices. In other words, for positive definite ##A## and ##B## and ##0 \le t \le 1##, ##\ln\det(tA + (1-t)B) \geq t \ln\det A + (1-t)\ln\det B##, which is exactly the inequality proved above.

2. How is log concavity of determinants useful in mathematics?

Log concavity of determinants is useful in several areas of mathematics, including linear algebra, convex optimization, and probability. Because ##\ln \det## is concave, problems such as maximum-likelihood estimation of covariance matrices and D-optimal experiment design can be formulated and solved as convex optimization problems, with further applications in economics, engineering, and statistics.
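For example, the Gaussian log-likelihood in statistics contains a ##\ln \det## term; a minimal NumPy sketch (the covariance matrix is an arbitrary example) of evaluating it stably via a Cholesky factorization rather than through the raw determinant:

```python
# Evaluate ln det(Sigma) stably via Cholesky: if Sigma = L L^T,
# then ln det(Sigma) = 2 * sum_i ln(L_ii). Avoids over/underflow in det().
import numpy as np

Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.2],
                  [0.5, 0.2, 2.0]])  # example SPD covariance matrix

L = np.linalg.cholesky(Sigma)
logdet = 2.0 * np.sum(np.log(np.diag(L)))

sign, logdet_ref = np.linalg.slogdet(Sigma)  # NumPy's built-in reference
assert sign > 0 and np.isclose(logdet, logdet_ref)
print(logdet)
```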

3. What are some properties of log concave determinants?

There are several properties related to log concavity of determinants, including:

  • Along any line segment of positive definite matrices, ##t \mapsto \ln \det(tA + (1-t)B)## is a concave function of ##t \in [0,1]##.
  • The same concavity holds for every principal submatrix, since extracting a principal submatrix is a linear map that preserves positive definiteness.
  • A closely related result is Minkowski's determinant inequality: ##\det(A+B)^{1/n} \geq \det(A)^{1/n} + \det(B)^{1/n}## for positive semidefinite ##A## and ##B##.

4. How is log concavity of determinants related to positive definiteness?

Positive definiteness is the natural domain for this property: ##\ln \det## is concave precisely on the convex set of positive definite matrices, where all eigenvalues (and hence the determinant) are positive. For matrices that are merely symmetric with positive determinant the inequality can fail; for example, with ##A = -I## and ##B = I## in even dimension, both determinants equal ##1## but the midpoint ##\tfrac{1}{2}A + \tfrac{1}{2}B## is the zero matrix. So positive definiteness is a hypothesis of log concavity rather than a consequence of it.

5. Can log concavity of determinants be extended to higher dimensions?

Yes. The inequality holds for real positive definite matrices of every size ##n##, and the same arguments go through for complex Hermitian positive definite matrices. None of the proofs above depends on a particular dimension, although explicit computations naturally grow with ##n##.
