Proving a formula for special determinants

  • #1
schniefen
TL;DR Summary
Show that the below determinant satisfies the given formula.
##\begin{vmatrix}
1 & 2 & 3 & ... & n \\
2 & 3 & 4 & ... & 1 \\
3 & 4 & 5 & ... & 2 \\
{\vdots}& {\vdots}& {\vdots} & {\vdots} & {\vdots} \\
n & 1 & 2 & ... & n-1 \notag
\end{vmatrix} = (-1)^{\frac{n(n-1)}{2}}\dfrac{n^n+n^{n-1}}{2}##

Consider the 3x3 case.

##\begin{vmatrix}
1 & 2 & 3 \\
2 & 3 & 1 \\
3 & 1 & 2 \notag
\end{vmatrix}##

What would be the strategy to solve it? One can of course subtract the second row from the third, and the first from the second, which gives

##\begin{vmatrix}
1 & 2 & 3 \\
1 & 1 & -2 \\
1 & -1 & -1 \notag
\end{vmatrix}##

Then, after subtracting the third row from the first and the second, expanding along the first column yields a somewhat messy algorithm.
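As a sanity check, the claimed formula does hold for ##n=3##: a cofactor expansion gives

##\begin{vmatrix}
1 & 2 & 3 \\
2 & 3 & 1 \\
3 & 1 & 2 \notag
\end{vmatrix} = 1(6-1)-2(4-3)+3(2-9) = 5-2-21 = -18,##

which matches ##(-1)^{\frac{3\cdot 2}{2}}\dfrac{3^3+3^2}{2} = -\dfrac{36}{2} = -18##.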

Bonus question:

Show that

##\begin{vmatrix}
x & 0 & 0 & 0 & ... & 0& a_0 \\
-1 & x & 0 & 0 & ... & 0& a_1 \\
0 & -1 & x & 0 & ... &0 & a_2 \\
{\vdots}& {\vdots}& {\vdots} & {\vdots} & & {\vdots} & {\vdots} \\
0 & 0 & 0 &0& ... & -1 & a_{n-1}+x \notag
\end{vmatrix} = x^n+a_{n-1}x^{n-1}+...+a_0##
 
  • #2
As for your bonus question -- this is a Companion matrix, which is part of a small handful of canonical matrices... it's worth spending time to show that its characteristic polynomial is the monic polynomial 'encoded' in that right column. Given how sparse it is, you should have an inductive/recursive proof in mind for how to tackle this.
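For instance, here is a sketch of that recursion: write ##D_n## for the bonus determinant with entries ##a_0,\dots,a_{n-1}## and expand along the first row. The only nonzero entries there are ##x## and ##a_0##; the minor of ##x## is the same determinant one size smaller (with entries ##a_1,\dots,a_{n-1}##), and the minor of ##a_0## is upper triangular with ##-1## on the diagonal, so

##D_n = x\,D_{n-1} + a_0\,(-1)^{1+n}(-1)^{n-1} = x\,D_{n-1} + a_0##

and induction then gives ##D_n = x\big(x^{n-1}+a_{n-1}x^{n-2}+\dots+a_1\big)+a_0 = x^n+a_{n-1}x^{n-1}+\dots+a_0##.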

- - - - -
Your first problem looks like a bit of a trick problem-- I'll need to chew on it a bit, though off the top of my head, it looks like a Hankel matrix and you can convert it into a Toeplitz via multiplication by a suitable permutation matrix if you'd like. (There is enough special structure that the Toeplitz would be called a circulant matrix.) Circulants are highly structured and we know the spectrum in closed form, which in principle gives you the determinant as the product of the eigenvalues, but via a very gory route. The Toeplitz matrix comes up a lot and is worth knowing about... the Wikipedia page is worthwhile.

There should be a clean-ish way to get the determinant.

edit:
in terms of pattern recognition, one thing that jumps out is if we sum over the first (or any) column
##1+2+3+...+n = \frac{1}{2}n(n+1) = \binom{n+1}{2}##

and your formula is
##\begin{vmatrix}
1 & 2 & 3 & ... & n \\
2 & 3 & 4 & ... & 1 \\
3 & 4 & 5 & ... & 2 \\
{\vdots}& {\vdots}& {\vdots} & {\vdots} & {\vdots} \\
n & 1 & 2 & ... & n-1 \notag
\end{vmatrix} = (-1)^{\frac{n(n-1)}{2}}\dfrac{n^n+n^{n-1}}{2} = (-1)^{\binom{n}{2}}\cdot n^{n-2}\cdot\binom{n+1}{2}##
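(The two right-hand sides agree because ##n^{n-2}\binom{n+1}{2} = n^{n-2}\cdot\frac{n(n+1)}{2} = \frac{n^{n-1}(n+1)}{2} = \frac{n^n+n^{n-1}}{2}##.)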

 
  • #3
I did not try it but maybe induction can work.
 
  • #4
Yes, induction should do, but it is a nasty solution. There is certainly an elegant way. All matrix entries are ##a_{ij}=\sigma^{i+j}(1)## with ##\sigma## the shift operator mod n. It also has nice LU decomposition, triangularization and diagonalization solutions, Cholesky less so; see the matrix calculator here:
https://www.physicsforums.com/threa...h-physics-earth-and-other-curiosities.970262/
But the solution might be in the realm of ##\mathbb{Z}_n## as the diagonals are all permutations and the entries, too. The calculations in the norm section of Norm of an Extension Field look similar.
 
  • #5
ok, here's a sketch of a way that has appeal to me, but the approach is rather peculiar and heavily based on pattern recognition. There's a lot of background material I draw on here, but it's how I would derive the identity from scratch. The fact that circulant matrices and Fourier transforms are ubiquitous makes this forgivable, I think. (fingers crossed)

If you already know the formula (as stated in the problem) I guess induction is more direct but it seems unpleasant here.
so you want

##\det\big(\mathbf H\big) =
\begin{vmatrix}
1 & 2 & 3 & ... & n \\
2 & 3 & 4 & ... & 1 \\
3 & 4 & 5 & ... & 2 \\
{\vdots}& {\vdots}& {\vdots} & {\vdots} & {\vdots} \\
n & 1 & 2 & ... & n-1 \notag
\end{vmatrix} ##

step 0.) Do the bonus question first and get comfortable with the companion matrix. In particular get comfortable with using it for the polynomial ##x^n -1##

step 1.) multiply your Hankel matrix ##\mathbf H## by the reflection matrix ##\mathbf J## to 'convert' your Hankel matrix into a Toeplitz that is in fact a circulant matrix. i.e. ##\mathbf H \mathbf J = \mathbf S## or
##\mathbf H = \mathbf S\mathbf J^T = \mathbf S\mathbf J## since ##\mathbf J## is a permutation matrix and symmetric (or involutive)

See "Relation between Hankel and Toeplitz matrices" on wikipedia page https://en.wikipedia.org/wiki/Hankel_matrix

Note determinants multiply and ##\det\big(\mathbf J\big)## is easy to calculate
(a) as a hint it's involutive so you could just count the number of -1 eigenvalues which is implied by taking the trace... and you know that ##\text{trace}\big(\mathbf J\big) = 0## when n is even and ##\text{trace}\big(\mathbf J\big) = 1## when n is odd...
(b) just view ##\mathbf J## as a composition of disjoint pairwise swaps (transpositions), each of which has a permutation matrix with determinant -1 associated with it, so just count the number of these swaps...

step 2.) read this post from a couple years ago
https://www.physicsforums.com/threads/eigenvalues-of-circulant-matrices.927173/

step 3.) consider ##\sum_{k=1}^n k x^{k-1}##
for ##x = 1## this is ##\binom{n+1}{2}## as stated in my earlier post. For other ##x##, and in particular nth roots of unity other than 1, you can get this by differentiating the finite geometric series, or asking wolfram
https://www.wolframalpha.com/input/?i=sum+from+k+=+1+to+n+k+*+x^(k-1)

when you specialize to the remaining (n-1) distinct nth roots of unity (other than 1) this simplifies to
##\sum_{k=1}^n k x^{k-1} = \frac{-n}{1-x}= \frac{-n}{1-\omega^r}## for ##r \in\{1,2,...,n-1\}##
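In a bit more detail: differentiating the finite geometric series gives

##\sum_{k=1}^n k x^{k-1} = \frac{d}{dx}\,\frac{x^{n+1}-1}{x-1} = \frac{1-(n+1)x^n+nx^{n+1}}{(1-x)^2}##

and setting ##x^n=1## (with ##x=\omega^r\neq 1##) collapses the numerator to ##1-(n+1)+nx = -n(1-x)##, which gives the ##\frac{-n}{1-x}## above.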

step 4.) The determinant of your circulant matrix is the product of eigenvalues, so
##\det\big(\mathbf S\big) = \binom{n+1}{2}\prod_{r=1}^{n-1}\frac{-n}{1-\omega^r} = (-1)^{n-1} n^{n-1}\binom{n+1}{2}\prod_{r=1}^{n-1}\frac{1}{1-\omega^r}##

step 5.)
*edited* to get the result via easier means of factoring instead of calculus

our polynomial
##p(x)= x^n - 1= q(x) \cdot (x-1) = (x-\omega^1)(x-\omega^2)...(x-\omega^{n-1}) \cdot (x-1)##

but synthetic division, or direct calculation tells us
##q(x)\cdot (x-1) = \big(x^{n-1} + x^{n-2} +... + x + 1\big)\cdot \big(x-1\big) = x^n - 1##
so
##q(x) =\big(x^{n-1} + x^{n-2} +... + x + 1\big) = \big(x-\omega^1\big)\big(x-\omega^2\big)...\big(x-\omega^{n-1}\big)##

and in particular
##q(1) = n = \big(1-\omega^1\big)\big(1-\omega^2\big)...\big(1-\omega^{n-1}\big) = \prod_{r=1}^{n-1} \big(1-\omega^r\big) ##

or
##\prod_{r=1}^{n-1}\frac{1}{1-\omega^r} = \frac{1}{n}##

so plugging this back in
##\det\big(\mathbf S\big) = (-1)^{n-1} n^{n-1}\binom{n+1}{2}\prod_{r=1}^{n-1}\frac{1}{1-\omega^r}= (-1)^{n-1} n^{n-2}\binom{n+1}{2}##

step 6.) fine tune the role of the sign function and the determinant of ##\mathbf J## (which is going to be +/- 1)

edit:
there's a very nasty but very small bug in the above related to zero vs 1 indexing. If you look at the implied circulant matrix ##\mathbf S## it has ##n## as the component on its diagonal, not ##1##.

i.e. with the list
##[1,2,3,...,n]##
the writeup above contemplates (see link from 2 years ago)
##\mathbf S = \sum_{k=0}^{n-1} s_k \mathbf P^k = 1 \mathbf I + 2 \mathbf P^1 + ... + n \mathbf P^{n-1}##

where ##s_0## would be the zeroth component in that list (i.e. value of 1), then ##s_1## the next component (value of 2) and so on,

but what actually occurs in the problem is
##\mathbf S = \sum_{k=1}^{n} s_k \mathbf P^k = 1 \mathbf P + 2 \mathbf P^2 + ... + n \mathbf P^{n}= \mathbf P\big(1 \mathbf I + 2 \mathbf P^1 + ... + n \mathbf P^{n-1}\big)##
i.e. grabbing components, where we start counting at one, so ##s_1 = 1##, ##s_2 = 2## and so on, observing that ##\mathbf P^n = \mathbf I##

This kind of thing is to be expected when I toggle 0 vs 1 indexing I suppose.

The good news is we are still dealing with nth roots of unity, ##\omega##, with just one extra multiplication by them for each respective eigenvalue. This obviously has no impact on the eigenvalue associated with the root ##1##. For the other nth roots, the problem becomes evaluating
##\prod_{r=1}^{n-1}\frac{\omega^r}{1-\omega^r}= \big(\prod_{r=1}^{n-1}\frac{1}{1-\omega^r}\big)\big(\prod_{r=1}^{n-1}\omega^r\big) ##

so we can re-use our earlier calculation of ##\big(\prod_{r=1}^{n-1}\frac{1}{1-\omega^r}\big)##,
and now we need to evaluate ##\big(\prod_{r=1}^{n-1}\omega^r\big)##

for odd n, ##\big(\prod_{r=1}^{n-1}\omega^r\big) =1 ## since ##-1## is not an nth root of unity when ##n## is odd (##(-1)^n = -1 \neq 1##), so every term here is non-real and the terms come in conjugate pairs, each of which multiplies to 1; thus the total product is 1.

for even n ##\big(\prod_{r=1}^{n-1}\omega^r\big) = -1##, by the same reasoning above, except the purely real -1 term shows up in there once and ##-1 \cdot 1 = - 1##.
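Equivalently, ##\prod_{r=1}^{n-1}\omega^r = \omega^{\sum_{r=1}^{n-1}r} = \omega^{\frac{n(n-1)}{2}}##, and the exponent is a multiple of ##n## when ##n## is odd (giving ##1##) but congruent to ##\frac{n}{2}## mod ##n## when ##n## is even (giving ##-1##).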

It's a small sign error that got buried in some gory details. Apologies.

This means for odd n
##\det\big(\mathbf S\big) = n^{n-2}\binom{n+1}{2}##
as before (the extra factor is ##+1##), and for even n it now should be
##\det\big(\mathbf S\big) = n^{n-2}\binom{n+1}{2}##
as well, since the extra factor of ##-1## cancels the ##(-1)^{n-1}## from step 4.

circling back, and using earlier hints
##\det\big(\mathbf J\big) = (-1)^{\frac{n}{2}}## for even n
##\det\big(\mathbf J\big) = (-1)^{\frac{n-1}{2}}## for odd n

but this is the same as the official answer... i.e. ##(-1)^\frac{n(n-1)}{2}## is just a symmetrized version of this.

One way to check is, for even n:
##\frac{n}{2}\%2 = \frac{n}{2}(n-1)\%2##
because n-1 is odd and equal to 1, mod 2

and for odd n
##\frac{n-1}{2}\%2= \frac{(n-1)}{2}n\%2##
because ##n## is odd and equal to 1, mod 2

so with a rather ugly bandaid at the end, this completes the derivation
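For anyone who wants a numerical sanity check of this route, here is a minimal sketch in Python (numpy assumed; ##\mathbf H##, ##\mathbf J##, ##\mathbf S## built exactly as above, 0-indexed):

import numpy as np

for n in range(2, 9):
    # H[i, j] = ((i + j) mod n) + 1 reproduces the matrix in post #1
    H = np.array([[(i + j) % n + 1 for j in range(n)] for i in range(n)])
    J = np.fliplr(np.eye(n))      # reflection matrix
    S = H @ J                     # circulant matrix
    # eigenvalues of S are sum_{k=1}^{n} k*w^k over the nth roots of unity w
    roots = np.exp(2j * np.pi * np.arange(n) / n)
    eigs = [sum(k * w**k for k in range(1, n + 1)) for w in roots]
    prod_eigs = np.prod(eigs).real
    assert np.isclose(prod_eigs, n**(n - 2) * n * (n + 1) / 2)
    assert np.isclose(np.linalg.det(S), prod_eigs)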
 
  • #6
I don't have a solution either, but I did notice that one eigenvalue is ##\frac {(n+1)n}2##. So the product of the other eigenvalues must be ##\pm n^{n-2}##.
 
  • #7
Thanks for the feedback. After summing over the first or any column, how would one proceed to derive the terms ##(-1)^{\binom{n}{2}}## and ##n^{n-2}## in the formula below?

edit:
in terms of pattern recognition, one thing that jumps out is if we sum over the first (or any) column
##1+2+3+...+n = \frac{1}{2}n(n+1) = \binom{n+1}{2}##

and your formula is
##\begin{vmatrix}
1 & 2 & 3 & ... & n \\
2 & 3 & 4 & ... & 1 \\
3 & 4 & 5 & ... & 2 \\
{\vdots}& {\vdots}& {\vdots} & {\vdots} & {\vdots} \\
n & 1 & 2 & ... & n-1 \notag
\end{vmatrix} = (-1)^{\frac{n(n-1)}{2}}\dfrac{n^n+n^{n-1}}{2} = (-1)^{\binom{n}{2}}\cdot n^{n-2}\cdot\binom{n+1}{2}##
 
  • #8
schniefen said:
Thanks for the feedback. After summing over the first or any column, how would one proceed to derive the terms ##(-1)^{\binom{n}{2}}## and ##n^{n-2}## in the formula below?

so here's the thing, you have

##\mathbf H = \begin{bmatrix}
1 & 2 & 3 & ... & n \\
2 & 3 & 4 & ... & 1 \\
3 & 4 & 5 & ... & 2 \\
{\vdots}& {\vdots}& {\vdots} & {\vdots} & {\vdots} \\
n & 1 & 2 & ... & n-1 \end{bmatrix}##
where ##\mathbf H## is a Hankel matrix

consider the reflection matrix
##\mathbf J = \begin{bmatrix}
0 & 0 & 0 & ... & 1 \\
0 & 0 & 0 & ... & 0 \\
{\vdots}& {\vdots}& {\vdots} & {\vdots} & {\vdots} \\
0 & 1 & 0 & ... & 0 \\
1 & 0 & 0& ... & 0 \end{bmatrix}##
i.e. this is a permutation matrix with ones on the anti-diagonal

if you do a search of PF forums, you'll see I'm the only person who's ever talked about Hankel matrices. On the other hand, you'll see a lot of different people have mentioned Toeplitz matrices and Circulant matrices (a very special kind of Toeplitz).

So it seems logical to talk about something less esoteric and thus try to map from Hankel -> Toeplitz. You should confirm to yourself that it's always the case that

## \mathbf H = \mathbf {SJ} ##
where ##\mathbf S## is Toeplitz.

Since determinants multiply
##\det\big(\mathbf H\big) = \det\big(\mathbf {SJ}\big) = \det\big(\mathbf S\big)\det\big(\mathbf J\big) = \det\big(\mathbf S\big) \cdot (-1)^\binom{n}{2} ##

and you need to convince yourself that in fact
##\det\big(\mathbf J\big) = (-1)^\binom{n}{2}##
so this is a reasonable way of deriving the ## (-1)^\binom{n}{2}## term.
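A quick numeric sketch of both of these facts (numpy assumed; the 0-indexed construction of ##\mathbf H## is mine):

import numpy as np
from math import comb

for n in range(2, 9):
    H = np.array([[(i + j) % n + 1 for j in range(n)] for i in range(n)])
    J = np.fliplr(np.eye(n, dtype=int))   # ones on the anti-diagonal
    S = H @ J                             # Toeplitz, in fact circulant
    assert np.array_equal(H, S @ J)       # H = SJ, since J^2 = I
    assert np.isclose(np.linalg.det(J), (-1) ** comb(n, 2))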

The problem then becomes calculating
##\det\big(\mathbf S\big)##

one approach is via eigenvalues, and yes ##\lambda = \binom{n+1}{2}## with associated eigenvector ##\mathbf 1## is one of them, e.g. very simplistically
##\binom{n+1}{2}\mathbf 1 = \mathbf H\mathbf 1 = \mathbf {SJ}\mathbf 1 = \mathbf S\big(\mathbf J\mathbf 1\big) = \mathbf S\mathbf 1##
(where we make use of the fact that the ones vector is an eigenvector for permutation matrices)

This is ultimately what I did in post 5. Since I knew the background items, I could eyeball most of it, though I still got bogged down in some very unpleasant indexing issues that needed surgery at the end.

There are likely a lot of other approaches though not much else comes to mind.
 
  • #9
Since it's a circulant matrix, should there not be a way of deriving the formula by diagonalising the determinant through row/column reduction? If one sums over the first column, the common entry in the first column is ##\frac{n(n+1)}{2}##. Then, if one subtracts the ##i##'th row from the ##i+1##'th row, one gets

##\begin{vmatrix} \frac{n(n+1)}{2} & 2 & 3 & ... & n \\ 0 & 1 & 1 & ... & 1-n \\ 0 & 1 & 1 & ... & 1 \\ {\vdots}& {\vdots}& {\vdots} & {\vdots} & {\vdots} \\ 0 & 1-n & 1 & ... & 1 \notag \end{vmatrix}##

Repeating this procedure with the columns, excluding the first one, yields

##\begin{vmatrix} \frac{n(n+1)}{2} & 2 & 1 & ... & 1 \\ 0 & 1 & 0 & ... & -n \\ 0 & 1 & 0 & ... & 0 \\ {\vdots}& {\vdots}& {\vdots} & {\vdots} & {\vdots} \\ 0 & 1-n & n & ... & 0 \notag \end{vmatrix}##

One could then expand along the first column, but where would one go from there?

##\frac{n(n+1)}{2} \begin{vmatrix} 1 & 0 & 0 & ... & -n \\ 1 & 0 & 0 & ... & 0 \\ {\vdots}& {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ 1-n & n & 0 &... & 0 \notag \end{vmatrix}##
 
  • #10
schniefen said:
Since it's a circulant matrix, should there not be a way of deriving the formula by diagonalising the determinant through row/column reduction? If one sums over the first column, the common entry in the first column is ##\frac{n(n+1)}{2}##. Then, if one subtracts the ##i##'th row from the ##i+1##'th row, one gets

##\begin{vmatrix} \frac{n(n+1)}{2} & 2 & 3 & ... & n \\ 0 & 1 & 1 & ... & 1-n \\ 0 & 1 & 1 & ... & 1 \\ {\vdots}& {\vdots}& {\vdots} & {\vdots} & {\vdots} \\ 0 & 1-n & 1 & ... & 1 \notag \end{vmatrix}##
I didn't follow how you got this, but one thing that jumps out at me in terms of pattern recognition is

##\begin{vmatrix} \frac{n(n+1)}{2} & 2 & 3 & ... & n \\ 0 & 1 & 1 & ... & 1-n \\ 0 & 1 & 1 & ... & 1 \\ {\vdots}& {\vdots}& {\vdots} & {\vdots} & {\vdots} \\ 0 & 1-n & 1 & ... & 1 \notag \end{vmatrix} = \begin{vmatrix} \frac{n(n+1)}{2} & * \\ \mathbf 0 & \mathbf B \notag \end{vmatrix} = \frac{n(n+1)}{2} \cdot \det\big(\mathbf B\big)##

if I read it correctly, ##\mathbf B \in \mathbb R^{(n-1) \times (n-1)}## is the ones matrix with ##-n## added along the antidiagonal
i.e. making use of the reflection matrix again, this time in n-1 dimensional space

##\mathbf B = -n\mathbf J_{n-1} + \mathbf 1\mathbf 1^T##
we can use the determinant formula for a rank-one update of an invertible matrix (the matrix determinant lemma) to get this, i.e. in general

##\det\big(\mathbf A +\mathbf c \mathbf d^T\big) = \det\big(\mathbf A \big) \big(1+\mathbf d^T \mathbf A^{-1} \mathbf c\big)##
see e.g. here: https://math.stackexchange.com/ques...rank-one-perturbations-of-invertible-matrices
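As a quick sanity check of that rank one update formula on a random example (a minimal sketch, numpy assumed, not specific to the matrices here):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # comfortably invertible
c = rng.standard_normal(5)
d = rng.standard_normal(5)
lhs = np.linalg.det(A + np.outer(c, d))
rhs = np.linalg.det(A) * (1 + d @ np.linalg.solve(A, c))   # d^T A^{-1} c
assert np.isclose(lhs, rhs)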

so (making use of involution in the 5th line and the ones vector as an eigenvector in 6th line)
##\det\Big(\mathbf B\Big)##
## = \det\Big(-n\mathbf J_{n-1} + \mathbf 1\mathbf 1^T\Big) ##
## = \det\Big(-n\mathbf J_{n-1}\Big)\Big(1 + \mathbf 1^T\big(-n\mathbf J_{n-1}\big)^{-1}\mathbf 1\Big)##
## = \det\Big(-n\mathbf J_{n-1}\Big)\Big(1 + \frac{-1}{n}\mathbf 1^T\mathbf J_{n-1}^{-1}\mathbf 1\Big)##
## = \det\Big(-n\mathbf J_{n-1}\Big)\Big(1 + \frac{-1}{n}\mathbf 1^T\big(\mathbf J_{n-1}\mathbf 1\big)\Big)##
## = \det\Big(-n\mathbf J_{n-1}\Big)\Big(1 + \frac{-1}{n}\mathbf 1^T\mathbf 1\Big)##
## = \det\Big(-n\mathbf J_{n-1}\Big)\Big(\frac{n}{n} + \frac{-(n-1)}{n}\Big)##
## = \det\Big(-n\mathbf I_{n-1}\mathbf J_{n-1}\Big)\Big(\frac{1}{n}\Big)##
## = \det\Big(-n\mathbf I_{n-1}\Big)\det\Big(\mathbf J_{n-1}\Big)\Big(\frac{1}{n}\Big)##
## = \frac{1}{n} (-n)^{n-1} \cdot\det\Big(\mathbf J_{n-1}\Big)##
## = \frac{1}{n}n^{n-1} \cdot (-1)^{n-1} \cdot\det\Big(\mathbf J_{n-1}\Big)##
## = n^{n-2} \cdot (-1)^{n-1}\det\Big(\mathbf J_{n-1}\Big) ##
## = n^{n-2} \cdot (-1)^{n-1}(-1)^\frac{(n-1)(n-2)}{2}##

which is right up to a sign, and exhausts my interest.

edit:
a sneakier approach
is to recognize that the ones vector is both a right eigenvector (shown in an earlier post) and a left eigenvector of the circulant matrix ##\mathbf S##, because
##\binom{n+1}{2}\mathbf 1^T = \binom{n+1}{2}\mathbf 1^T\mathbf J = \mathbf 1^T\mathbf H\mathbf J = \mathbf 1^T\mathbf H\mathbf J^T = \mathbf 1^T\mathbf H\mathbf J^{-1} = \mathbf 1^T\mathbf {S}##

then the scaled ones matrix ##\alpha \mathbf {11}^T## is something we can exploit, with ##\alpha## a slack parameter-- the effect is homogeneous, so when we subtract said matrix, it affects each cell by the same amount, preserving our pattern-- i.e. circulant structure-- and this also plays very nicely with row operations, which we'll get to later.

so a perhaps easier approach is to work with
##\mathbf Z = \mathbf S - \alpha\mathbf {11}^T##
and in particular, setting ##\alpha:=1##
##\mathbf Z = \mathbf S - \mathbf {11}^T##
and
##\det\big(\mathbf Z\big) \lambda_n = \det\big(\mathbf S\big)(\lambda_n-n)##
or
##\det\big(\mathbf Z\big) \frac{\lambda_n}{\lambda_n-n} = \det\big(\mathbf S\big)##
with
##\lambda_n = \binom{n+1}{2}##
this can be interpreted directly since subtracting the ones matrix deflates the dominant eigenvalue by ##\text{trace}\big(\alpha\mathbf{11}^T\big) = \alpha\cdot \text{trace}\big(\mathbf{11}^T\big) = \alpha \cdot n = 1 \cdot n = n## (and doesn't affect the rest of the spectrum), or it can be justified indirectly via the above rank one update formula for determinants.

if we now do row operations and subtract row ##i## from row ##i + 1## in ##\mathbf Z##, for ##i = n-1, n-2, ...,1##

then, e.g. with ##n=6##

##\left(\begin{array}{rrrrrr}
5 & 4 & 3 & 2 & 1 & 0 \\
0 & 5 & 4 & 3 & 2 & 1 \\
1 & 0 & 5 & 4 & 3 & 2 \\
2 & 1 & 0 & 5 & 4 & 3 \\
3 & 2 & 1 & 0 & 5 & 4 \\
4 & 3 & 2 & 1 & 0 & 5
\end{array}\right)
\longrightarrow \left(\begin{array}{rrrrrr}
5 & 4 & 3 & 2 & 1 & 0 \\
-5 & 1 & 1 & 1 & 1 & 1 \\
1 & -5 & 1 & 1 & 1 & 1 \\
1 & 1 & -5 & 1 & 1 & 1 \\
1 & 1 & 1 & -5 & 1 & 1 \\
1 & 1 & 1 & 1 & -5 & 1
\end{array}\right) = -n\mathbf C +\mathbf {11}^T ##

where ##\mathbf C## is a transposed companion matrix (up to graph isomorphism -- i.e. consider ##\mathbf {JCJ}^{-1} = \mathbf {JCJ}##), or if the reader prefers, it is the inverse of (the transpose of) a different companion matrix. This is nice because Companion matrices have a determinant located in a single component, and have a well known inverse. It is also yet another tie-in to OP's bonus question.

- - - - -
aside: if we re-visit our slack parameter ##\alpha##, this row reduced form in general, still with n= 6, reads
##\left(\begin{array}{rrrrrr}
6 -\alpha & 5 -\alpha & 4-\alpha & 3- \alpha & 2-\alpha & 1-\alpha \\
-5 & 1 & 1 & 1 & 1 & 1 \\
1 & -5 & 1 & 1 & 1 & 1 \\
1 & 1 & -5 & 1 & 1 & 1 \\
1 & 1 & 1 & -5 & 1 & 1 \\
1 & 1 & 1 & 1 & -5 & 1
\end{array}\right)##

the problem with selecting ##\alpha := 0## is that the implied companion matrix has a zero in its top right corner, and said matrix is singular if and only if this is the case. So for direct computation, any ##\alpha \neq 0## works; the rank one update determinant formula requires that the (companion) matrix we are updating is non-singular. However, continuity of the determinant implies the result and we could e.g. use it with small ##\alpha##, looking at ##\alpha \to 0##. In any case selecting ##\alpha:=1## seemed simplest so that's what I did.
- - - - -

In this case
##\mathbf C =
\left(\begin{array}{rrrrrr}
-\frac{2}{3} & -\frac{1}{2} & -\frac{1}{3} & -\frac{1}{6} & 0 & \frac{1}{6} \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0
\end{array}\right)
##
where the determinant is lurking in the top right corner; paying attention to the cofactor sign, it gets rescaled by -1 if the dimension is even, so

##\det\big(\mathbf C\big) = (-1)^{n+1}\cdot \frac{1}{n}##

and
##\mathbf C^{-1} =
\left(\begin{array}{rrrrrr}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
6 & 4 & 3 & 2 & 1 & 0
\end{array}\right)##

so when applying the rank one update formula to calculate ##\det\big(\mathbf Z\big) = \det\big( -n\mathbf C +\mathbf {11}^T\big)## we can see the familiar progression
##\mathbf 1^T\mathbf C^{-1}\mathbf 1 = n + (n-1) + ... + 2+1 = \binom{n+1}{2}##

plugging this in

##\det\big(\mathbf Z\big) ##
##= \det\big( -n\mathbf C +\mathbf {11}^T\big) ##
##= \det\big(-n\mathbf C\big)\cdot \Big\{\big(1 +\frac{-1}{n}\mathbf 1^T \mathbf C^{-1}\mathbf 1\big)\Big\}##
##= n^n \cdot (-1)^n\cdot \det\big(\mathbf C\big)\cdot \Big\{\big(\frac{n}{n} +\frac{-1}{n}\binom{n+1}{2}\big)\Big\}##
##= n^n \cdot \big( (-1)^n\cdot (-1)^{n+1}\big) \frac{1}{n}\cdot \Big\{ n^{-1}(-1)\cdot \big(-n +\binom{n+1}{2}\big)\Big\}##
##= n^n \cdot \big( -1\big) \cdot n^{-2}\cdot (-1) \big(\binom{n+1}{2}-n\big)##
##= n^{n-2}\big(\binom{n+1}{2}-n\big)##
as required, i.e.

##\det\big(\mathbf Z\big) \frac{\lambda_n}{\lambda_n-n} ##
##= \det\big(\mathbf Z\big) \frac{\binom{n+1}{2}}{\binom{n+1}{2}-n} ##
##= n^{n-2}\big(\binom{n+1}{2}-n\big)\cdot \frac{\binom{n+1}{2}}{\binom{n+1}{2}-n} ##
##= n^{n-2} \cdot \binom{n+1}{2}##
## = \det\big(\mathbf S\big)##
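For completeness, here is a short numeric check of that deflation identity (numpy assumed; ##\mathbf S## is built with first row ##[n, n-1, \dots, 1]## as above):

import numpy as np

for n in range(2, 9):
    S = np.array([[((i - j - 1) % n) + 1 for j in range(n)] for i in range(n)])
    Z = S - np.ones((n, n))      # subtract the ones matrix
    lam = n * (n + 1) // 2       # dominant eigenvalue of S
    assert np.isclose(np.linalg.det(Z) * lam, np.linalg.det(S) * (lam - n))
    assert np.isclose(np.linalg.det(S), n**(n - 2) * lam)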
 
  • #11
ok a 3rd and final derivation, this one the simplest, just using column and row operations.

idea: take circulant matrix ##\mathbf S## and

##\mathbf S \longrightarrow_\text{column operation} \longrightarrow_\text{row operation 1} \longrightarrow_\text{row operation 2} \text{= very easy sparse matrix}##

this holds for arbitrary ##n## but is shown for ##n=6## for simplicity of typing.

##\mathbf S = \left(\begin{array}{rrrrrr}
6 & 5 & 4 & 3 & 2 & 1 \\
1 & 6 & 5 & 4 & 3 & 2 \\
2 & 1 & 6 & 5 & 4 & 3 \\
3 & 2 & 1 & 6 & 5 & 4 \\
4 & 3 & 2 & 1 & 6 & 5 \\
5 & 4 & 3 & 2 & 1 & 6
\end{array}\right)##

##\mathbf S \longrightarrow_\text{column operation}= \mathbf S^{'} =
\left(\begin{array}{rrrrrr}
21 & 5 & 4 & 3 & 2 & 1 \\
21 & 6 & 5 & 4 & 3 & 2 \\
21 & 1 & 6 & 5 & 4 & 3 \\
21 & 2 & 1 & 6 & 5 & 4 \\
21 & 3 & 2 & 1 & 6 & 5 \\
21 & 4 & 3 & 2 & 1 & 6
\end{array}\right)
##
where the column operation is, as suggested by OP, to add each column ##j## to the first column, for ##j = 2, 3, ..., n##. Notice that ##\binom{n+1}{2} = 21## when n = 6.

next up

##\mathbf S^{'} \longrightarrow_\text{row operation 1} = \mathbf S^{''} =
\left(\begin{array}{rrrrrr}
21 & 5 & 4 & 3 & 2 & 1 \\
0 & 1 & 1 & 1 & 1 & 1 \\
0 & -5 & 1 & 1 & 1 & 1 \\
0 & 1 & -5 & 1 & 1 & 1 \\
0 & 1 & 1 & -5 & 1 & 1 \\
0 & 1 & 1 & 1 & -5 & 1
\end{array}\right)##

where the row operation is to subtract row ##i## from row ##i+1##, for ##i = n-1, n-2,..., 1##
(i.e. working backwards). This is 'kind of' what OP suggested in post 9, though a bit different in the result.

finally
##\mathbf S^{''}\longrightarrow_\text{row operation 2} =\mathbf S^{'''}=
\left(\begin{array}{rrrrrr}
21 & 5 & 4 & 3 & 2 & 1 \\
0 & 1 & 1 & 1 & 1 & 1 \\
0 & -6 & 0 & 0 & 0 & 0 \\
0 & 0 & -6 & 0 & 0 & 0 \\
0 & 0 & 0 & -6 & 0 & 0 \\
0 & 0 & 0 & 0 & -6 & 0
\end{array}\right)##
where the row operation is to subtract row ##2## from row ##i## for ##i=3,4,...,n##

This gives
##\det\big(\mathbf S\big)##
## = \det\big(\mathbf S^{'''}\big) ##
##= \binom{n+1}{2} \cdot\det\Big(\left(\begin{array}{rrrrrr}
1 & 1 & 1 & 1 & 1 \\
-6 & 0 & 0 & 0 & 0 \\
0 & -6 & 0 & 0 & 0 \\
0 & 0 & -6 & 0 & 0 \\
0 & 0 & 0 & -6 & 0
\end{array}\right)\Big)##
## = \binom{n+1}{2}\cdot \big( (-n)^{n-2}\cdot (-1)^{(n-1) + 1} \big)##
##= \binom{n+1}{2}\cdot n^{n-2} \cdot (-1)^{2n-2}##
##= \binom{n+1}{2}\cdot n^{n-2}##

as desired
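If the reader wants to replay those three operations for general ##n##, here is a minimal sketch (numpy assumed; reduce_circulant is just a hypothetical helper name, and the operations are exactly the ones described above, none of which change the determinant):

import numpy as np

def reduce_circulant(n):
    # circulant S with first row [n, n-1, ..., 1]
    S = np.array([[((i - j - 1) % n) + 1 for j in range(n)] for i in range(n)], dtype=float)
    S[:, 0] = S.sum(axis=1)           # column op: add columns 2..n to column 1
    for i in range(n - 1, 0, -1):     # row op 1: subtract row i from row i+1, working backwards
        S[i] -= S[i - 1]
    for i in range(2, n):             # row op 2: subtract row 2 from rows 3..n
        S[i] -= S[1]
    return S

for n in range(2, 9):
    assert np.isclose(np.linalg.det(reduce_circulant(n)), n**(n - 2) * n * (n + 1) / 2)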
 
  • #12
Nice. My calculations were incorrect in #9. Would your last derivation hold for the matrix in #1, where the common entry is along the anti-diagonal? Would it reveal the missing ##(-1)^{n(n-1)/2}## term?
 
  • #13
schniefen said:
Nice. My calculations were incorrect in #9. Would your last derivation hold for the matrix in #1, where the common entry is along the anti-diagonal? Would it reveal the missing ##(-1)^{n(n-1)/2}## term?
I'm not really sure what this means-- I guess it would, but it seems tedious and unnecessary to verify.
I've strongly suggested that you should map from Hankel ##\to## Toeplitz for this problem and to consider

##\det\big(\mathbf H\big) = \det\big(\mathbf {SJ}\big) = \det\big(\mathbf S\big)\det\big(\mathbf J\big) = \det\big(\mathbf S\big) \cdot (-1)^\binom{n}{2}##

and as far as I'm concerned -- the sooner we can isolate the alternating +/- 1, the better; the above decomposition does exactly that.

I gave a bunch of hints for calculating
##\det\big(\mathbf J\big)## in post 5

StoneTemplePython said:
...
Note determinants multiply and ##\det\big(\mathbf J\big)## is easy to calculate
(a) as a hint it's involutive so you could just count the number of -1 eigenvalues which is implied by taking the trace... and you know that ##\text{trace}\big(\mathbf J\big) = 0## when n is even and ##\text{trace}\big(\mathbf J\big) = 1## when n is odd...
(b) just view ##\mathbf J## as a composition of disjoint pairwise swaps (transpositions), each of which has a permutation matrix with determinant -1 associated with it, so just count the number of these swaps...
...

circling back, and using earlier hints
##\det\big(\mathbf J\big) = (-1)^{\frac{n}{2}}## for even n
##\det\big(\mathbf J\big) = (-1)^{\frac{n-1}{2}}## for odd n

but this is the same as the official answer... i.e. ##(-1)^\frac{n(n-1)}{2}## is just a symmetrized version of this.

One way to check is, for even n:
##\frac{n}{2}\%2 = \frac{n}{2}(n-1)\%2##
because n-1 is odd and equal to 1, mod 2

and for odd n
##\frac{n-1}{2}\%2= \frac{(n-1)}{2}n\%2##
because ##n## is odd and equal to 1, mod 2

I've already tackled this problem 3 different ways. You are welcome to try to apply that 3rd derivation technique to the original Hankel matrix.

edit:
if I naively apply my script and don't multiply H by J first then I have
##\mathbf H ##
##=
\left(\begin{array}{rrrrrr}
1 & 2 & 3 & 4 & 5 & 6 \\
2 & 3 & 4 & 5 & 6 & 1 \\
3 & 4 & 5 & 6 & 1 & 2 \\
4 & 5 & 6 & 1 & 2 & 3 \\
5 & 6 & 1 & 2 & 3 & 4 \\
6 & 1 & 2 & 3 & 4 & 5
\end{array}\right)
##
##\longrightarrow
\left(\begin{array}{rrrrrr}
21 & 2 & 3 & 4 & 5 & 6 \\
21 & 3 & 4 & 5 & 6 & 1 \\
21 & 4 & 5 & 6 & 1 & 2 \\
21 & 5 & 6 & 1 & 2 & 3 \\
21 & 6 & 1 & 2 & 3 & 4 \\
21 & 1 & 2 & 3 & 4 & 5
\end{array}\right)##
##\longrightarrow
\left(\begin{array}{rrrrrr}
21 & 2 & 3 & 4 & 5 & 6 \\
0 & 1 & 1 & 1 & 1 & -5 \\
0 & 1 & 1 & 1 & -5 & 1 \\
0 & 1 & 1 & -5 & 1 & 1 \\
0 & 1 & -5 & 1 & 1 & 1 \\
0 & -5 & 1 & 1 & 1 & 1
\end{array}\right)
##
## \longrightarrow
\left(\begin{array}{rrrrrr}
21 & 2 & 3 & 4 & 5 & 6 \\
0 & 1 & 1 & 1 & 1 & -5 \\
0 & 0 & 0 & 0 & -6 & 6 \\
0 & 0 & 0 & -6 & 0 & 6 \\
0 & 0 & -6 & 0 & 0 & 6 \\
0 & -6 & 0 & 0 & 0 & 6
\end{array}\right)
##
which should be enough to answer OP's question. That final matrix is a lot less pleasant, but performing one more row operation, i.e. adding ##\frac{1}{n}\cdot \text{row i}## to row 2, for i = 3, 4,..., n, gives

## \longrightarrow \left(\begin{array}{rrrrrr}
21 & 2 & 3 & 4 & 5 & 6 \\
0 & 0 & 0 & 0 & 0 & -1 \\
0 & 0 & 0 & 0 & -6 & 6 \\
0 & 0 & 0 & -6 & 0 & 6 \\
0 & 0 & -6 & 0 & 0 & 6 \\
0 & -6 & 0 & 0 & 0 & 6
\end{array}\right)##
which is reasonable. I think it's harder to uncover the role of the alternating sign in this setup, though
##
\left(\begin{array}{rrrrrr}
21 & 2 & 3 & 4 & 5 & 6 \\
0 & 0 & 0 & 0 & 0 & -1 \\
0 & 0 & 0 & 0 & -6 & 6 \\
0 & 0 & 0 & -6 & 0 & 6 \\
0 & 0 & -6 & 0 & 0 & 6 \\
0 & -6 & 0 & 0 & 0 & 6
\end{array}\right)
=
\left(\begin{array}{rrrrrr}
6 & 5 & 4 & 3 & 2 & 21 \\
-1 & 0 & 0 & 0 & 0 & 0 \\
6 & -6 & 0 & 0 & 0 & 0 \\
6 & 0 & -6 & 0 & 0 & 0 \\
6 & 0 & 0 & -6 & 0 & 0 \\
6 & 0 & 0 & 0 & -6 & 0
\end{array}\right)\cdot \mathbf J##
is easy to work with, having determinant
##\big(\binom{n+1}{2}\cdot (-1)^{n+1}\big)\cdot (-1) \cdot (-n)^{n-2}\cdot \det\big(\mathbf J\big) = \binom{n+1}{2} \cdot n^{n-2} \cdot \det\big(\mathbf J\big)##

one way or another the reflection matrix ends up being very useful in this problem.
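Finally, a one-loop numeric check of the original claim in post #1 (numpy assumed; the 0-indexed construction of ##\mathbf H## is mine):

import numpy as np

for n in range(2, 10):
    H = np.array([[(i + j) % n + 1 for j in range(n)] for i in range(n)])
    target = (-1) ** (n * (n - 1) // 2) * (n**n + n**(n - 1)) / 2
    assert np.isclose(np.linalg.det(H), target)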
 

