MHB Proving the Formula for Determinants by Induction

mathmari
Hey!

Let $\mathbb{K}$ be a field and let $1\leq n\in \mathbb{N}$. Let $a_0, \ldots , a_{n-1}\in \mathbb{K}$ and let $m_n\in M_n(\mathbb{K})$ be given by \begin{equation*}m_n:=\begin{pmatrix}0 & 0 & \ldots & 0 & -a_0 \\ 1 & \ddots & \ddots & \vdots & \vdots \\ 0 & \ddots & \ddots & 0 & \vdots \\ \vdots & \ddots & \ddots & 0 & \vdots \\ 0 & \ldots & 0 & 1 & -a_{n-1}\end{pmatrix}\end{equation*}
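For example, for $n=3$ the matrix is \begin{equation*}m_3=\begin{pmatrix}0 & 0 & -a_0 \\ 1 & 0 & -a_1 \\ 0 & 1 & -a_2\end{pmatrix}.\end{equation*}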

I want to show that for $\lambda\in \mathbb{K}$ it holds that $\displaystyle{\det (\lambda u_n-m_n) =\lambda^n+\sum_{i=0}^{n-1}a_i \lambda^i}$, where $u_n$ denotes the $n\times n$ identity matrix. First I applied the Laplace expansion theorem to see why the formula must be the one above. We have
\begin{equation*}\det (\lambda u_n-m_n)=\begin{vmatrix}\lambda & 0 & \ldots & 0 & a_0 \\ -1 & \ddots & \ddots & \vdots & \vdots \\ 0 & \ddots & \ddots & 0 & \vdots \\ \vdots & \ddots & \ddots & \lambda & \vdots \\ 0 & \ldots & 0 & -1 & \lambda+a_{n-1}\end{vmatrix}\end{equation*}
We apply the Laplace expansion along the first row:
\begin{equation*}\det (\lambda u_n-m_n)=\begin{vmatrix}\lambda & 0 & \ldots & 0 & a_0 \\ -1 & \ddots & \ddots & \vdots & \vdots \\ 0 & \ddots & \ddots & 0 & \vdots \\ \vdots & \ddots & \ddots & \lambda & \vdots \\ 0 & \ldots & 0 & -1 & \lambda+a_{n-1}\end{vmatrix}=\lambda \begin{vmatrix} \lambda & 0 & \vdots & a_1 \\ -1 & \ddots & 0 & \vdots \\ \ddots & \ddots & \lambda & \vdots \\ \ldots & 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+(-1)^{n+1}a_0\begin{vmatrix}-1 & \lambda & \ddots & \vdots \\ 0 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \lambda \\ 0 & \ldots & 0 & -1 \end{vmatrix}\end{equation*}
The matrix $\begin{pmatrix}-1 & \lambda & \ddots & \vdots \\ 0 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \lambda \\ 0 & \ldots & 0 & -1 \end{pmatrix}$ is an upper triangular $(n-1)\times(n-1)$ matrix, so its determinant is the product of the diagonal elements, i.e. $(-1)^{n-1}$.
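For instance, for $n=4$ this minor is \begin{equation*}\begin{pmatrix}-1 & \lambda & 0 \\ 0 & -1 & \lambda \\ 0 & 0 & -1\end{pmatrix},\end{equation*} and its determinant is indeed $(-1)^3=(-1)^{n-1}$.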

So we get \begin{align*}\det (\lambda u_n-m_n)&=\begin{vmatrix}\lambda & 0 & \ldots & 0 & a_0 \\ -1 & \ddots & \ddots & \vdots & \vdots \\ 0 & \ddots & \ddots & 0 & \vdots \\ \vdots & \ddots & \ddots & \lambda & \vdots \\ 0 & \ldots & 0 & -1 & \lambda+a_{n-1}\end{vmatrix} =\lambda \begin{vmatrix} \lambda & 0 & \vdots & a_1 \\ -1 & \ddots & 0 & \vdots \\ \ddots & \ddots & \lambda & \vdots \\ \ldots & 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+(-1)^{n+1}a_0(-1)^{n-1}\\ & =\lambda \begin{vmatrix} \lambda & 0 & \vdots & a_1 \\ -1 & \ddots & 0 & \vdots \\ \ddots & \ddots & \lambda & \vdots \\ \ldots & 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+(-1)^{2n}a_0 =\lambda \begin{vmatrix} \lambda & 0 & \vdots & a_1 \\ -1 & \ddots & 0 & \vdots \\ \ddots & \ddots & \lambda & \vdots \\ \ldots & 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+a_0\end{align*}
We expand again along the first row:
\begin{align*}\det (\lambda u_n-m_n)&=\lambda \left (\lambda \begin{vmatrix} \lambda & 0 & a_2 \\ \ddots & \lambda & \vdots \\ 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+(-1)^na_1\begin{vmatrix} -1 & \ddots & 0 \\ \ddots & \ddots & \lambda \\ \ldots & 0 & -1 \end{vmatrix}\right )+a_0\\ & =\lambda \left (\lambda \begin{vmatrix} \lambda & 0 & a_2 \\ \ddots & \lambda & \vdots \\ 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+(-1)^na_1(-1)^{n-2}\right )+a_0 \\ & =\lambda \left (\lambda \begin{vmatrix} \lambda & 0 & a_2 \\ \ddots & \lambda & \vdots \\ 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+(-1)^{2n-2}a_1\right )+a_0 \\ & =\lambda^2 \begin{vmatrix} \lambda & 0 & a_2 \\ \ddots & \lambda & \vdots \\ 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+ a_1 \lambda+a_0\end{align*}
We expand again along the first row:
\begin{align*}\det (\lambda u_n-m_n)&=\lambda^2 \left (\lambda \begin{vmatrix} \lambda & 0 & a_3 \\ \ddots & \lambda & \vdots \\ 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+(-1)^{n-1}a_2\begin{vmatrix} -1 & \ddots & 0 \\ \ddots & \ddots & \lambda \\ \ldots & 0 & -1 \end{vmatrix}\right )+a_1\lambda +a_0\\ & =\lambda^2 \left (\lambda \begin{vmatrix} \lambda & 0 & a_3 \\ \ddots & \lambda & \vdots \\ 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+(-1)^{n-1}a_2(-1)^{n-3}\right )+a_1\lambda +a_0 \\ & =\lambda^2 \left (\lambda \begin{vmatrix} \lambda & 0 & a_3 \\ \ddots & \lambda & \vdots \\ 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+(-1)^{2n-4}a_2\right )+a_1\lambda +a_0 \\ & =\lambda^3 \begin{vmatrix} \lambda & 0 & a_3 \\ \ddots & \lambda & \vdots \\ 0 & -1 & \lambda+a_{n-1}\end{vmatrix}+a_2\lambda^2+a_1\lambda +a_0\end{align*}
If we apply the Laplace expansion several times in this way, we see that the formula is:
\begin{equation*}\det (\lambda u_n-m_n) =\lambda^n+\sum_{i=0}^{n-1}a_i \lambda^i\end{equation*}
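As a quick sanity check (independent of the proof), one can ask a computer algebra system to verify the conjectured formula for a few small values of $n$. Below is a minimal SymPy sketch; the helper name char_det and the symbol names lam, a0, ..., a_{n-1} are just illustrative choices, not part of the exercise:
[CODE=python]
import sympy as sp

lam = sp.Symbol('lambda')

def char_det(n):
    """Build m_n as defined above and compare det(lambda*u_n - m_n) with the claimed formula."""
    a = sp.symbols(f'a0:{n}')                # a_0, ..., a_{n-1}
    m = sp.zeros(n, n)
    for i in range(1, n):
        m[i, i - 1] = 1                      # ones on the subdiagonal
    for i in range(n):
        m[i, n - 1] = -a[i]                  # last column: -a_0, ..., -a_{n-1}
    det = sp.expand((lam * sp.eye(n) - m).det())
    claimed = sp.expand(lam**n + sum(a[i] * lam**i for i in range(n)))
    return det, claimed

for n in range(2, 7):
    det, claimed = char_det(n)
    print(n, sp.simplify(det - claimed) == 0)   # expect: True for every n
[/CODE]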

Now we show that this formula is true using induction. (Is the way I do that correct?)

For the base case we have $n=1$, don't we? But what does the matrix look like for $n=1$? Do we have that \begin{equation*}\det (\lambda u_1-m_1)=\begin{vmatrix}\lambda \end{vmatrix}=\lambda\end{equation*} ? But then the formula wouldn't hold, would it?

:unsure:
 
mathmari said:
For the base case we have $n=1$, don't we? But what does the matrix look like for $n=1$? Do we have that \begin{equation*}\det (\lambda u_1-m_1)=\begin{vmatrix}\lambda \end{vmatrix}=\lambda\end{equation*} ? But then the formula wouldn't hold, would it?
The question says "let $1\leq n\in \mathbb{N}$", but the matrix $m_n$ does not appear to be defined when $n=1$. In fact, $m_1$ has only one entry, which would have to be equal to both $0$ and $-a_0$. I think that the question should have said "let $2\leq n\in \mathbb{N}$" and that you should take the base case to be $n=2$.
 
Opalg said:
The question says "let $1\leq n\in \mathbb{N}$", but the matrix $m_n$ does not appear to be defined when $n=1$. In fact, $m_1$ has only one entry, which would have to be equal to both $0$ and $-a_0$. I think that the question should have said "let $2\leq n\in \mathbb{N}$" and that you should take the base case to be $n=2$.

Ahh ok! So we have the following:

Base Case: For $n=2$ we have that \begin{equation*}\det (\lambda u_2-m_2)=\begin{vmatrix}\lambda & a_0\\ -1 & \lambda +a_1 \end{vmatrix}=\lambda(\lambda+a_1)-(-1)a_0=\lambda^2+a_1 \lambda+a_0=\lambda^2+\sum_{i=0}^{1}a_i \lambda^i\end{equation*} and so the formula holds.

Inductive hypothesis: We assume that the formula holds for $n=k$, i.e. \begin{equation*}\det (\lambda u_k-m_k) =\lambda^k+\sum_{i=0}^{k-1}a_i \lambda^i, \ \ \ \ \ (IH)\end{equation*}

Inductive step: We want to show that the formula holds also for $n=k+1$, i.e. \begin{equation*}\det (\lambda u_{k+1}-m_{k+1}) =\lambda^{k+1}+\sum_{i=0}^{(k+1)-1}a_i\lambda^i=\lambda^{k+1}+\sum_{i=0}^{k}a_i \lambda^i\end{equation*}
We have that
\begin{equation*}\det (\lambda u_{k+1}-m_{k+1})=\begin{vmatrix}\lambda & 0 & \ldots & 0 & a_0 \\ -1 & \ddots & \ddots & \vdots & \vdots \\ 0 & \ddots & \ddots & 0 & \vdots \\ \vdots & \ddots & \ddots & \lambda & \vdots \\ 0 & \ldots & 0 & -1 & \lambda+a_{k}\end{vmatrix}\end{equation*}
Do we expand along the first row? Then we would get the following:
\begin{equation*}\det (\lambda u_{k+1}-m_{k+1})=\begin{vmatrix}\lambda & 0 & \ldots & 0 & a_0 \\ -1 & \ddots & \ddots & \vdots & \vdots \\ 0 & \ddots & \ddots & 0 & \vdots \\ \vdots & \ddots & \ddots & \lambda & \vdots \\ 0 & \ldots & 0 & -1 & \lambda+a_{k}\end{vmatrix}=\lambda \begin{vmatrix} \lambda & 0 & \vdots & a_1 \\ -1 & \ddots & 0 & \vdots \\ \ddots & \ddots & \lambda & \vdots \\ \ldots & 0 & -1 & \lambda+a_{k}\end{vmatrix}+(-1)^{k+2}a_0\begin{vmatrix}-1 & \lambda & \ddots & \vdots \\ 0 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \lambda \\ 0 & \ldots & 0 & -1 \end{vmatrix}\end{equation*}

Now the matrix $\begin{pmatrix} \lambda & 0 & \vdots & a_1 \\ -1 & \ddots & 0 & \vdots \\ \ddots & \ddots & \lambda & \vdots \\ \ldots & 0 & -1 & \lambda+a_{k}\end{pmatrix}$ is a $k\times k$ matrix, but is it of the form $\lambda u_{k}-m_{k}$, so that we can use $(IH)$? I ask because in the first row we now have $a_1$ instead of $a_0$. Or do we have to amend the formula of $(IH)$ accordingly?

:unsure:
 
The inductive hypothesis should say that the formula holds for every element $(a_0,\ldots,a_{n-1})\in\Bbb{K}^n$. In the inductive argument you start with a $(k+1)\times(k+1)$ matrix whose last column is $(a_0,\ldots,a_{k})\in\Bbb{K}^{k+1}$, and you end up with a $k\times k$ matrix whose last column is $(a_1,\ldots,a_k)$, which is an element of $\Bbb{K}^k$. That is in accordance with the inductive hypothesis. It doesn't matter how the coordinates of that element are labelled.
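In other words (writing $m_k(a_0,\ldots,a_{k-1})$ for the matrix built from a given coefficient tuple; this notation is not used above and is only meant to make the quantifier visible), the statement proved by induction on $k$ is \begin{equation*}\forall\, (a_0,\ldots,a_{k-1})\in\Bbb{K}^k,\ \forall\,\lambda\in\Bbb{K}:\qquad \det\bigl(\lambda u_k-m_k(a_0,\ldots,a_{k-1})\bigr)=\lambda^k+\sum_{i=0}^{k-1}a_i\lambda^i.\end{equation*} The inductive hypothesis can then be applied to the tuple $(a_1,\ldots,a_k)\in\Bbb{K}^k$ that appears in the last column of the smaller matrix.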
 
Opalg said:
The inductive hypothesis should say that the formula holds for every element $(a_0,\ldots,a_{n-1})\in\Bbb{K}^n$. In the inductive argument you start with a $(k+1)\times(k+1)$ matrix whose last column is $(a_0,\ldots,a_{k})\in\Bbb{K}^{k+1}$, and you end up with a $k\times k$ matrix whose last column is $(a_1,\ldots,a_k)$, which is an element of $\Bbb{K}^k$. That is in accordance with the inductive hypothesis. It doesn't matter how the coordinates of that element are labelled.

Ah ok! So at the inductive step we have the following:
\begin{equation*}\det (\lambda u_{k+1}-m_{k+1})=\begin{vmatrix}\lambda & 0 & \ldots & 0 & a_0 \\ -1 & \ddots & \ddots & \vdots & \vdots \\ 0 & \ddots & \ddots & 0 & \vdots \\ \vdots & \ddots & \ddots & \lambda & \vdots \\ 0 & \ldots & 0 & -1 & \lambda+a_{k}\end{vmatrix}\end{equation*}
We expand along the first row:
\begin{equation*}\det (\lambda u_{k+1}-m_{k+1})=\begin{vmatrix}\lambda & 0 & \ldots & 0 & a_0 \\ -1 & \ddots & \ddots & \vdots & \vdots \\ 0 & \ddots & \ddots & 0 & \vdots \\ \vdots & \ddots & \ddots & \lambda & \vdots \\ 0 & \ldots & 0 & -1 & \lambda+a_{k}\end{vmatrix}=\lambda \begin{vmatrix} \lambda & 0 & \vdots & a_1 \\ -1 & \ddots & 0 & \vdots \\ \ddots & \ddots & \lambda & \vdots \\ \ldots & 0 & -1 & \lambda+a_{k}\end{vmatrix}+(-1)^{k+2}a_0\begin{vmatrix}-1 & \lambda & \ddots & \vdots \\ 0 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \lambda \\ 0 & \ldots & 0 & -1 \end{vmatrix}\end{equation*}
The matrix $\begin{pmatrix} \lambda & 0 & \vdots & a_1 \\ -1 & \ddots & 0 & \vdots \\ \ddots & \ddots & \lambda & \vdots \\ \ldots & 0 & -1 & \lambda+a_{k}\end{pmatrix}$ is a $k\times k$ matrix of the form $\lambda u_{k}-m_{k}$, built from the coefficients $a_1,\ldots,a_k$. From $(IH)$ we get that its determinant is equal to $\displaystyle{\lambda^k+\sum_{i=0}^{k-1}a_{i+1} \lambda^i}$.

The matrix $\begin{pmatrix}-1 & \lambda & \ddots & \vdots \\ 0 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \lambda \\ 0 & \ldots & 0 & -1 \end{pmatrix}$ is an upper triangular $k\times k$ matrix, so its determinant is equal to the product of the diagonal elements, i.e. $(-1)^k$.

So we get:
\begin{align*}\det (\lambda u_{k+1}-m_{k+1})&=\lambda \begin{vmatrix} \lambda & 0 & \vdots & a_1 \\ -1 & \ddots & 0 & \vdots \\ \ddots & \ddots & \lambda & \vdots \\ \ldots & 0 & -1 & \lambda+a_{k}\end{vmatrix}+(-1)^{k+2}a_0\begin{vmatrix}-1 & \lambda & \ddots & \vdots \\ 0 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \lambda \\ 0 & \ldots & 0 & -1 \end{vmatrix} \\ & =\lambda \left (\lambda^k+\sum_{i=0}^{k-1}a_{i+1} \lambda^i\right )+(-1)^{k+2}a_0(-1)^k=\lambda^{k+1}+\lambda\sum_{i=0}^{k-1}a_{i+1} \lambda^i+(-1)^{2k+2}a_0\\ & =\lambda^{k+1}+\lambda\sum_{i=0}^{k-1}a_{i+1} \lambda^i+a_0=\lambda^{k+1}+\sum_{i=0}^{k-1}a_{i+1} \lambda^{i+1}+a_0=\lambda^{k+1}+\sum_{i=1}^{k}a_{i} \lambda^{i}+a_0 \\ & =\lambda^{k+1}+\sum_{i=0}^{k}a_{i} \lambda^{i} \end{align*} Is everything correct? Could we improve something? :unsure:
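As an additional concrete check of the result (with arbitrarily chosen coefficients), take $n=3$ and $a_0=2$, $a_1=3$, $a_2=5$: \begin{equation*}\det (\lambda u_3-m_3)=\begin{vmatrix}\lambda & 0 & 2 \\ -1 & \lambda & 3 \\ 0 & -1 & \lambda+5\end{vmatrix}=\lambda\bigl(\lambda(\lambda+5)+3\bigr)+2=\lambda^3+5\lambda^2+3\lambda+2,\end{equation*} which agrees with $\lambda^3+a_2\lambda^2+a_1\lambda+a_0$.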
 