MHB How can the Wronskian be used to determine linear independence?

Guest2
I'm asked to check whether $\left\{1, e^{ax}, e^{bx}\right\}$ is linearly independent over $\mathbb{R}$ if $a \ne b$, and compute the dimension of the subspace spanned by it. Google said the easiest way to do this is something called the Wronskian. Is this how you do it? The matrix is:

$ \begin{aligned} \begin{bmatrix}1 & e^{ax} & e^{bx} \\ 0 & a e^{ax} & be^{bx} \\ 0 & a^2 e^{ax} & b^2e^{bx}\end{bmatrix} =\begin{bmatrix}1 & e^{ax} & e^{bx} \\ 0 & a e^{ax} & be^{bx} \\ 0 & 0 & b^2e^{bx}-abe^{bx}\end{bmatrix}\end{aligned}$

This is in upper triangular form, so $\mathcal{W}(1, e^{ax}, e^{bx}) = 1 \cdot ae^{ax} \cdot \left(b^2e^{bx}-ab e^{bx}\right) = ab(b-a)e^{(a+b)x}$, and

$ab(b-a)e^{(a+b)x} = 0 \implies a=0, ~b=0, ~\text{or } a = b.$ Thus $\left\{1, e^{ax}, e^{bx}\right\}$ is linearly independent if $a \ne b$.
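To double-check that, here's a quick SymPy computation (just a sketch; it rebuilds the same matrix of derivatives and factors its determinant):

```python
from sympy import symbols, exp, Matrix, factor

x = symbols('x')
a, b = symbols('a b', real=True)

# Rows: the functions 1, e^{ax}, e^{bx} and their first two derivatives
W = Matrix([
    [1, exp(a*x),      exp(b*x)],
    [0, a*exp(a*x),    b*exp(b*x)],
    [0, a**2*exp(a*x), b**2*exp(b*x)],
])

# Factors as a*b*(b - a)*e^{ax}*e^{bx} (up to how SymPy orders the terms),
# which vanishes identically only when a = 0, b = 0, or a = b
print(factor(W.det()))
```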

The subspace spanned by $\left\{1, e^{ax}, e^{bx}\right\}$ is $l(x) = \left\{\lambda_1 +\lambda_2 e^{ax}+\lambda_3 e^{bx}: \lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}\right\}$

It's a basis for this subspace since it's linearly independent and it spans it. So $\text{dim}(l(x)) = 3$.
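As a numerical sanity check on the dimension (a sketch with arbitrarily chosen values $a = 1$, $b = 2$ and sample points $x = 0, 1, 2$): if the matrix of sampled values is nonsingular, then no nontrivial combination of the three functions can vanish identically, so the span really is $3$-dimensional.

```python
from sympy import Matrix, exp

# Assumed sample values: a = 1, b = 2, sampled at x = 0, 1, 2
M = Matrix([[1, exp(x), exp(2*x)] for x in [0, 1, 2]])

# A nonzero determinant means the sampled columns are independent,
# hence so are the functions 1, e^x, e^{2x}
print(M.det().evalf())  # approx 51.3, in particular nonzero
```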
 
Euge

Hi Guest,

There should be another condition, namely, that $a$ and $b$ are nonzero. Otherwise the set $\{1,e^{ax},e^{bx}\}$ would contain the same element twice (the constant function $1$), which would make it linearly dependent.

Guest said:
The matrix is:

$ \begin{aligned} \begin{bmatrix}1 & e^{ax} & e^{bx} \\ 0 & a e^{ax} & be^{bx} \\ 0 & a^2 e^{ax} & b^2e^{bx}\end{bmatrix} =\begin{bmatrix}1 & e^{ax} & e^{bx} \\ 0 & a e^{ax} & be^{bx} \\ 0 & 0 & b^2e^{bx}-abe^{bx}\end{bmatrix}\end{aligned}$

Did you mean determinant? Even so, why are the two equal?

To prove that $\{1,e^{ax}, e^{bx}\}$ is a linearly independent set of functions, I'll suppose there is a linear dependence relation

$$c_1 + c_2 e^{ax} + c_3 e^{bx} = 0\tag{*}$$

where $c_1,c_2,c_3\in \Bbb R$, and show that $c_1 = c_2 = c_3 = 0$. Evaluating at $x = 0$ gives

$$c_1 + c_2 + c_3 = 0.$$

Taking the derivative of $(*)$ with respect to $x$ and evaluating at $x = 0$, we get

$$ac_2 + bc_3 = 0,$$

or $ac_2 = -bc_3$. Finally, taking the second derivative of $(*)$ with respect to $x$ and evaluating at $x = 0$, we obtain

$$a^2 c_2 + b^2 c_3 = 0,$$

that is, $a^2 c_2 = -b^2 c_3$. Therefore

$$a^2 c_2 = -b^2 c_3 = b(-bc_3) = b(ac_2) = abc_2.$$

So $a(a - b)c_2 = (a^2 - ab)c_2 = 0$. Since $a\neq 0$ and $a\neq b$, we must have $c_2 = 0$. Now $bc_3 = -ac_2 = 0$, so as $b\neq 0$ we have $c_3 = 0$. Finally, $0 = c_1 + c_2 + c_3 = c_1 + 0 + 0 = c_1$. We have now shown that $c_1 = c_2 = c_3 = 0$.
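As a quick cross-check of this system (a SymPy sketch, assuming as above that $a$ and $b$ are nonzero): the coefficient matrix of the three equations has determinant $ab(b - a)$, so for $a \ne b$ the only solution is the trivial one.

```python
from sympy import symbols, Matrix, factor, linsolve

a, b = symbols('a b', nonzero=True)
c1, c2, c3 = symbols('c1 c2 c3')

# Coefficient matrix from evaluating (*) and its first two derivatives at x = 0
M = Matrix([
    [1, 1,    1],
    [0, a,    b],
    [0, a**2, b**2],
])

print(factor(M.det()))  # equals a*b*(b - a); nonzero when a, b != 0 and a != b

# Generic solution of M c = 0: only the trivial one, {(0, 0, 0)}
print(linsolve((M, Matrix([0, 0, 0])), c1, c2, c3))
```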
Guest said:
It's a basis for this subspace since it's linearly independent and it spans it. So $\text{dim}(l(x)) = 3$.

That's right.
 
Hi, Euge. Thanks for such a nice way of doing this.

Euge said:
Did you mean determinant? Even so, why are the two equal?
I meant the matrix. I should have used $\to$, not $=$. I row reduced the matrix (basically, subtracted $a$ times row $2$ from row $3$) to echelon form so that I could extract the determinant as the product of the diagonal entries: $1 \cdot ae^{ax} \cdot (b^2e^{bx}-ab e^{bx})$. This is zero only when $a=0$ or $b = 0$ (which I missed earlier), or when $a=b$. My book failed to mention the condition that $a$ and $b$ are nonzero.
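In SymPy that row-reduction route looks like this (a sketch of exactly the step described above):

```python
from sympy import symbols, exp, Matrix, factor

x, a, b = symbols('x a b')

M = Matrix([
    [1, exp(a*x),      exp(b*x)],
    [0, a*exp(a*x),    b*exp(b*x)],
    [0, a**2*exp(a*x), b**2*exp(b*x)],
])

# Row 3 -> Row 3 - a * Row 2 gives an upper triangular matrix
M[2, :] = M[2, :] - a*M[1, :]

# The determinant is the product of the diagonal entries
print(factor(M[0, 0] * M[1, 1] * M[2, 2]))  # a*b*(b - a)*e^{ax}*e^{bx}
```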
 