How can the Wronskian be used to determine linear independence?

SUMMARY

The Wronskian is used to determine the linear independence of the set $\{1, e^{ax}, e^{bx}\}$ over $\mathbb{R}$ when $a \neq b$. The Wronskian works out to $\mathcal{W}(1, e^{ax}, e^{bx}) = ae^{ax}(b^2e^{bx} - abe^{bx}) = ab(b - a)e^{(a+b)x}$, which is nonzero precisely when $a$ and $b$ are both nonzero and distinct. Under those conditions the set is linearly independent, so it is a basis for the subspace it spans, and that subspace has dimension 3.

PREREQUISITES
  • Understanding of the Wronskian determinant
  • Knowledge of linear independence in vector spaces
  • Familiarity with exponential functions and their derivatives
  • Basic matrix operations and row reduction techniques
NEXT STEPS
  • Study the properties of the Wronskian in different contexts
  • Learn about linear transformations and their applications in linear algebra
  • Explore the implications of linear independence in function spaces
  • Investigate the role of determinants in solving systems of linear equations
USEFUL FOR

Mathematicians, students studying linear algebra, and educators teaching concepts of linear independence and differential equations will benefit from this discussion.

I'm asked to check whether $\left\{1, e^{ax}, e^{bx}\right\}$ is linearly independent over $\mathbb{R}$ if $a \ne b$, and compute the dimension of the subspace spanned by it. Google said the easiest way to do this is something called the Wronskian. Is this how you do it? The matrix is:

$ \begin{aligned} \begin{bmatrix}1 & e^{ax} & e^{bx} \\ 0 & a e^{ax} & be^{bx} \\ 0 & a^2 e^{ax} & b^2e^{bx}\end{bmatrix} =\begin{bmatrix}1 & e^{ax} & e^{bx} \\ 0 & a e^{ax} & be^{bx} \\ 0 & 0 & b^2e^{bx}-abe^{bx}\end{bmatrix}\end{aligned}$

Which is in the upper triangular form, therefore $\mathcal{W}(1, e^{ax}, e^{bx}) =ae^{ax}(b^2e^{bx}-ab e^{bx})$ and

$ae^{ax}(b^2e^{bx}-ab e^{bx}) = 0 \implies a = 0$, $b = 0$, or $a = b$. Thus $\left\{1, e^{ax}, e^{bx}\right\}$ is linearly independent if $a \ne b$.
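As a quick sanity check (a sympy sketch, not part of the book's method), the same Wronskian can be computed symbolically and compared against the simplified form $ab(b-a)e^{(a+b)x}$:

```python
# Symbolic sanity check of the Wronskian (assumes sympy is installed).
import sympy as sp

x, a, b = sp.symbols('x a b')
W = sp.wronskian([sp.Integer(1), sp.exp(a*x), sp.exp(b*x)], x)

# The determinant simplifies to a*b*(b - a)*e^((a+b)x), which vanishes
# only when a = 0, b = 0, or a = b.
assert sp.simplify(W - a*b*(b - a)*sp.exp((a + b)*x)) == 0
```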

The subspace spanned by $\left\{1, e^{ax}, e^{bx}\right\}$ is $l(x) = \left\{\lambda_1 +\lambda_2 e^{ax}+\lambda_3 e^{bx}: \lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}\right\}$

It's a basis for this subspace since it's linearly independent and it spans it. So $\text{dim}(l(x)) = 3$.
 
Hi Guest,

There should be another condition, namely, $a$ and $b$ are nonzero. Otherwise, the set $\{1,e^{ax},e^{bx}\}$ will contain two of the same elements (the element $1$), which means that the set would be linearly dependent.

Guest said:
The matrix is:

$ \begin{aligned} \begin{bmatrix}1 & e^{ax} & e^{bx} \\ 0 & a e^{ax} & be^{bx} \\ 0 & a^2 e^{ax} & b^2e^{bx}\end{bmatrix} =\begin{bmatrix}1 & e^{ax} & e^{bx} \\ 0 & a e^{ax} & be^{bx} \\ 0 & 0 & b^2e^{bx}-abe^{bx}\end{bmatrix}\end{aligned}$

Did you mean determinant? Even so, why are the two equal?

To prove that $\{1,e^{ax}, e^{bx}\}$ is a linearly independent set of functions, I'll suppose there is a linear dependence relation

$$c_1 + c_2 e^{ax} + c_3 e^{bx} = 0\tag{*}$$

where $c_1,c_2,c_3\in \Bbb R$, and show that $c_1 = c_2 = c_3 = 0$. Evaluating at $x = 0$ gives

$$c_1 + c_2 + c_3 = 0.$$

Taking the derivative of $(*)$ with respect to $x$ and evaluating at $x = 0$, we get

$$ac_2 + bc_3 = 0,$$

or $ac_2 = -bc_3$. Finally, taking the second derivative of $(*)$ with respect to $x$ and evaluating at $x = 0$, we obtain

$$a^2 c_2 + b^2 c_3 = 0,$$

that is, $a^2 c_2 = -b^2 c_3$. Therefore

$$a^2 c_2 = -b^2 c_3 = b(-bc_3) = b(ac_2) = abc_2.$$

So $a(a - b)c_2 = (a^2 - ab)c_2 = 0$. Since $a\neq 0$ and $a\neq b$, we must have $c_2 = 0$. Now $bc_3 = -ac_2 = 0$, so as $b\neq 0$ we have $c_3 = 0$. Finally, $0 = c_1 + c_2 + c_3 = c_1 + 0 + 0 = c_1$. We have now shown that $c_1 = c_2 = c_3 = 0$.
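The three evaluations above form a $3\times 3$ linear system in $c_1, c_2, c_3$. For concreteness, here is a small numerical sketch (sample values $a=1$, $b=2$, assuming numpy) showing that the system forces the trivial solution:

```python
# Numerical sketch: evaluating c1 + c2 e^(ax) + c3 e^(bx) and its first two
# derivatives at x = 0 gives the system below; with a, b nonzero and
# distinct, its matrix is invertible, so c1 = c2 = c3 = 0 is forced.
import numpy as np

a, b = 1.0, 2.0  # sample nonzero, distinct values
M = np.array([[1.0, 1.0, 1.0],   # c1 +   c2 +   c3 = 0
              [0.0, a,   b  ],   #      a c2 + b   c3 = 0
              [0.0, a*a, b*b]])  #    a^2 c2 + b^2 c3 = 0

det = np.linalg.det(M)           # equals a*b*(b - a); here that is 2
c = np.linalg.solve(M, np.zeros(3))  # only the trivial solution
```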
Guest said:
It's a basis for this subspace since it's linearly independent and it spans it. So $\text{dim}(l(x)) = 3$.

That's right.
 
Hi, Euge. Thanks for such a nice way of doing this.

Euge said:
Did you mean determinant? Even so, why are the two equal?
I meant the matrix. I should have used $\to$, not $=$. I row reduced the matrix (basically subtracted $a$ times row $2$ from row $3$) to echelon form so that I could extract the determinant as the product of the diagonal entries: $1 \cdot ae^{ax} \cdot (b^2e^{bx}-ab e^{bx})$. This is only zero when $a=0$ or $b = 0$ (which I missed earlier), or when $a=b$. My book failed to mention the condition that $a,b$ are nonzero.
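For what it's worth, that row operation can be double-checked symbolically (a sympy sketch of the matrix above): subtracting a multiple of one row from another leaves the determinant unchanged, so the diagonal product of the echelon form must equal the original determinant.

```python
# Check that R3 -> R3 - a*R2 preserves the determinant, and that the
# diagonal product of the resulting echelon form equals the Wronskian.
import sympy as sp

x, a, b = sp.symbols('x a b')
M = sp.Matrix([[1, sp.exp(a*x),      sp.exp(b*x)],
               [0, a*sp.exp(a*x),    b*sp.exp(b*x)],
               [0, a**2*sp.exp(a*x), b**2*sp.exp(b*x)]])

R = M.copy()
R[2, :] = R[2, :] - a * R[1, :]   # subtract a times row 2 from row 3

diag_product = R[0, 0] * R[1, 1] * R[2, 2]
assert sp.simplify(diag_product - M.det()) == 0
```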
 
