Differential Equations and a vector analogy (weird question)

In summary, you can check whether three solutions to a third-order differential equation are linearly independent by calculating their Wronskian, which is analogous to taking the cross product of two vectors. This is because the Wronskian is just the determinant of a matrix, and for n vectors in n-dimensional space, the determinant of the matrix formed by the vectors is a test for independence. If the Wronskian is not zero, the solutions are guaranteed to be linearly independent; if it is zero, the functions may in general still be independent, so that direction of the test is not conclusive on its own.
  • #1
1MileCrash
Suppose I have a third order differential equation, and have three solutions, y1, y2, y3.

I can check whether they are linearly independent as follows: if their Wronskian is non-zero, they are linearly independent.

But the wronskian is just a determinant of a matrix.

[itex]
\left|\begin{array}{ccc}
y_1 & y_2 & y_3 \\
y_1' & y_2' & y_3' \\
y_1'' & y_2'' & y_3''
\end{array}\right|
[/itex]

This is extremely analogous to vectors, because if I have two vectors in some basis (e1, e2, e3), I can see whether those two vectors are linearly independent by taking their cross product. If the cross product is non-zero, then the vectors are linearly independent.

But the cross product is just the determinant of a matrix as well.

[itex]
\left|\begin{array}{ccc}
e_1 & e_2 & e_3 \\
a_1 & a_2 & a_3 \\
b_1 & b_2 & b_3
\end{array}\right|
[/itex]
So if I'm doing the exact same thing, these two ideas must be related, no?

If we let some basis be (y1, y2, y3) (the idea of having a system based on the functions seems weird and kind of circular, since in order for them to form a basis they have to be linearly independent in the first place, but I'll keep typing anyway...)

Then showing that the vectors

(y1', y2', y3') and (y1'', y2'', y3'') are linearly independent becomes the exact same task as showing that y1, y2, and y3 are linearly independent solutions to a differential equation. Why?

(I realize that it could be the case that the two "linearly independent" labels are not related for vectors and solutions to DEs in any way, but I chose to assume that there must be some correlation.)
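
To make the check concrete, here is a minimal sketch, assuming the sympy library and the standard textbook example y''' - 6y'' + 11y' - 6y = 0 with solutions e^x, e^(2x), e^(3x) (an example chosen for illustration, not taken from the thread). It builds that matrix and evaluates the determinant:

[code]
# Minimal sketch (assumes sympy): Wronskian of e^x, e^(2x), e^(3x),
# three solutions of the example equation y''' - 6y'' + 11y' - 6y = 0.
import sympy as sp

x = sp.symbols('x')
y1, y2, y3 = sp.exp(x), sp.exp(2*x), sp.exp(3*x)

# Rows of the Wronskian matrix: the functions, then their first and second derivatives.
W = sp.Matrix([
    [y1, y2, y3],
    [sp.diff(y1, x), sp.diff(y2, x), sp.diff(y3, x)],
    [sp.diff(y1, x, 2), sp.diff(y2, x, 2), sp.diff(y3, x, 2)],
])

print(sp.simplify(W.det()))  # 2*exp(6*x), never zero, so these solutions are independent
[/code]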
 
  • #2
You are right, they are closely related. Here is a breakdown of the logical relationship.

1. The space of functions you are talking about is the space of solutions of a homogeneous, linear ordinary differential equation (of order 3 in your example).

2. You have 3 particular solutions and you want to know whether c1y1+c2y2+c3y3 is the "general solution." That means you want to know whether every possible solution can be written this way. In other words, do the functions y1, y2, y3 form a basis for the solutions of the given homogeneous linear ODE?

3. The Existence and Uniqueness Theorem for linear equations says that a solution is uniquely determined from its initial data y(0), y'(0), y''(0). This part is essential. It gives us a way to check whether we have found all possible solutions to the equation. We just have to check whether c1y1+c2y2+c3y3 can cover all possible initial data. In other words, y1, y2, y3 generate all solutions if we can solve for c1, c2, c3 given any values for y(0), y'(0), y''(0).

4. Now, it is a linear algebra problem. For any triple [itex](y_0, y_0', y_0'')[/itex], can we solve for [itex](c_1, c_2, c_3)[/itex]?
[itex]
\left(\begin{array}{ccc}
y_1(0) & y_2(0) & y_3(0) \\
y'_1(0) & y'_2(0) & y'_3(0)\\
y_1''(0) & y_2''(0) & y_3''(0)\end{array}\right)
\left(\begin{array}{c} c_1 \\ c_2 \\ c_3 \end{array}\right)
=\left(\begin{array}{c} y_0 \\ y'_0 \\ y''_0 \end{array}\right)
[/itex]

5. The answer is yes, as long as the Wronskian is not zero. We conclude that if the Wronskian is not zero, the functions y1, y2, y3 generate all possible solutions to the equation (a concrete sketch of this step follows the list).

6. It can be shown that the Wronskian has the form [itex] W(t) = Ae^{P(t)}[/itex], for some function P(t). Because this is exponential, it cannot be zero unless A=0. In other words, it is always 0 or never 0. Therefore, if y1, y2, y3 are independent at one point, they are independent at every point.
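
A minimal sketch of steps 4-5, assuming sympy and the hypothetical solutions e^x, e^(2x), e^(3x) of y''' - 6y'' + 11y' - 6y = 0 (an example not used above): the matrix of initial data at 0 has nonzero determinant (the Wronskian at 0), so any initial data determines the coefficients uniquely.

[code]
# Minimal sketch (assumes sympy): solve the step-4 system for (c1, c2, c3).
import sympy as sp

x = sp.symbols('x')
sols = [sp.exp(x), sp.exp(2*x), sp.exp(3*x)]

# Matrix of the solutions and their first two derivatives, evaluated at x = 0.
M = sp.Matrix([[sp.diff(f, x, k).subs(x, 0) for f in sols] for k in range(3)])

print(M.det())             # the Wronskian at 0; nonzero, so M is invertible

y0 = sp.Matrix([1, 0, 0])  # example initial data (y(0), y'(0), y''(0))
print(M.LUsolve(y0))       # the unique (c1, c2, c3) matching that data
[/code]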
 
  • #3
Wow, thanks for the fantastic reply!

I didn't make the very clear connection: that if y1, y2, and y3 are linearly independent, then any solution is a linear combination of those 3 (or can be expressed as such), which is exactly the relationship between a vector basis and some vector that exists in that space.

So these are all the same idea; a group of three fundamental solutions forms a kind of "vector space" of all other solutions.

That is just awesome.

Also, if I understand step 5 correctly, it seems to imply that if the Wronskian is zero, we can't really say anything about (c1, c2, c3). So does this mean that if the Wronskian is 0, we can't really say that the solutions are linearly dependent? (The converse is not true?)
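
For general functions (not necessarily solutions of the same linear ODE), the converse can indeed fail. A standard textbook counterexample, added here only for illustration, is f(x) = x^2 and g(x) = x|x| on the whole real line:

[itex]
W(f, g)(x) = \left|\begin{array}{cc} x^2 & x|x| \\ 2x & 2|x| \end{array}\right| = 2x^2|x| - 2x^2|x| = 0 \quad \text{for all } x,
[/itex]

yet no single constant k gives x|x| = k x^2 on all of R (it would need k = 1 for x > 0 and k = -1 for x < 0). For solutions of one homogeneous linear ODE, however, the uniqueness argument in step 3 together with step 6 rules this out: an identically zero Wronskian does force linear dependence there.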
 
  • #4
The reason I did a 3rd order DE is that the cross product is not defined in 2D (thus I had no way to relate it to the Wronskian), but now I see that doesn't matter. It's still the same idea.

For the two-dimensional case (a second-order DE and a 2D vector space), two vectors in R^2 are always linearly independent provided they are not parallel. There would be an analog when considering y1 and y2 as possibly linearly independent solutions.

The Wronskian being zero would imply that:

Y1Y2' = Y1'Y2

Dividing both sides by Y1Y2 (where neither is zero):

Y1'/Y1 = Y2'/Y2

Integrating both sides gives ln|Y1| = ln|Y2| + C, which implies that:

Y1 = kY2

where k is some constant.

Which I would say is just as specific as two lines being parallel. If two vectors are parallel, it is also true that the ratios of their components are the same.

So for a second-order DE, two solutions are linearly independent unless one of them is a constant multiple of the other, just like vectors.

So many ideas in math are really the same idea with infinitely many applications and interpretations. :)
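
A quick sanity check of the 2x2 case, assuming sympy and the hypothetical pair e^x and x e^x (solutions of y'' - 2y' + y = 0) versus the proportional pair e^x and 3e^x:

[code]
# Minimal sketch (assumes sympy): 2x2 Wronskians for an independent pair
# (e^x, x*e^x) and a "parallel" pair (e^x, 3*e^x).
import sympy as sp

x = sp.symbols('x')

def wronskian2(f, g):
    # | f   g  |
    # | f'  g' |
    return sp.simplify(f * sp.diff(g, x) - sp.diff(f, x) * g)

print(wronskian2(sp.exp(x), x * sp.exp(x)))   # exp(2*x): never zero -> independent
print(wronskian2(sp.exp(x), 3 * sp.exp(x)))   # 0: one is a constant multiple of the other
[/code]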
 
  • #5
Yeah, there are a lot of connections. Cool, isn't it?

Since you are talking in terms of cross products and parallel vectors and such, I thought I would mention one other thing in case you didn't know this already. If you have n vectors v1, ..., vn in n-dimensional space ([itex] \mathbb{R}^n [/itex]), then those vectors generate an n-dimensional parallelepiped (like a parallelogram, but n-dimensional). Its volume is equal to (plus or minus) the determinant of the matrix formed by placing the vectors as columns. Those vectors are independent if that volume is not zero. If the volume is 0, it is because the parallelepiped is degenerate and does not fully occupy n dimensions. In other words, the test for independence of vectors is whether the determinant of this matrix is 0 or not.

Anyway, that explains why the Wronskian is always just a determinant no matter what the order of the equation.
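
To put numbers on that (a minimal sketch, assuming numpy and three made-up vectors): the absolute value of the determinant is the parallelepiped's volume, and it drops to zero exactly when the vectors become dependent.

[code]
# Minimal sketch (assumes numpy): |det| as parallelepiped volume and
# as an independence test for three vectors in R^3.
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 0.0])
c = np.array([1.0, 1.0, 3.0])

M = np.column_stack([a, b, c])   # vectors as columns
print(abs(np.linalg.det(M)))     # 6.0: nonzero volume -> independent

# Replace c with a combination of a and b: the parallelepiped flattens out.
M2 = np.column_stack([a, b, a + 2 * b])
print(abs(np.linalg.det(M2)))    # 0.0 -> dependent
[/code]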
 
  • #6
Yes, we used the idea of the parallelepiped for something... I believe the scalar triple product (A x B) · C gave us the volume of it. Zero volume meant A, B, C are coplanar, which in 3D just means they are not linearly independent.

We never really went into a parallelepiped of dimensions higher than 3.

That class is really the reason why I tend to see a lot of things in terms of vectors.
 

1. What are differential equations?

Differential equations are mathematical equations that describe the relationship between a function and its derivatives. They are commonly used to model physical phenomena and are essential in many fields of science and engineering.

2. How are differential equations solved?

Differential equations can be solved analytically, using mathematical techniques such as separation of variables or integrating factors. They can also be solved numerically, using computer algorithms such as Euler's method or Runge-Kutta methods.
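
As a minimal numerical illustration (a sketch only, using forward Euler on the made-up problem y' = -2y with y(0) = 1):

[code]
# Minimal sketch: forward Euler for the example problem y' = -2*y, y(0) = 1.
def euler(f, t0, y0, h, steps):
    """Advance y' = f(t, y) from (t0, y0) using a fixed step size h."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return t, y

# The exact solution is y(t) = exp(-2*t), so y(1) is about 0.1353;
# Euler with h = 0.01 lands close to that.
print(euler(lambda t, y: -2 * y, t0=0.0, y0=1.0, h=0.01, steps=100))
[/code]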

3. What is a vector analogy in relation to differential equations?

A vector analogy is a way of visualizing and understanding differential equations by comparing them to vector operations. For example, the derivative of a function can be thought of as the velocity of a particle moving along a curve, similar to the direction and magnitude of a vector in space.

4. How does a vector analogy help in understanding differential equations?

A vector analogy can help in understanding differential equations by providing a more intuitive and visual representation of the concepts involved. It can also make it easier to grasp the relationships between different variables and how they change over time.

5. Can differential equations be applied to real-world problems?

Yes, differential equations are commonly used in various fields of science and engineering to model and solve real-world problems. They have applications in physics, chemistry, biology, economics, and many other areas.
