
Differential Equations and a vector analogy (weird question)

  1. Mar 7, 2013 #1
    Suppose I have a third order differential equation, and have three solutions, y1, y2, y3.

    I can check whether they are linearly independent as follows: if their Wronskian is non-zero, they are linearly independent.

    But the Wronskian is just the determinant of a matrix:

    [itex]
    \left(\begin{array}{ccc}
    y_1 & y_2 & y_3 \\
    y_1' & y_2' & y_3' \\
    y_1'' & y_2'' & y_3''\end{array}\right)
    [/itex]
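    As a concrete illustration (not from the original problem, just three placeholder functions), here is a quick SymPy sketch that builds exactly this matrix and checks its determinant:

    [code]
    import sympy as sp

    t = sp.symbols('t')
    # Placeholder "solutions", chosen only for illustration.
    y1, y2, y3 = sp.sin(t), sp.cos(t), sp.exp(t)

    # Wronskian matrix: the functions, their first derivatives, their second derivatives.
    W = sp.Matrix([
        [y1, y2, y3],
        [sp.diff(y1, t), sp.diff(y2, t), sp.diff(y3, t)],
        [sp.diff(y1, t, 2), sp.diff(y2, t, 2), sp.diff(y3, t, 2)],
    ])

    print(sp.simplify(W.det()))  # -2*exp(t): never zero, so these three are independent
    [/code]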


    This is extremely analogous to vectors: if I have two vectors expressed in some basis (e1, e2, e3), I can check whether they are linearly independent by taking their cross product. If the cross product is non-zero, then the vectors are linearly independent.

    But the cross product is just the determinant of a matrix as well:

    [itex]
    \left(\begin{array}{ccc}
    e_1 & e_2 & e_3 \\
    a_1 & a_2 & a_3 \\
    b_1 & b_2 & b_3\end{array}\right)
    [/itex]
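    And a tiny NumPy sketch of the same test, with made-up components for the two vectors:

    [code]
    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([2.0, 4.0, 7.0])

    print(np.cross(a, b))      # [ 2. -1.  0.] -> non-zero, so a and b are independent
    print(np.cross(a, 2 * a))  # [ 0.  0.  0.] -> parallel vectors, so dependent
    [/code]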



    So if I'm doing the exact same thing, these two ideas must be related, no?


    If we let some basis be (y1, y2, y3) (the idea of having a basis made of the functions seems weird and kind of circular, since in order for them to form a basis they have to be linearly independent in the first place, but I'll keep typing anyway...)

    Then showing that the vectors

    (y1',y2',y3') and (y1'', y2'', y3'') are linearly independent becomes the exact same task as showing that y1, y2, and y3 are linearly independent solutions to a differential equation.


    Why?

    (I realize that it could be the case that the two "linearly independent" labels are not related for vectors and solutions to DEs in any way, but I chose to assume that there must be some connection.)
     
  3. Mar 8, 2013 #2
    You are right, they are closely related. Here is a breakdown of the logical relationship.

    1. The space of functions you are talking about is the space of solutions of a homogeneous, linear ordinary differential equation (of order 3 in your example).

    2. You have 3 particular solutions and you want to know whether c1y1+c2y2+c3y3 is the "general solution." That means you want to know whether every possible solution can be written this way. In other words, do the functions y1, y2, y3 form a basis for the solutions of the given homogeneous linear ODE?

    3. The Existence and Uniqueness Theorem for linear equations says that a solution is uniquely determined by its initial data y(0), y'(0), y''(0). This part is essential: it gives us a way to check whether we have found all possible solutions to the equation. We just have to check whether c1y1+c2y2+c3y3 can cover all possible initial data. In other words, y1, y2, y3 generate all solutions if we can solve for c1, c2, c3 given any values for y(0), y'(0), y''(0).

    4. Now it is a linear algebra problem. For any triple [itex](y_0, y_0', y_0'')[/itex], can we solve for [itex](c_1,c_2,c_3)[/itex]?
    [itex]
    \left(\begin{array}{ccc}
    y_1(0) & y_2(0) & y_3(0) \\
    y'_1(0) & y'_2(0) & y'_3(0)\\
    y_1''(0) & y_2''(0) & y_3''(0)\end{array}\right)
    \left(\begin{array}{c} c_1 \\ c_2 \\ c_3 \end{array}\right)
    =\left(\begin{array}{c} y_0 \\ y'_0 \\ y''_0 \end{array}\right)
    [/itex]

    5. The answer is yes as long as the Wronskian, the determinant of that matrix, is not zero. We conclude that if the Wronskian is not zero, the functions y1, y2, y3 generate all possible solutions to the equation.
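    (Here is a small numerical sketch of steps 4 and 5. The fundamental set is assumed to be y1 = 1, y2 = t, y3 = t^2 for the equation y''' = 0, and the initial data are chosen arbitrarily.)

    [code]
    import numpy as np

    # Wronskian matrix at t = 0 for the assumed solutions 1, t, t^2 of y''' = 0:
    # rows are the values, first derivatives and second derivatives at 0.
    W0 = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 2.0]])

    initial_data = np.array([5.0, -1.0, 4.0])   # y(0), y'(0), y''(0), picked arbitrarily

    print(np.linalg.det(W0))                    # 2.0 -> Wronskian is non-zero
    print(np.linalg.solve(W0, initial_data))    # [ 5. -1.  2.] -> c1, c2, c3
    [/code]

    Here c1 = 5, c2 = -1, c3 = 2, and indeed y = 5 - t + 2t^2 matches the chosen initial data.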

    6. It can be shown that the Wronskian has the form [itex] W(t) = Ae^{P(t)}[/itex], for some function P(t). Because this is exponential, it cannot be zero unless A=0. In other words, it is always 0 or never 0. Therefore, if y1, y2, y3 are independent at one point, they are independent at every point.
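    (As a quick check of that exponential form, take the assumed example y''' + y'' = 0, which has 1, t and e^{-t} among its solutions; the claim then predicts W(t) = A e^{-t} for some constant A.)

    [code]
    import sympy as sp

    t = sp.symbols('t')
    sols = [sp.Integer(1), t, sp.exp(-t)]   # assumed solutions of y''' + y'' = 0

    # Rows of the Wronskian matrix: 0th, 1st and 2nd derivatives of each solution.
    rows = [[sp.diff(y, t, k) for y in sols] for k in range(3)]

    print(sp.simplify(sp.Matrix(rows).det()))   # exp(-t): an exponential, never zero
    [/code]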
     
  4. Mar 8, 2013 #3
    Wow, thanks for the fantastic reply!

    I didn't make the very clear connection: that if y1, y2, and y3 are linearly independent, then any solution is a linear combination of those 3 (or can be expressed as such), which is exactly the relationship between a vector basis and some vector that exists in that space.

    So these are all the same idea: a group of three fundamental solutions spans a kind of "vector space" of all the other solutions.

    That is just awesome.

    Also, if I understand step 5 correctly, it seems to imply that if the Wronskian is zero, we can't really say anything about (c1, c2, c3). So does this mean that if the Wronskian is 0, we can't conclude that the solutions are linearly dependent? (The converse is not true?)
     
  5. Mar 8, 2013 #4
    The reason I used a 3rd order DE is that the cross product is not defined in 2d (so I had no way to relate it to the Wronskian), but now I see that doesn't matter. It's still the same idea.

    For the two-dimensional case (a second order DE and a 2d vector space), two vectors in R^2 are linearly independent provided they are not parallel. There should be an analog when considering y1 and y2 as possibly linearly independent solutions.

    The Wronskian being zero would imply that:

    y1 y2' = y1' y2

    y1 / y1' = y2 / y2'

    Which implies that:

    y1 = k y2

    Where k is some constant.
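    (To spell out that last step, assuming y1 does not vanish on the interval: by the quotient rule,

    [itex]
    \left(\frac{y_2}{y_1}\right)' = \frac{y_1 y_2' - y_1' y_2}{y_1^2} = 0,
    [/itex]

    so y2/y1 is constant, i.e. one solution is a constant multiple of the other.)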

    That conclusion, I would say, is just as specific as two lines being parallel. If two vectors are parallel, it is also true that the ratios of their components are equal.

    So for a second order DE, two solutions are almost always linearly independent; they fail to be only when one is a multiple of the other, just like vectors.

    So many ideas in math are really the same idea with infinitely many applications and interpretations. :)
     
  6. Mar 8, 2013 #5
    Yeah, there are a lot of connections. Cool, isn't it?

    Since you are talking in terms of cross products and parallel vectors and such, I thought I would mention one other thing in case you didn't know this already. If you have n vectors v1, ..., vn in n-dimensional space ([itex] \mathbb{R}^n [/itex]), then those vectors generate an n-dimensional parallelepiped (like a parallelogram, but n-dimensional). Its volume is equal to (plus or minus) the determinant of the matrix formed by placing the vectors as columns. The vectors are independent exactly when that volume is not zero. If the volume is 0, it is because the parallelepiped is degenerate and does not fully occupy n dimensions. In other words, the test for independence of vectors is whether the determinant of this matrix is 0 or not.
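    (A quick numerical sketch of that volume test, with made-up vectors in [itex] \mathbb{R}^3 [/itex]:)

    [code]
    import numpy as np

    v1 = np.array([1.0, 0.0, 1.0])
    v2 = np.array([0.0, 2.0, 0.0])
    v3 = np.array([1.0, 1.0, 3.0])

    # Signed volume of the parallelepiped = determinant of the matrix whose columns are the vectors.
    print(np.linalg.det(np.column_stack([v1, v2, v3])))        # 4.0 -> non-zero volume, independent

    # Degenerate case: the third vector lies in the plane of the first two.
    print(np.linalg.det(np.column_stack([v1, v2, v1 + v2])))   # 0.0 -> flat parallelepiped, dependent
    [/code]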

    Anyway, that explains why the Wronskian is always just a determinant no matter what the order of the equation.
     
  7. Mar 8, 2013 #6
    Yes, we used the idea of the parallelepiped for something... I believe the scalar triple product (A x B) · C gave us its volume. Zero volume meant A, B, C are coplanar, which in 3d just means they are not linearly independent.
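    (For example, with the same made-up vectors as in the sketch in the previous post, the scalar triple product and the column determinant give the same volume:)

    [code]
    import numpy as np

    A = np.array([1.0, 0.0, 1.0])
    B = np.array([0.0, 2.0, 0.0])
    C = np.array([1.0, 1.0, 3.0])

    print(np.dot(np.cross(A, B), C))                  # 4.0, the scalar triple product (A x B) . C
    print(np.linalg.det(np.column_stack([A, B, C])))  # 4.0, the determinant with A, B, C as columns
    [/code]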

    We never really went into a parallelepiped of dimensions higher than 3.

    That class is really the reason why I tend to see a lot of things in terms of vectors.
     