# Why does reduction of order work for linear ODEs?

This is not a homework problem; I just want to understand the theory behind this method.

Specifically, if we know that one solution is y1(t), why is the second solution assumed to have the form

y2(t) = v(t) * y1(t),

where v(t) is a function to be determined? Why does this assumption always work?

SammyS (Staff Emeritus, Homework Helper, Gold Member):
The Wikipedia entry for reduction of order gives a good explanation.

http://en.wikipedia.org/wiki/Reduction_of_order
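In outline, the argument that the Wikipedia page spells out for the second-order homogeneous case y'' + p(t) y' + q(t) y = 0 is the following. Substitute the ansatz y2 = v y1 and group the terms by v, v', and v'':

```latex
\begin{aligned}
y_2 &= v\,y_1, \qquad
y_2' = v'\,y_1 + v\,y_1', \qquad
y_2'' = v''\,y_1 + 2v'\,y_1' + v\,y_1'', \\[4pt]
y_2'' + p\,y_2' + q\,y_2
  &= v''\,y_1 + v'\,\bigl(2y_1' + p\,y_1\bigr)
   + v\,\underbrace{\bigl(y_1'' + p\,y_1' + q\,y_1\bigr)}_{=\,0,\ \text{since } y_1 \text{ solves the ODE}} .
\end{aligned}
```

The coefficient of the undifferentiated v is exactly the original differential operator applied to y1, which vanishes because y1 is a solution. So no bare v survives, and setting w = v' leaves the first-order separable equation y1 w' + (2 y1' + p y1) w = 0. That is the "reduction of order": the ansatz works precisely because the v-term always cancels. Solving gives w = C y1^{-2} exp(-∫p dt), which is never identically zero, so v is nonconstant and y2 = v y1 is genuinely independent of y1.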
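As a concrete sanity check (this specific example is mine, not from the thread): for y'' - 2y' + y = 0 with known solution y1 = e^t, the reduced equation for v becomes v'' = 0, so v = t works and y2 = t e^t. A short script verifies, using hand-computed derivatives, that y2 solves the ODE and is independent of y1:

```python
import math

def residual(t: float) -> float:
    """Residual of y'' - 2y' + y = 0 at y2(t) = t * e^t.

    Derivatives computed by hand:
      y2   = t e^t
      y2'  = (t + 1) e^t
      y2'' = (t + 2) e^t
    """
    e = math.exp(t)
    y2 = t * e
    y2p = (t + 1) * e
    y2pp = (t + 2) * e
    return y2pp - 2 * y2p + y2

def wronskian(t: float) -> float:
    """Wronskian of y1 = e^t and y2 = t e^t; equals e^(2t), never zero."""
    e = math.exp(t)
    y1, y1p = e, e
    y2, y2p = t * e, (t + 1) * e
    return y1 * y2p - y1p * y2

for t in (-1.0, 0.0, 0.5, 2.0):
    assert abs(residual(t)) < 1e-9                      # y2 solves the ODE
    assert abs(wronskian(t) - math.exp(2 * t)) < 1e-9   # y1, y2 independent
```

The nonvanishing Wronskian is what makes the reduced-order construction useful: it guarantees {y1, y2} is a fundamental set of solutions, not just a rescaling of y1.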