Linear Differential Equation: Dropping Absolute Value Bars

sweetreason
I'm doing some practice problems, and with the help of my solutions manual and wolfram alpha I've worked out a solution to

(x+1)\frac{dy}{dx} +xy = e^{-x}

However, I don't understand why we can drop the absolute value bars when we calculate the integrating factor:

e^{\int \frac{x}{x+1} dx} = e^{x- \ln|x+1|} = \frac{e^x}{x+1}

I understand that if we don't drop the absolute value bars in the last step, certain terms won't cancel when we multiply our equation by this integrating factor. But why are we justified in doing so? Usually we say something about how x > 0, but this isn't an initial value problem and x isn't specified to be in any particular interval. Can you please explain why we are justified in dropping the absolute value bars? How do we know that we aren't losing solutions when we do this?
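(For reference, the integral in the exponent above comes from splitting the integrand:

\int \frac{x}{x+1} dx = \int \left(1 - \frac{1}{x+1}\right) dx = x - \ln|x+1| + C )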

(For a full solution to this, see

http://www.wolframalpha.com/input/?i=(x+1)dy/dx++xy+=e^(-x)

and click "show steps" under "Differential Equation Solutions".)

Thanks so much!
 
Try to think of the reason why the absolute value bars are there in the first place. And then try to reason out why they would be dropped when the natural logarithm is eliminated.
 
Well, I know the absolute value bars are there to extend the domain of the natural log function, which is normally only (0, +infinity). I guess what you're implying is that e^{\ln x} = e^{\ln|x|} holds only when x = |x|, i.e. x \geq 0? Is that right?
 
I think it has something to do with the fact that the indefinite integral of 1/t is defined from 1 to x (because there is an asymptote at t=0), but if you are considering e raised to the power of that integral, I don't think there is a need to take the original complication into account. I'm really not sure now; you are asking a really good question. I'll look into it some more, but hopefully by then someone else will offer you a better explanation.
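One way to convince yourself that the bars can be dropped: on any interval where the equation is defined, keeping the bars changes the integrating factor by at most a nonzero constant multiple, and a constant multiple of an integrating factor is still an integrating factor. A quick symbolic check (a sketch, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')

# Standard form of the ODE: y' + p(x) y = q(x), with p(x) = x/(x+1)
p = x / (x + 1)

# Integrating factor with the absolute value bars dropped
mu = sp.exp(x) / (x + 1)

# mu is an integrating factor exactly when mu' = p * mu
print(sp.simplify(sp.diff(mu, x) - p * mu))  # -> 0

# On x < -1 we have |x+1| = -(x+1), so keeping the bars just multiplies
# mu by -1; that constant multiple satisfies the same condition:
print(sp.simplify(sp.diff(-mu, x) - p * (-mu)))  # -> 0
```

So on each of the two intervals (x < -1 and x > -1), the version with bars and the version without differ only by a factor of ±1, and both turn the left-hand side into an exact derivative; no solutions are lost.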
 