In many of my physics classes we have been using Taylor expansions, and sometimes I get a bit confused. For example, I feel like different things are going on when one expands (1-x)^(-2) vs. e^(-Ax^2), where A is just some constant in front of x^2 to help make my point. To keep things simple, I will expand around x=0.

In the first case, it seems pretty simple to me. You just take the derivatives of (1-x)^(-2), plug in 0 for x, and multiply the n-th derivative by (x^n)/n! to get 1 + 2x + 3x^2 + 4x^3 + ...

In the second case, Wolfram says the answer is 1 - (Ax^2) + ((Ax^2)^2)/2 - ... What I am confused about is why you essentially replace x with the entire quantity Ax^2. To me, strictly using the formula, the second term should be the first derivative, (-2Ax)e^(-Ax^2), evaluated at x=0, multiplied by x. It seems as if instead we are sort of treating Ax^2 as the variable, saying the derivative of our function evaluated at 0 is 1*e^(0) = 1, and then multiplying by the whole -Ax^2.

I could probably figure out the pattern in problems like this, but I really don't get why these problems are different, and why you are allowed to treat them (what appears to me, but probably isn't) differently. I am guessing it has something to do with the fact that the first case has two terms together raised to a power, whereas that isn't the case in the second situation, but it is all rather nebulous to me. I appreciate any input -- thank you!
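In case it's useful context for an answer, here is a small numerical check I ran (my own sketch, not from any textbook; the variable names and the test values A = 1.5, x = 0.3 are arbitrary choices of mine). It confirms that the direct series for (1-x)^(-2) and the substituted series for e^(-Ax^2) each converge to the original function near x = 0, so both procedures do produce valid expansions:

```python
import math

A, x = 1.5, 0.3  # arbitrary test values, small enough for both series to converge

# Case 1: the direct Taylor series of (1-x)^(-2) about x=0 is sum_n (n+1)*x^n.
geom = sum((n + 1) * x**n for n in range(200))
diff1 = abs(geom - (1 - x)**-2)

# Case 2: substitute u = -A*x^2 into the series for e^u = sum_n u^n / n!.
u = -A * x**2
sub = sum(u**n / math.factorial(n) for n in range(30))
diff2 = abs(sub - math.exp(-A * x**2))

print(diff1, diff2)  # both differences come out at machine-precision level
```

So numerically the substitution trick and the derivative-by-derivative definition agree; my question is about *why* they must.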