Non-linear into linear

  • Thread starter TheoEndre
  • #1
Hello everyone,
I've always had this question in my mind: can we convert a non-linear function into a system of linear functions?
I don't know if this actually exists in math (I searched a little bit, to be honest), but I'm really interested in this question because it would make integrals much easier (and probably other things).
For example:
##\int \left(x^2 \pm k\right)^n \, dx## where ##n##, ##k## are integers.
If we could just decrease the power of ##x^2## to the first degree, it would be much easier to evaluate than using a trig substitution.
 

Answers and Replies

  • #2
WWGD
Science Advisor
Gold Member
All I can think of is either using a change of variable, ##x^2 = u##, but then you have ##2x\,dx = du## and you need to make the corresponding changes; or factoring ##x^2 \pm k## into ##(x + i\sqrt{k})(x - i\sqrt{k})##, but then the integral is not multiplicative. The next trick is to try Wolfram's ... ;).
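As a quick numeric sanity check of the substitution point (my own Python sketch, not part of the thread): setting ##u = x^2## only works if the ##du = 2x\,dx## factor is carried along; dropping it gives a wrong value.

```python
# Sketch: midpoint-rule check that the substitution u = x^2 needs du = 2x dx.
# Test integrand: x^2 + 1 on [0, 1]; the exact integral is 4/3.

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

exact = 4 / 3
direct = midpoint(lambda x: x**2 + 1, 0, 1)                   # integrate x^2 + 1 directly
naive = midpoint(lambda u: u + 1, 0, 1)                       # wrong: forgot du = 2x dx; gives 1.5
correct = midpoint(lambda u: (u + 1) / (2 * u**0.5), 0, 1)    # dx = du / (2*sqrt(u)); recovers 4/3

print(direct, naive, correct)
```

The midpoint rule is used so the ##1/\sqrt{u}## factor is never evaluated at ##u = 0##.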
 
  • Like
Likes TheoEndre
  • #3
Mark44
Hello everyone,
I've always had this question in my mind: can we convert a non-linear function into a system of linear functions?
I don't know if this actually exists in math (I searched a little bit, to be honest), but I'm really interested in this question because it would make integrals much easier (and probably other things).
For example:
##\int \left(x^2 \pm k\right)^n \, dx## where ##n##, ##k## are integers.
If we could just decrease the power of ##x^2## to the first degree, it would be much easier to evaluate than using a trig substitution.
I don't think this will get you anywhere. Suppose the integral were ##\int (x^2 + k)^1\,dx##; i.e., with ##n## in your formula set to 1. Replacing ##x^2## by ##x## gets you a very different antiderivative.
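This is easy to see numerically (my own sketch, not Mark44's): the antiderivative of ##x^2 + k## is ##x^3/3 + kx##, while the antiderivative of ##x + k## is ##x^2/2 + kx##, and the two disagree at every ##x \neq 0##.

```python
# Sketch: replacing x^2 by x changes the antiderivative.
# F is an antiderivative of x^2 + k; G is an antiderivative of x + k.
k = 1.0
F = lambda x: x**3 / 3 + k * x
G = lambda x: x**2 / 2 + k * x

for x in (0.5, 1.0, 2.0):
    print(x, F(x), G(x))   # the two columns differ at every x != 0
```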
 
  • Like
Likes TheoEndre
  • #4
All I can think of is either using a change of variable, ##x^2 = u##, but then you have ##2x\,dx = du## and you need to make the corresponding changes; or factoring ##x^2 \pm k## into ##(x + i\sqrt{k})(x - i\sqrt{k})##, but then the integral is not multiplicative. The next trick is to try Wolfram's ... ;).
I don't think this will get you anywhere. Suppose the integral were ##\int (x^2 + k)^1\,dx##; i.e., with ##n## in your formula set to 1. Replacing ##x^2## by ##x## gets you a very different antiderivative.
I see the problem now. But aren't non-linear functions a system of linear functions on different infinitesimal intervals? When I see the graph of ##x^2##, I always think of zooming into a really small interval (I like to denote that interval ##[a,a+h]##, where ##h## approaches ##0##); won't it be a line on that interval, even if it is a really small one?
 
  • #5
WWGD
Science Advisor
Gold Member
I see the problem now. But aren't non-linear functions a system of linear functions on different infinitesimal intervals? When I see the graph of ##x^2##, I always think of zooming into a really small interval (I like to denote that interval ##[a,a+h]##, where ##h## approaches ##0##); won't it be a line on that interval, even if it is a really small one?
I think the best you can do in this regard is to approximate _the local change of a function_ by a linear map when the function is differentiable at a point.
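The zoom-in idea can be made concrete with a few lines (my own sketch, not from the thread): near ##a##, ##x^2## is approximated by the line ##x \mapsto a^2 + 2a(x-a)##, and for this particular function the worst error on ##[a, a+h]## is exactly ##h^2##, so it vanishes much faster than the interval shrinks.

```python
# Sketch: on [a, a+h], f(x) = x^2 looks like the line f(a) + f'(a)*(x - a).
# For f(x) = x^2 the error at x = a + h is (a+h)^2 - a^2 - 2ah = h^2 exactly,
# so halving h quarters the error.
a = 1.0
f = lambda x: x * x
fprime = 2 * a                                    # derivative of x^2 at a

for h in (0.1, 0.05, 0.025):
    err = abs(f(a + h) - (f(a) + fprime * h))     # equals h**2 for this f
    print(h, err)
```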
 
  • Like
Likes TheoEndre and fresh_42
  • #6
fresh_42
I see now the problem of this. But, aren't non-linear functions a system of linear functions on different infinitesimal intervals? When I see the graph of ##x^2##, I always think of zooming into a really small interval (I like to denote that interval ##[a,a+h]## where ##h## approaches ##0##), won't it be a line at that interval? even if it was a really small one.
Yes. This property is called being locally Euclidean, and one of its applications is tangent spaces. It's the starting point of differential geometry.
 
  • Like
Likes TheoEndre
  • #7
Yes. This property is called being locally Euclidean, and one of its applications is tangent spaces. It's the starting point of differential geometry.
Ooh! I really thank you for this. I haven't studied differential geometry yet, so I didn't know it had these awesome topics. Thanks to you, I now have something to answer my questions!
And thanks to @WWGD and @Mark44 for their answers; they were really helpful.
 
  • Like
Likes WWGD
  • #8
WWGD
Science Advisor
Gold Member
Yes. This property is called being locally Euclidean, and one of its applications is tangent spaces. It's the starting point of differential geometry.
But is it the function itself or the change of the function that is considered linear?
 
  • #9
fresh_42
But is it the function itself or the change of the function that is considered linear?
When it comes to differentiation, there are so many different views of the same thing that it's confusing. I once listed some out of curiosity and stopped at ten without even mentioning the word slope. In the end it's always a directional derivative, a measure of change in a certain direction. The process itself (differentiation) is linear, and the change in the sense of slope defines a linear function as an approximation (##x \mapsto f\,'(a)\cdot x##) in the small neighborhood the OP mentioned. Of course the function itself doesn't change. And he already instinctively mentioned the limits of such an approach:
won't it be a line on that interval, even if it is a really small one?
i.e. the smaller the interval is, the more accurate the approximation will be, which means the two are never equal (except for linear functions); instead we have the famous ##f(x)=f(a) + f\,'(a)\cdot (x-a) + o(x-a)##. Now we can discuss the remainder. Our prof tortured us with remainder estimates of the Taylor series in all its variants (real one-dimensional, complex, real multi-dimensional), and of course I've forgotten all of them.
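The ##o(x-a)## claim is easy to check numerically (my own sketch; the choice ##f = \sin## at ##a = 0.5## is just an example): the remainder divided by ##h## tends to ##0## as ##h## does.

```python
import math

# Sketch: the remainder r(h) = f(a+h) - f(a) - f'(a)*h is o(h),
# i.e. r(h)/h -> 0 as h -> 0.  Here f = sin, so f' = cos.
a = 0.5
ratios = []
for h in (0.1, 0.01, 0.001):
    r = math.sin(a + h) - math.sin(a) - math.cos(a) * h
    ratios.append(abs(r) / h)

print(ratios)   # each ratio is roughly 10x smaller than the previous one
```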

I also forgot to add that this local condition doesn't make integration any easier. In the end we would find ourselves confronted with Riemann sums again ... until someone shows up and says: "Lebesgue - forget Riemann".
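Tying this back to the OP's integral (my own sketch, with the hypothetical choice ##n=2##, ##k=1##): a Riemann sum for ##\int_0^1 (x^2+1)^2\,dx## does converge to the exact value ##28/15##, but only slowly, which is exactly the "doesn't make integration easier" point.

```python
# Sketch: left Riemann sum for the OP's integrand (x^2 + k)^n with n = 2, k = 1.
# Exact: integral of (x^2+1)^2 = x^4 + 2x^2 + 1 on [0,1] is 1/5 + 2/3 + 1 = 28/15.

def riemann_left(f, a, b, steps):
    """Left-endpoint Riemann sum of f over [a, b] with the given number of steps."""
    h = (b - a) / steps
    return sum(f(a + i * h) for i in range(steps)) * h

f = lambda x: (x**2 + 1) ** 2
exact = 28 / 15

for steps in (10, 100, 1000):
    approx = riemann_left(f, 0, 1, steps)
    print(steps, approx, abs(approx - exact))   # the error shrinks only like 1/steps
```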
 
  • Like
Likes TheoEndre and WWGD
  • #10
WWGD
Science Advisor
Gold Member
When it comes to differentiation, there are so many different views of the same thing that it's confusing. I once listed some out of curiosity and stopped at ten without even mentioning the word slope. In the end it's always a directional derivative, a measure of change in a certain direction. The process itself (differentiation) is linear, and the change in the sense of slope defines a linear function as an approximation (##x \mapsto f\,'(a)\cdot x##) in the small neighborhood the OP mentioned. Of course the function itself doesn't change. And he already instinctively mentioned the limits of such an approach:

i.e. the smaller the interval is, the more accurate the approximation will be, which means the two are never equal (except for linear functions); instead we have the famous ##f(x)=f(a) + f\,'(a)\cdot (x-a) + o(x-a)##. Now we can discuss the remainder. Our prof tortured us with remainder estimates of the Taylor series in all its variants (real one-dimensional, complex, real multi-dimensional), and of course I've forgotten all of them.

I also forgot to add that this local condition doesn't make integration any easier. In the end we would find ourselves confronted with Riemann sums again ... until someone shows up and says: "Lebesgue - forget Riemann".
I see, we can then describe the values of the function nearby thanks to the approximation given by the differential. Yes, and I agree about the confusion over all the different definitions.
 
  • #11
WWGD
Science Advisor
Gold Member
When it comes to differentiation, there are so many different views of the same thing that it's confusing. I once listed some out of curiosity and stopped at ten without even mentioning the word slope. In the end it's always a directional derivative, a measure of change in a certain direction. The process itself (differentiation) is linear, and the change in the sense of slope defines a linear function as an approximation (##x \mapsto f\,'(a)\cdot x##) in the small neighborhood the OP mentioned. Of course the function itself doesn't change. And he already instinctively mentioned the limits of such an approach:

i.e. the smaller the interval is, the more accurate the approximation will be, which means the two are never equal (except for linear functions); instead we have the famous ##f(x)=f(a) + f\,'(a)\cdot (x-a) + o(x-a)##. Now we can discuss the remainder. Our prof tortured us with remainder estimates of the Taylor series in all its variants (real one-dimensional, complex, real multi-dimensional), and of course I've forgotten all of them.

I also forgot to add that this local condition doesn't make integration any easier. In the end we would find ourselves confronted with Riemann sums again ... until someone shows up and says: "Lebesgue - forget Riemann".
Ah, yes, I fell into this confusion myself: it is the function as approximated, within the tangent plane. I always fall for it.
 
