Converting Non-Linear Functions to Linear: Exploring the Possibility

In summary, the conversation explores whether a non-linear function can be converted into a system of linear functions to make integration easier. Various approaches, such as a change of variables or factoring, are discussed, but it is concluded that the approach does not succeed. The idea of being locally Euclidean and its role in differential geometry is mentioned, and there is a discussion of whether it is the function itself or the local change of the function that is linear in differentiation.
  • #1
TheoEndre
Hello everyone,
I've always had this question in my mind: can we convert a non-linear function into a system of linear functions?
I don't know if this is actually something that exists in math (I searched a little, to be honest), but I'm really interested in the question because it would make integration much easier (and probably other things).
For example:
##\int \left(x^2 \pm k\right)^n dx## where ##n##, ##k## are integers
If we could just reduce the power of ##x^2## to the first degree, the integral would be much easier to find than by using a trig substitution.
 
  • #2
All I can think of is either a change of variable, ##x^2 = u##, but then you have ##2x\,dx = du## and need to adjust the rest accordingly, or factoring ##x^2 \pm k## into linear factors, e.g. ##x^2 + k = (x + i\sqrt k)(x - i\sqrt k)##, but then the integral is not multiplicative. The next trick after that is to try Wolfram's ... ;).
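To make the obstacle concrete, here is what that substitution produces, written out as one step (a sketch only, taking ##x > 0## so that ##x = \sqrt u##): with ##u = x^2## and ##du = 2x\,dx## we get ##dx = \frac{du}{2\sqrt u}##, hence ##\int \left(x^2 \pm k\right)^n dx = \int \left(u \pm k\right)^n \frac{du}{2\sqrt u}##. The integrand still carries the non-linear factor ##\frac{1}{\sqrt u}##, so the degree in the new variable hasn't genuinely been lowered.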
 
  • #3
TheoEndre said:
Hello everyone,
I've always had this question in my mind: can we convert a non-linear function into a system of linear functions?
I don't know if this is actually something that exists in math (I searched a little, to be honest), but I'm really interested in the question because it would make integration much easier (and probably other things).
For example:
##\int \left(x^2 \pm k\right)^n dx## where ##n##, ##k## are integers
If we could just reduce the power of ##x^2## to the first degree, the integral would be much easier to find than by using a trig substitution.
I don't think this will get you anywhere. Suppose the integral were ##\int (x^2 + k)^1\,dx##, i.e., with ##n## in your formula set to 1. Replacing ##x^2## by ##x## gets you a very different antiderivative.
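Spelled out for that case (a quick check added here, not part of the original post): ##\int (x^2 + k)\,dx = \tfrac{x^3}{3} + kx + C##, whereas ##\int (x + k)\,dx = \tfrac{x^2}{2} + kx + C##. The two antiderivatives already differ in their leading terms, so the replacement changes the problem rather than simplifying it.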
 
  • #4
WWGD said:
All I can think of is either a change of variable, ##x^2 = u##, but then you have ##2x\,dx = du## and need to adjust the rest accordingly, or factoring ##x^2 \pm k## into linear factors, e.g. ##x^2 + k = (x + i\sqrt k)(x - i\sqrt k)##, but then the integral is not multiplicative. The next trick after that is to try Wolfram's ... ;).

Mark44 said:
I don't think this will get you anywhere. Suppose the integral were ##\int (x^2 + k)^1\,dx##, i.e., with ##n## in your formula set to 1. Replacing ##x^2## by ##x## gets you a very different antiderivative.

I see the problem with this now. But aren't non-linear functions a system of linear functions on different infinitesimal intervals? When I look at the graph of ##x^2##, I always think of zooming into a really small interval (I like to denote that interval ##[a, a+h]##, where ##h## approaches ##0##); won't it be a line on that interval, even if a really short one?
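A small numerical sketch of this "zooming in" picture (my own illustration in Python; the base point ##a = 1## and the shrinking widths ##h## are arbitrary choices): it measures how far ##x^2## strays from its tangent line at ##a## on ##[a, a+h]##, and the gap shrinks like ##h^2##, i.e. much faster than the interval itself, which is the precise sense in which the graph "becomes a line".

```python
import numpy as np

def max_deviation_from_tangent(a, h, num=1000):
    """Largest gap between x**2 and its tangent line at a over [a, a + h]."""
    x = np.linspace(a, a + h, num)
    parabola = x**2
    tangent = a**2 + 2*a*(x - a)  # linear approximation of x**2 at x = a
    return np.max(np.abs(parabola - tangent))

a = 1.0
for h in [1.0, 0.1, 0.01, 0.001]:
    dev = max_deviation_from_tangent(a, h)
    # For x**2 the gap is exactly h**2, so dev/h -> 0 as the interval shrinks.
    print(f"h = {h:7.3f}   max gap = {dev:.3e}   gap/h = {dev / h:.3e}")
```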
 
  • #5
TheoEndre said:
I see the problem with this now. But aren't non-linear functions a system of linear functions on different infinitesimal intervals? When I look at the graph of ##x^2##, I always think of zooming into a really small interval (I like to denote that interval ##[a, a+h]##, where ##h## approaches ##0##); won't it be a line on that interval, even if a really short one?
I think the best you can do in this regard is to approximate _the local change of a function_ by a linear map when the function is differentiable at a point.
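To give one concrete instance of that statement (my example): for ##f(x) = x^2## at a point ##a##, the linear map is ##h \mapsto 2a \cdot h##. The actual local change is ##f(a+h) - f(a) = 2ah + h^2##, which agrees with the linear map up to the error ##h^2##, and that error vanishes faster than ##h## itself.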
 
  • #6
TheoEndre said:
I see the problem with this now. But aren't non-linear functions a system of linear functions on different infinitesimal intervals? When I look at the graph of ##x^2##, I always think of zooming into a really small interval (I like to denote that interval ##[a, a+h]##, where ##h## approaches ##0##); won't it be a line on that interval, even if a really short one?
Yes. This property is called being locally Euclidean, and one of its applications is tangent spaces. It's the starting point of differential geometry.
 
  • #7
fresh_42 said:
Yes. This property is called being locally Euclidean, and one of its applications is tangent spaces. It's the starting point of differential geometry.
Ooh! Thank you so much for this. I haven't studied differential geometry yet, so I didn't know it had these awesome topics. Thanks to you, I now have something to answer my questions with!
And thanks to @WWGD and @Mark44 for their answers; they were really helpful.
 
  • #8
fresh_42 said:
Yes. This property is called being locally Euclidean, and one of its applications is tangent spaces. It's the starting point of differential geometry.
But is it the function itself or the change of the function that is considered linear?
 
  • #9
WWGD said:
But is it the function itself or the change of the function that is considered linear?
When it comes to differentiation, there are so many different views of the same thing that it gets confusing. I once listed some out of curiosity and stopped at ten without even mentioning the word slope. In the end it's always a directional derivative, a measure of change in a certain direction. The process itself (differentiation) is linear, and the change in the sense of slope defines a linear function as an approximation (##x \mapsto f\,'(a)\cdot x##) on the small neighborhood the OP mentioned. Of course the function itself doesn't change. And he already instinctively mentioned the limits of such an approach:
TheoEndre said:
won't it be a line on that interval, even if a really short one?
i.e. the smaller the interval, the more accurate the approximation will be, which means the two are never equal (except for linear functions); instead we get the famous ##f(x) = f(a) + f\,'(a)\cdot (x-a) + o(x-a)##. Now we can discuss the remainder. Our prof tortured us with remainder estimates for the Taylor series in all variants (real one-dimensional, complex, real multi-dimensional), and of course I've forgotten all of them.

I also forgot to add that this local condition doesn't make integration any easier. In the end we would find ourselves confronted with Riemann sums again ... until someone shows up and says: "Lebesgue - forget Riemann".
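Here is a small numerical sketch of that last remark (my own illustration in Python; the choice of ##\int_0^1 x^2\,dx## and of the partition sizes is arbitrary): replacing ##x^2## on each subinterval by its tangent line at the left endpoint and integrating the linear pieces exactly just gives another Riemann-type sum, which only approaches the exact value ##\tfrac13## as the partition is refined.

```python
import numpy as np

def integral_of_tangent_pieces(n):
    """Replace x**2 on each of n subintervals of [0, 1] by its tangent line at
    the left endpoint, integrate each linear piece exactly, and add them up."""
    edges = np.linspace(0.0, 1.0, n + 1)
    total = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        h = b - a
        # Tangent line at a is a**2 + 2*a*(x - a); its exact integral over
        # [a, b] is a**2 * h + a * h**2.
        total += a**2 * h + a * h**2
    return total

exact = 1.0 / 3.0
for n in [10, 100, 1000]:
    approx = integral_of_tangent_pieces(n)
    print(f"n = {n:5d}   approx = {approx:.6f}   error = {exact - approx:.2e}")
```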
 
  • #10
fresh_42 said:
When it comes to differentiation, there are so many different views of the same thing that it gets confusing. I once listed some out of curiosity and stopped at ten without even mentioning the word slope. In the end it's always a directional derivative, a measure of change in a certain direction. The process itself (differentiation) is linear, and the change in the sense of slope defines a linear function as an approximation (##x \mapsto f\,'(a)\cdot x##) on the small neighborhood the OP mentioned. Of course the function itself doesn't change. And he already instinctively mentioned the limits of such an approach:

i.e. the smaller the interval, the more accurate the approximation will be, which means the two are never equal (except for linear functions); instead we get the famous ##f(x) = f(a) + f\,'(a)\cdot (x-a) + o(x-a)##. Now we can discuss the remainder. Our prof tortured us with remainder estimates for the Taylor series in all variants (real one-dimensional, complex, real multi-dimensional), and of course I've forgotten all of them.

I also forgot to add that this local condition doesn't make integration any easier. In the end we would find ourselves confronted with Riemann sums again ... until someone shows up and says: "Lebesgue - forget Riemann".
I see, we can then describe the value of the function nearby thanks to the approximation given by the differential. Yes, and I agree about the confusion regarding all the different definitions.
 
  • #11
fresh_42 said:
When it comes to differentiation, there are so many different views of the same thing that it gets confusing. I once listed some out of curiosity and stopped at ten without even mentioning the word slope. In the end it's always a directional derivative, a measure of change in a certain direction. The process itself (differentiation) is linear, and the change in the sense of slope defines a linear function as an approximation (##x \mapsto f\,'(a)\cdot x##) on the small neighborhood the OP mentioned. Of course the function itself doesn't change. And he already instinctively mentioned the limits of such an approach:

i.e. the smaller the interval, the more accurate the approximation will be, which means the two are never equal (except for linear functions); instead we get the famous ##f(x) = f(a) + f\,'(a)\cdot (x-a) + o(x-a)##. Now we can discuss the remainder. Our prof tortured us with remainder estimates for the Taylor series in all variants (real one-dimensional, complex, real multi-dimensional), and of course I've forgotten all of them.

I also forgot to add that this local condition doesn't make integration any easier. In the end we would find ourselves confronted with Riemann sums again ... until someone shows up and says: "Lebesgue - forget Riemann".
Ah, yes, I fell into this confusion myself: this is the function as approximated, within the tangent plane. I always fall for it.
 

1. What is the purpose of converting non-linear functions to linear?

The purpose of converting non-linear functions to linear is to make them easier to analyze and manipulate. Linear functions have a constant rate of change, which allows for simpler calculations and predictions.

2. How do you determine if a function is non-linear?

A function is non-linear if it does not have a constant rate of change; its slope varies, so its graph is not a straight line. Non-linear functions typically involve powers other than one, products of variables, or other non-linear terms.

3. What is the process for converting a non-linear function to linear?

One common process is to transform the variables so that the relationship between the transformed variables becomes linear, typically by taking logarithms or roots of one or both sides. Once the relationship is in linear form, it can be graphed and analyzed using linear methods; the logarithmic case is sketched below.
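As a hedged sketch of the logarithm technique mentioned above (the data, the exponent ##b = 1.7##, and the coefficient ##a = 2.5## are synthetic values chosen only for illustration): a power law ##y = a x^b## becomes the linear relation ##\log y = \log a + b \log x##, which an ordinary least-squares line fit can handle.

```python
import numpy as np

# Synthetic power-law data y = a * x**b with a little multiplicative noise.
rng = np.random.default_rng(0)
a_true, b_true = 2.5, 1.7
x = np.linspace(1.0, 10.0, 50)
y = a_true * x**b_true * np.exp(rng.normal(0.0, 0.02, size=x.size))

# Taking logs turns y = a * x**b into log(y) = log(a) + b * log(x),
# a straight line that a degree-1 polynomial fit recovers.
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(f"estimated exponent b ~ {slope:.3f}, coefficient a ~ {np.exp(intercept):.3f}")
```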

4. Are there any limitations to converting non-linear functions to linear?

Yes, there are limitations to converting non-linear functions to linear. Some functions are inherently non-linear and cannot be transformed into a linear form. Additionally, converting a non-linear function to linear may result in a loss of information or accuracy, so it is important to carefully consider the purpose and implications of the conversion.

5. What are some real-world applications of converting non-linear functions to linear?

Converting non-linear functions to linear can be useful in various scientific fields, such as physics, chemistry, and biology. For example, it can help in analyzing the relationship between variables in a chemical reaction or predicting the growth of a population over time. It is also commonly used in data analysis and modeling to simplify complex relationships and make predictions based on linear trends.
