Finding the intersection of two functions via linearization

In summary: the intersections are found by linearizing both functions about a guess, solving for where the two tangent lines cross, and iterating; as pointed out later in the thread, this is equivalent to applying Newton's method to F(x) = f(x) - g(x).
  • #1
srvs
At least I think it's via linearization.

Let

[itex]f(x) = \tan (x^2) - 1[/itex]

and

[itex]g(x) = \frac{\ln((x+1)^3)}{3}[/itex]

Find the smallest positive and the smallest negative intersection, each with a relative error of less than 0.001.

I don't know. You can linearize one or both functions, sure, but you don't have any analytical value to compare with, so how would you go about determining the relative error? Can anyone put me on the right track?
 
  • #2
The concept of linearization I know of is one where you use the derivative to do a linear approximation to the function at a point. But I don't see how that fits here. How are you defining linearization?
 
  • #3
Well, eh, the same way. But I figured that since you can't simply solve [itex]f(x) = g(x)[/itex] directly, you'd have to make an educated guess from a picture of where they intersect, do a linear approximation at a nearby point, and then solve the much easier linear equation.

But yeah, I have no idea how you would determine the error. Perhaps it's not even via linearization as you say. I honestly have no idea how to begin. Perhaps with Taylor series?
 
  • #4
First you have to linearize about a given point. Choose some reasonable value of x and linearize the two functions about that x value. Find where the linearized equations intersect. Use that value to linearize again. You can be sure the solution is within a given range of x if two consecutive solutions are. If you were to subtract the two equations and use this method to find where their difference is 0, you would be using "Newton's method".
 
  • #5
Ahh, that makes sense. Thank you!

In case anyone is interested:

[itex]0 = f'(x_0)(x - x_0) + f(x_0) - (g'(x_0)(x - x_0) + g(x_0))[/itex]

So that

[itex]x_1 = \frac{f'(x_0)x_0 - f(x_0) - g'(x_0)x_0 + g(x_0)}{f'(x_0) - g'(x_0)}[/itex]

I then plugged that into Maple; Wolfram Alpha confirms it gives the correct intersections.
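
A minimal Maple sketch of that iteration (the starting guess and the iteration cap are my own assumptions, not from the thread; adjust the guess to pick out the negative intersection):

f := x -> tan(x^2) - 1;
g := x -> ln((x+1)^3)/3;
df := D(f);  dg := D(g);          # derivative operators
x0 := 1.0;                        # assumed starting guess, read off a plot
to 20 do
  # intersection of the two tangent lines at x0 (the update formula above)
  x1 := evalf( (df(x0)*x0 - f(x0) - dg(x0)*x0 + g(x0)) / (df(x0) - dg(x0)) );
  if abs(x1 - x0) < 0.001*abs(x1) then break end if;   # relative error < 0.001
  x0 := x1;
end do;
x1;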
 
  • #6
srvs said:
Ahh, that makes sense. Thank you!

In case anyone is interested:

[itex]0 = f'(x_0)(x - x_0) + f(x_0) - (g'(x_0)(x - x_0) + g(x_0))[/itex]

So that

[itex]x_1 = \frac{f'(x_0)x_0 - f(x_0) - g'(x_0)x_0 + g(x_0)}{f'(x_0) - g'(x_0)}[/itex]

I then plugged that into Maple; Wolfram Alpha confirms it gives the correct intersections.

Why do you use Maple and then use Wolfram Alpha? Maple can do the whole job just fine.

You are solving the equation F(x) = 0, where F(x) = f(x) - g(x), and are using Newton's Method
[tex] x_{n+1} = x_n - \frac{F(x_n)}{F'(x_n)},\: n=0,1,2,\ldots [/tex]
You need to be careful: Newton's Method can sometimes diverge instead of converging to the root of F(x) = 0. However, if the derivative F'(r) ≠ 0 at the root r of F(x) = 0 (so that x = r is not a multiple root), and if we start with [itex]x_0[/itex] sufficiently near r, then we will get very rapid convergence to r.
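
For reference, writing F = f − g and F′ = f′ − g′ and simplifying shows that this Newton step is exactly the update derived in post #5:

[tex] x_1 = x_0 - \frac{f(x_0) - g(x_0)}{f'(x_0) - g'(x_0)} = \frac{f'(x_0)x_0 - f(x_0) - g'(x_0)x_0 + g(x_0)}{f'(x_0) - g'(x_0)} [/tex]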

RGV
 
  • #7
Ray Vickson said:
Why do you use Maple and then use Wolfram Alpha? Maple can do the whole job just fine.

You are solving the equation F(x) = 0, where F(x) = f(x) - g(x), and are using Newton's Method
[tex] x_{n+1} = x_n - \frac{F(x_n)}{F'(x_n)},\: n=0,1,2,\ldots [/tex]
You need to be careful: Newton's Method can sometimes diverge instead of converging to the root of F(x) = 0. However, if the derivative F'(r) ≠ 0 at the root r of F(x) = 0 (so that x = r is not a multiple root), and if we start with [itex]x_0[/itex] sufficiently near r, then we will get very rapid convergence to r.

RGV
To check whether it was correct. Maple did indeed do it just fine, but I wanted to make sure :) Thank you, everyone, for your comments.
 
  • #8
srvs said:
To check whether it was correct. Maple did indeed do it just fine, but I wanted to make sure :) Thank you, everyone, for your comments.

What I really mean is: why would you trust Wolfram Alpha but not trust Maple?

RGV
 
  • #9
I'm betting Ray Vickson is Canadian :) . Any takers?
 
  • #10
Bacle2 said:
I'm betting Ray Vickson is Canadian :) . Any takers?

Well, you're right, and before retiring I spent > 30 years at the University of Waterloo, where Maple was born. However: what does that have to do with the question I asked?

RGV
 
  • #11
Ray Vickson said:
What I really mean is: why would you trust Wolfram Alpha but not trust Maple?

RGV
Because I wrote the Maple code to find the solution, whereas with Wolfram Alpha I can just input [itex]f(x) = g(x)[/itex] and it'll generate the solution for me. I trust Maple to execute my code properly, but I don't assume my own code is correct - and I assume Wolfram's is :)
 
  • #12
srvs said:
Because I wrote the Maple code to find the solution, whereas with Wolfram Alpha I can just input [itex]f(x) = g(x)[/itex] and it'll generate the solution for me. I trust Maple to execute my code properly, but I don't assume my own code is correct - and I assume Wolfram's is :)

When I try Wolfram Alpha it does not generate code for me; it just gives a list of numerical solutions. When I do it in Maple (using 'fsolve' instead of 'solve') it gives me solutions one at a time; I can get a list of solutions either by using the 'avoid' option, or by giving a list of input intervals. For example:
S0:=fsolve(eq,x);
S0 := 0.
S1:=fsolve(eq,x,avoid={x=S0});
S1 := 0.6977070973
S2:=fsolve(eq,x,avoid={x=S0,x=S1});
S2 := 3.223162525
This has missed the solution at 1.993, but if we narrow the search intervals it will work.
S1:=fsolve(eq,x=S0+.01..S0+1);
S1 := 0.6977070973
S2:=fsolve(eq,x=S1+.01..S1+1);
--> no solution, so increase the interval
S2:=fsolve(eq,x=S1+.01..S1+2);
S2 := 1.993218238
S3:=fsolve(eq,x=S2+.01..S2+1);
S3 := 2.683224142
etc.

In fact, before setting out on any halfway complicated equation-solving task it is a good idea to first plot the functions involved. This will easily allow you to see what is going on in this example.
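
For example, a one-line plotting sketch (the plotting ranges are an assumption, chosen to stay inside the domain of ln((x+1)^3) and to show the relevant crossings):

plot([tan(x^2) - 1, ln((x+1)^3)/3], x = -0.99 .. 3, -5 .. 5);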

We could, of course, get more accuracy by increasing the 'digits' count:
Digits:=20;
fsolve(eq,x=0.001..1);
0.69770709730287239774

or
Digits:=40;
fsolve(eq,x=0.01..1);
0.6977070973028723977363487530371501244775

Admittedly, Wolfram Alpha (in *this* example) is a bit more convenient, because it gives a list of solutions right away, while in Maple we have to work at it a bit.

BTW: both Wolfram Alpha and Maple likely use a combination of methods, such as Newton's Method, Regula Falsi, the Secant Method or several others in an adaptive way, tailoring the method to perceived problem behaviour. However, the default is usually Newton's Method.
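
For illustration, a bare-bones secant-method sketch in Maple (the procedure name, the stopping rule, and the starting points in the example call are assumptions, not something from this thread):

Secant := proc(F, a, b, tol)
  local x0, x1, x2;
  x0 := evalf(a);  x1 := evalf(b);
  while abs(x1 - x0) > tol*abs(x1) do
    # replace Newton's derivative by the slope through the last two iterates
    x2 := x1 - F(x1)*(x1 - x0)/(F(x1) - F(x0));
    x0 := x1;  x1 := x2;
  end do;
  return x1;
end proc;
# example call (starting points assumed to lie near a crossing):
# Secant(x -> tan(x^2) - 1 - ln((x+1)^3)/3, 0.9, 1.1, 0.001);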

RGV
 

1. What is linearization?

Linearization is a mathematical process used to approximate the behavior of a non-linear function by replacing it with a linear function. This is done by finding the tangent line at a specific point on the non-linear function.
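
In symbols, the tangent-line (linear) approximation of a function f about a point x = a is

[tex] L(x) = f(a) + f'(a)(x - a). [/tex]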

2. Why is linearization useful?

Linearization is useful because it allows us to simplify complex functions and make them easier to work with. It also helps us approximate the behavior of a function in a specific region, which can be useful for making predictions or solving problems.

3. How do you find the intersection of two functions via linearization?

To find the intersection of two functions using linearization, you first need to find the linear approximation of each function. Then, set the two linear approximations equal to each other and solve for the point where they intersect. This point will be an approximation of the intersection of the two original functions.
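
Concretely, if both functions are linearized about the same point [itex]x_0[/itex], one solves

[tex] f(x_0) + f'(x_0)(x - x_0) = g(x_0) + g'(x_0)(x - x_0) [/tex]

for x, which gives exactly the update formula used in post #5 above.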

4. What are the limitations of using linearization to find the intersection of two functions?

Linearization can only provide an approximation of the intersection of two functions, not the exact solution. This is because linearization assumes that the two functions are linear in the region of interest, which may not always be the case. Additionally, linearization only works well for functions that are differentiable at the point of interest.

5. Can linearization be used for any type of function?

No, linearization is only applicable to functions that are differentiable. If a function is not differentiable at a certain point, linearization cannot be used to find the intersection with another function at that point. Additionally, linearization may not be accurate for functions with more complex behavior, such as oscillations or sharp turns.
