Finding the intersection of two functions via linearization


Homework Help Overview

The discussion revolves around finding the intersection points of two functions, f(x) = tan(x^2) - 1 and g(x) = ln((x+1)^3)/3, using linearization techniques. Participants are exploring how to apply linearization to approximate the intersection points and assess the relative error in their estimates.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning, Assumption checking

Approaches and Questions Raised

  • Participants discuss the concept of linearization and its application to the functions in question. There are attempts to clarify how to define linearization in this context, with some suggesting the use of derivatives and educated guesses for initial values. Others propose using Taylor series for approximation. Newton's Method is also mentioned as a potential approach for finding the intersections.

Discussion Status

The conversation is active, with participants sharing insights and methods for approaching the problem. Some have provided specific equations related to Newton's Method and expressed confidence in using computational tools like Maple and Wolfram Alpha to verify their results. However, there is no explicit consensus on the best approach, and various interpretations of linearization are being explored.

Contextual Notes

Participants are navigating the challenges of determining relative error and the appropriateness of linearization for this problem. There are mentions of potential limitations in the methods discussed, such as the risk of divergence in Newton's Method, and the need for careful selection of initial values.

srvs
At least I think it's via linearization.

Let

[itex]f(x) = \tan (x^2) - 1[/itex]

and

[itex]g(x) = \frac{\ln((x+1)^3)}{3}[/itex]

Find the smallest positive and negative intersection with a relative error of less than 0.001.

I don't know. You can linearize one or both, yeah, but you don't have any analytical value to compare it with, so how would you go about determining the relative error? Can anyone put me on the right track?
 
The concept of linearization I know of is one where you use the derivative to do a linear approximation to the function at a point. But I don't see how that fits here. How are you defining linearization?
 
Well, eh, the same way. But I figured that because you can't simply solve [itex]f(x) = g(x)[/itex] directly, you'd have to make an educated guess from a picture of where they intersect, then do a linear approximation at a point nearby and solve the much easier linear equation.

But yeah, I have no idea how you would determine the error. Perhaps it's not even via linearization as you say. I honestly have no idea how to begin. Perhaps with Taylor series?
 
First you have to linearize about a given point. Choose some reasonable value of x and linearize the two functions about that x value. Find where the linearized equations intersect. Use that value to linearize again. You can be sure the solution is within a given range of x if two consecutive solutions are. If you were to subtract the two equations and use this method to find where their difference is 0, you would be using "Newton's method".
 
Ahh, that makes sense. Thank you!

In case anyone is interested:

[itex]0 = f'(x_0)(x - x_0) + f(x_0) - (g'(x_0)(x - x_0) + g(x_0))[/itex]

So that

[itex]x_1 = \frac{f'(x_0)x_0 - f(x_0) - g'(x_0)x_0 + g(x_0)}{f'(x_0) - g'(x_0)}[/itex]

I then plugged that into Maple; Wolfram Alpha confirms it gives the correct intersections.
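The update above can be sketched in Python as well (my own translation; the thread used Maple, and the starting guess of x = 1 is simply read off a plot of the two functions):

```python
import math

def f(x):  return math.tan(x * x) - 1.0
def g(x):  return math.log((x + 1.0) ** 3) / 3.0
def fp(x): return 2.0 * x / math.cos(x * x) ** 2   # f'(x) = 2x sec^2(x^2)
def gp(x): return 1.0 / (x + 1.0)                  # g'(x) = d/dx ln(x+1)

def next_x(x0):
    # Intersection of the two tangent lines at x0 (the formula above).
    return (fp(x0) * x0 - f(x0) - gp(x0) * x0 + g(x0)) / (fp(x0) - gp(x0))

x = 1.0  # educated guess for the smallest positive intersection
for _ in range(50):
    x_new = next_x(x)
    done = abs(x_new - x) < 0.001 * abs(x_new)  # relative error below 0.001
    x = x_new
    if done:
        break
```

In practice a couple of iterations from x = 1 already land well inside the 0.001 relative-error tolerance.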
 
srvs said:
Ahh, that makes sense. Thank you!

In case anyone is interested:

[itex]0 = f'(x_0)(x - x_0) + f(x_0) - (g'(x_0)(x - x_0) + g(x_0))[/itex]

So that

[itex]x_1 = \frac{f'(x_0)x_0 - f(x_0) - g'(x_0)x_0 + g(x_0)}{f'(x_0) - g'(x_0)}[/itex]

I then plugged that into Maple; Wolfram Alpha confirms it gives the correct intersections.

Why do you use Maple and then use Wolfram Alpha? Maple can do the whole job just fine.

You are solving the equation F(x) = 0, where F(x) = f(x) - g(x), and are using Newton's Method
[tex]x_{n+1} = x_n - \frac{F(x_n)}{F'(x_n)},\: n=0,1,2,\ldots[/tex]
You need to be careful: Newton's Method can sometimes diverge instead of converging to the root of F(x) = 0. However, if the derivative F'(r) ≠ 0 at the root r of F(x) = 0 (so that x = r is not a multiple root), and if we start with x0 sufficiently near r, then we will get very rapid convergence to r.

RGV
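A minimal Python sketch of this, with a relative-error stop matching the 0.001 tolerance in the problem and an iteration cap as a crude guard against the divergence just mentioned (the starting guesses are mine, read off a plot, not values from the thread):

```python
import math

def F(x):  return math.tan(x * x) - 1.0 - math.log((x + 1.0) ** 3) / 3.0
def Fp(x): return 2.0 * x / math.cos(x * x) ** 2 - 1.0 / (x + 1.0)

def newton(x, tol=0.001, max_iter=50):
    # x_{n+1} = x_n - F(x_n)/F'(x_n), stopping on a small relative step.
    for _ in range(max_iter):
        step = F(x) / Fp(x)
        x -= step
        if abs(step) < tol * abs(x):
            return x
    raise RuntimeError("Newton's method did not converge")

neg = newton(-0.5)  # smallest-magnitude negative intersection
pos = newton(1.0)   # smallest positive intersection
```

Note that F'(x) stays well away from zero near both of these roots, which is exactly the condition for the rapid convergence described above.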
 
Ray Vickson said:
Why do you use Maple and then use Wolfram Alpha? Maple can do the whole job just fine.

You are solving the equation F(x) = 0, where F(x) = f(x) - g(x), and are using Newton's Method
[tex]x_{n+1} = x_n - \frac{F(x_n)}{F'(x_n)},\: n=0,1,2,\ldots[/tex]
You need to be careful: Newton's Method can sometimes diverge instead of converging to the root of F(x) = 0. However, if the derivative F'(r) ≠ 0 at the root r of F(x) = 0 (so that x = r is not a multiple root), and if we start with x0 sufficiently near r, then we will get very rapid convergence to r.

RGV
To check whether it was correct. Maple did it just fine indeed, but I wanted to make sure :) Thank you everyone for your comments.
 
srvs said:
To check whether it was correct. Maple did it just fine indeed, but I wanted to make sure :) Thank you everyone for your comments.

What I really mean is: why would you trust Wolfram Alpha but not trust Maple?

RGV
 
I'm betting Ray Vickson is Canadian :) . Any takers?
 
Bacle2 said:
I'm betting Ray Vickson is Canadian :) . Any takers?

Well, you're right, and before retiring I spent > 30 years at the University of Waterloo, where Maple was born. However: what does that have to do with the question I asked?

RGV
 
Ray Vickson said:
What I really mean is: why would you trust Wolfram Alpha but not trust Maple?

RGV
Because I wrote the Maple code to find the solution whereas with WolframAlpha I can just input [itex]f(x) = g(x)[/itex] and it'll generate it for me. I trust Maple to execute my code properly, but I don't assume my code to be correct - and I assume Wolfram's is :)
 
srvs said:
Because I wrote the Maple code to find the solution whereas with WolframAlpha I can just input [itex]f(x) = g(x)[/itex] and it'll generate it for me. I trust Maple to execute my code properly, but I don't assume my code to be correct - and I assume Wolfram's is :)

When I try Wolfram Alpha it does not generate code for me; it just gives a list of numerical solutions. When I do it in Maple (using 'fsolve' instead of 'solve') it gives me solutions one at a time; I can get a list of solutions either by using the 'avoid' option, or by giving a list of input intervals. For example:
S0:=fsolve(eq,x);
S0 := 0.
S1:=fsolve(eq,x,avoid={x=S0});
S1 := 0.6977070973
S2:=fsolve(eq,x,avoid={x=S0,x=S1});
S2 := 3.223162525
This has missed the solution at 1.993, but if we narrow the search intervals it will work.
S1:=fsolve(eq,x=S0+.01..S0+1);
S1 := 0.6977070973
S2:=fsolve(eq,x=S1+.01..S1+1);
--> no solution, so increase the interval
S2:=fsolve(eq,x=S1+.01..S1+2);
S2 := 1.993218238
S3:=fsolve(eq,x=S2+.01..S2+1);
S3 := 2.683224142
etc.
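The same narrowing-the-interval idea can be sketched in Python (my own analogue of the fsolve session, not from the thread): scan a grid for sign changes of F(x) = f(x) - g(x) and bisect each bracket. One caution specific to this problem: tan(x^2) has poles (the first near x = 1.2533, where x^2 = pi/2), and a sign change across a pole is not a root, so the scan window below deliberately stops short of it.

```python
import math

def F(x):
    return math.tan(x * x) - 1.0 - math.log((x + 1.0) ** 3) / 3.0

def bisect(a, b, tol=1e-8):
    # Plain bisection on a sign-change bracket [a, b].
    fa = F(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * F(m) <= 0:
            b = m
        else:
            a, fa = m, F(m)
    return 0.5 * (a + b)

# Scan [-0.9, 1.2] in steps of 0.1; the window also respects x > -1,
# which the logarithm requires.
roots = []
a, step = -0.9, 0.1
while a < 1.2 - step / 2:
    if F(a) * F(a + step) < 0:
        roots.append(bisect(a, a + step))
    a += step
```

This picks up the smallest negative and smallest positive intersections in one pass.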

In fact, before setting out on any halfway complicated equation-solving task it is a good idea to first plot the functions involved. This will easily allow you to see what is going on in this example.

We could, of course, get more accuracy by increasing the Digits setting:
Digits:=20;
fsolve(eq,x=0.001..1);
0.69770709730287239774

or
Digits:=40;
fsolve(eq,x=0.01..1);
0.6977070973028723977363487530371501244775

Admittedly, Wolfram Alpha (in *this* example) is a bit more convenient, because it gives a list of solutions right away, while in Maple we have to work at it a bit.

BTW: both Wolfram Alpha and Maple likely use a combination of methods, such as Newton's Method, Regula Falsi, and the Secant Method, adaptively tailoring the choice to the perceived behaviour of the problem. However, the default is usually Newton's Method.
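For the curious, here is a short sketch of the secant method in Python, my own illustration rather than what either system actually runs: it is Newton's method with the derivative replaced by a finite-difference slope through the last two iterates, so no F'(x) is needed.

```python
import math

def F(x):
    return math.tan(x * x) - 1.0 - math.log((x + 1.0) ** 3) / 3.0

def secant(x0, x1, tol=1e-9, max_iter=50):
    # Secant update: the chord through (x0, F(x0)) and (x1, F(x1))
    # stands in for the tangent line of Newton's method.
    for _ in range(max_iter):
        f0, f1 = F(x0), F(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("secant method did not converge")

root = secant(1.0, 1.1)  # bracketing guesses around the positive root
```

The trade-off is one fewer function to code (no derivative) against slightly slower, superlinear rather than quadratic, convergence.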

RGV
 
