# Finding the intersection of two functions via linearization

At least I think it's via linearization.

Let

$f(x) = \tan (x^2) - 1$

and

$g(x) = \frac{\ln((x+1)^3)}{3}$

Find the smallest positive and negative intersection with a relative error of less than 0.001.

I don't know. You can linearize one or both, yeah, but you don't have any analytical value to compare with, so how would you determine the relative error? Can anyone put me on the right track?

Bacle2
The concept of linearization I know of is one where you use the derivative

to do a linear approximation to the function at a point. But I don't see how that

fits here. How are you defining linearization?

Well, eh, the same way. But I figured that since you can't simply solve $f(x) = g(x)$, you'd have to make an educated guess from a picture of where they intersect, then do a linear approximation at a nearby point and solve the much easier linear equation.

But yeah, I have no idea how you would determine the error. Perhaps it's not even via linearization as you say. I honestly have no idea how to begin. Perhaps with Taylor series?

HallsofIvy
Homework Helper
First you have to linearize about a given point. Choose some reasonable value of x and linearize the two functions about that x value. Find where the linearized equations intersect. Use that value as the point to linearize about again. You can be confident the solution is within a given tolerance once two consecutive solutions agree to within it. If you were to subtract the two equations and use this method to find where their difference is 0, you would be using "Newton's method".

Ahh, that makes sense. Thank you!

In case anyone is interested:

$0 = f'(x_0)(x - x_0) + f(x_0) - (g'(x_0)(x - x_0) + g(x_0))$

So that

$x_1 = \frac{f'(x_0)x_0 - f(x_0) - g'(x_0)x_0 + g(x_0)}{f'(x_0) - g'(x_0)}$

I then plugged that into Maple; WolframAlpha confirms it gives the correct intersections.
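For anyone who wants to try this outside a CAS: the iteration above can be sketched in a few lines of Python. This is only a sketch; the starting guesses come from eyeballing a plot of the two functions, and the stopping rule is the agreement of consecutive iterates suggested above.

```python
import math

def f(x):
    return math.tan(x**2) - 1

def g(x):
    return math.log((x + 1)**3) / 3      # simplifies to ln(x + 1)

def df(x):
    return 2 * x / math.cos(x**2)**2     # d/dx tan(x^2) = 2x sec^2(x^2)

def dg(x):
    return 1 / (x + 1)

def intersect(x0, rel_tol=0.001, max_iter=50):
    """Intersect the tangent lines of f and g at x0, then repeat from there."""
    for _ in range(max_iter):
        x1 = (df(x0) * x0 - f(x0) - dg(x0) * x0 + g(x0)) / (df(x0) - dg(x0))
        if abs(x1 - x0) <= rel_tol * abs(x1):   # consecutive iterates agree
            return x1
        x0 = x1
    raise RuntimeError("no convergence")
```

Starting from x0 = 1.0 this settles near the smallest positive intersection, and from x0 = -0.5 near the smallest negative one (both guesses read off a plot).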

Ray Vickson
Homework Helper
Dearly Missed
Why do you use Maple and then use Wolfram Alpha? Maple can do the whole job just fine.

You are solving the equation F(x) = 0, where F(x) = f(x) - g(x), and are using Newton's Method
$$x_{n+1} = x_n - \frac{F(x_n)}{F'(x_n)},\: n=0,1,2,\ldots$$
You need to be careful: Newton's Method can sometimes diverge instead of converging to the root of F(x) = 0. However, if the derivative F'(r) ≠ 0 at the root r of F(x) = 0 (so that x = r is not a multiple root), and if we start with $x_0$ sufficiently near r, then we will get very rapid convergence to r.
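One standard safeguard against that divergence can be sketched as follows (not from the thread, just an illustration in Python): run Newton's iteration on F = f - g, but keep a sign-change bracket and fall back to bisection whenever the Newton step leaves it.

```python
import math

def F(x):
    return math.tan(x**2) - 1 - math.log((x + 1)**3) / 3   # f(x) - g(x)

def dF(x):
    return 2 * x / math.cos(x**2)**2 - 1 / (x + 1)

def safe_newton(a, b, tol=1e-10, max_iter=100):
    """Newton's method on F with a bisection fallback; [a, b] must bracket a root."""
    fa = F(a)
    assert fa * F(b) < 0, "interval does not bracket a sign change"
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = F(x)
        if fa * fx < 0:                  # root is in [a, x]
            b = x
        else:                            # root is in [x, b]
            a, fa = x, fx
        x_new = x - fx / dF(x)
        if not (a < x_new < b):          # Newton stepped outside: bisect instead
            x_new = 0.5 * (a + b)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Started from the bracket [1, 1.05], chosen from a plot where F changes sign, this converges to the smallest positive intersection even if a carelessly seeded plain Newton iteration could wander off toward the pole of tan(x²).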

RGV

To check whether it was correct. Maple did indeed do it just fine, but I wanted to make sure :) Thank you everyone for your comments.

Ray Vickson
Homework Helper
Dearly Missed
What I really mean is: why would you trust Wolfram Alpha but not trust Maple?

RGV

Bacle2
I'm betting Ray Vickson is Canadian :) . Any takers?

Ray Vickson
Homework Helper
Dearly Missed
Well, you're right, and before retiring I spent > 30 years at the University of Waterloo, where Maple was born. However: what does that have to do with the question I asked?

RGV

Because I wrote the Maple code to find the solution, whereas with WolframAlpha I can just input $f(x) = g(x)$ and it'll generate the solutions for me. I trust Maple to execute my code properly, but I don't assume my own code is correct - and I do assume Wolfram's is :)

Ray Vickson
Homework Helper
Dearly Missed
When I try Wolfram Alpha it does not generate code for me; it just gives a list of numerical solutions. When I do it in Maple (using 'fsolve' instead of 'solve') it gives me solutions one at a time; I can get a list of solutions either by using the 'avoid' option, or by giving a list of input intervals. For example:

```
S0 := fsolve(eq, x);
        S0 := 0.
S1 := fsolve(eq, x, avoid = {x = S0});
        S1 := 0.6977070973
S2 := fsolve(eq, x, avoid = {x = S0, x = S1});
        S2 := 3.223162525
```

This has missed the solution at 1.993, but if we narrow the search intervals it will work.

```
S1 := fsolve(eq, x = S0 + .01 .. S0 + 1);
        S1 := 0.6977070973
S2 := fsolve(eq, x = S1 + .01 .. S1 + 1);
        # no solution, so increase the interval
S2 := fsolve(eq, x = S1 + .01 .. S1 + 2);
        S2 := 1.993218238
S3 := fsolve(eq, x = S2 + .01 .. S2 + 1);
        S3 := 2.683224142
```

etc.
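The same interval-scanning idea can be sketched outside Maple as well (a Python sketch, not Ray's code, using the thread's original f and g): walk a grid looking for sign changes of F = f - g, bisect each one, and discard the spurious sign flips at the poles of tan(x²), where F jumps across zero without ever being small.

```python
import math

def F(x):
    return math.tan(x**2) - 1 - math.log((x + 1)**3) / 3   # f(x) - g(x)

def scan_roots(a, b, n=2000):
    """Walk [a, b] on a fine grid and bisect every sign change of F."""
    roots = []
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    for lo, hi in zip(xs, xs[1:]):
        if F(lo) * F(hi) < 0:
            for _ in range(60):          # plain bisection on the subinterval
                mid = 0.5 * (lo + hi)
                if F(lo) * F(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            r = 0.5 * (lo + hi)
            if abs(F(r)) < 1e-6:         # reject jumps across poles of tan
                roots.append(r)
    return roots
```

On (-0.9, 2.5), for instance, this returns the smallest negative and positive intersections plus one more near x = 2.07, while the two pole crossings of tan(x²) in that range are filtered out.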

In fact, before setting out on any halfway complicated equation-solving task it is a good idea to first plot the functions involved. This will easily allow you to see what is going on in this example.

We could, of course, get more accuracy by increasing the 'Digits' count:
```
Digits := 20;
fsolve(eq, x = 0.001 .. 1);
        0.69770709730287239774
```

or
```
Digits := 40;
fsolve(eq, x = 0.01 .. 1);
        0.6977070973028723977363487530371501244775
```

Admittedly, Wolfram Alpha (in *this* example) is a bit more convenient, because it gives a list of solutions right away, while in Maple we have to work at it a bit.

BTW: both Wolfram Alpha and Maple likely use a combination of methods (Newton's Method, Regula Falsi, the Secant Method, or several others), applied adaptively by tailoring the method to perceived problem behaviour. However, the default is usually Newton's Method.
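For illustration, the secant method mentioned above is easy to sketch by hand (Python again, with F = f - g from the thread and a starting pair near the smallest positive intersection read off a plot): it is essentially Newton's Method with the derivative replaced by the slope through the last two iterates, so no F' is needed.

```python
import math

def F(x):
    return math.tan(x**2) - 1 - math.log((x + 1)**3) / 3   # f(x) - g(x)

def secant(x0, x1, tol=1e-10, max_iter=100):
    """Secant method: Newton with the slope estimated from two iterates."""
    f0, f1 = F(x0), F(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # root of the secant line
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, F(x2)
    return x1
```

From the pair (1.0, 1.05) this converges to the smallest positive intersection in a handful of iterations, at the cost of superlinear rather than quadratic convergence.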

RGV