Zeros of f(x) vs. Zeros of f'(x)

  • Thread starter drewfstr314
In summary, the zeros of a function and the zeros of its derivative are related, but neither set determines the other. The relationship can be explored further by considering different types of functions, such as polynomials and non-polynomials. For polynomials with all roots of multiplicity one, the derivative has no zeros in common with the original function, and there are results for bounding the zeros of the derivative if the zeros of the original function are known. Overall, there is no definitive way to find the zeros of a function if the zeros of its derivative are known (or vice versa).
  • #1
drewfstr314
Is there any similarity between the zeros of a function and the zeros of its derivative? That is, if

A = set of all x such that f(x) = 0
B = set of all x such that f'(x) = 0

then is there any pattern to finding A if B is known (or vice versa)?

Thanks!
 
  • #2
I don't know a definitive answer, but here are some thoughts: the connection is strong for functions of the form ##f(x) = (x-a)^n##, but there is no necessary dependence for all functions. Look at, e.g., ##f(x) = \sin x##, which is 0 at ##k\pi##, while ##f'(x) = \cos x## is 0 at ##(2k+1)\pi/2##. If ##f(x) = f'(x) = 0##, then ##f## has both a zero and a critical point at ##x##. But shift your function by a constant and the relationship no longer exists: the critical point stays where it was, while the zero has moved or disappeared.
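One way to make that shift argument explicit: if ##f(a) = f'(a) = 0## and ##g(x) = f(x) + c## with ##c \neq 0##, then ##g'(a) = f'(a) = 0## but ##g(a) = c \neq 0##, so ##g## keeps the critical point at ##a## but no longer has a zero there.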
 
  • #3
Hey drewfstr314 and welcome to the forums.

IMO there are two important classes to think about: the first is where you have a true polynomial and the second is when you don't.

Examples of non-polynomials include anything with a genuine (infinite) power series having all non-zero coefficients, or some kind of shifted power series such as a Taylor expansion about a non-zero center.

In the finite case you can use the term-by-term rule for the antiderivative of a*x^n together with the fact that the original equation can be written as (x - c)(x - d)...(x - last_root) = 0, after which you relate the original expression to the new expression (i.e. the antiderivative) in any way you can.

In the infinite case you can't quite do the same thing, but one suggestion is to look at the exponential function, use a composition of functions, and see how the derivatives change.

For the finite case, you might want to try the following: suppose you have a factored expression for the roots of the original function (in other words, (x - a)(x - b)...(x - n) = 0 for the function itself, not its derivative). You can then use the product rule to differentiate this expression and obtain a condition, written in terms of those roots, for where the derivative equals 0, as sketched below.
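A minimal sketch of that product-rule step, for a cubic with known roots ##a, b, c##:

[tex]F(x) = (x-a)(x-b)(x-c) \;\Longrightarrow\; F'(x) = (x-b)(x-c) + (x-a)(x-c) + (x-a)(x-b),[/tex]

so setting ##F'(x) = 0## gives an equation for the critical points written entirely in terms of the roots ##a, b, c##.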

You also have the reverse situation, where you start from the roots of the derivative (i.e. (x - a)(x - b)...(x - n) = 0, but with one fewer factor and different coefficients).

So basically the point I'm making is that you can generate quite a few identities for both equations and then equate the right ones together to get an expression for one set of roots in terms of the other.

It's not a direct answer, but the idea is to use as many independent kinds of data as you can and then bring them all together: the more independent pieces of information you have, the more ways you have to analyze the problem and the more you can do with it.

Having at least two interchangeable descriptions of the same thing is what you need, and in some cases this is referred to as a duality. If those dualities have dualities of their own, you get more and more independent expressions of the same object, which gives a systematic way to analyze it, since you are generating many independent ways of describing the same thing.
 
  • #4
If f is a polynomial with all roots of multiplicity one, then the derivative does not have a zero in common with f. To see this, express f in factored form and apply the product rule for derivatives.
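A short sketch of that argument: if ##f(x) = c\,(x - r_1)(x - r_2)\cdots(x - r_n)## with the ##r_i## distinct, then the only product-rule term that survives at ##x = r_1## is the one in which ##(x - r_1)## was differentiated, so

[tex]f'(r_1) = c\,(r_1 - r_2)(r_1 - r_3)\cdots(r_1 - r_n) \neq 0,[/tex]

and likewise at every other root.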
 
  • #5
One can show that if f is a differentiable function and f(a) = f(b) = 0, there must be at least one zero of f' between a and b (this is Rolle's theorem).
 
  • #6
I guess what I was looking for was something like this:

[itex]f(x) = x^3 - 4x + 2 \Rightarrow f'(x) = 3x^2 - 4[/itex]

and the solutions of f'(x)=0 are [itex]x = \pm \frac{2\sqrt3}{3}[/itex]

Based on this, is there any way to find the zeros of f(x)?

Thanks!
 
  • #7
drewfstr314 said:
I guess what I was looking for was something like this:

[itex]f(x) = x^3 - 4x + 2 \Rightarrow f'(x) = 3x^2 - 4[/itex]

and the solutions of f'(x)=0 are [itex]x = \pm \frac{2\sqrt3}{3}[/itex]

Based on this, is there any way to find the zeros of f(x)?

Thanks!

I don't think you can use this to find the zeros of f(x). The graph shows that f has 3 zeros. So all you can say (in this case) is that there are three zeros a, b and c, and these must satisfy [itex]a\leq - \frac{2\sqrt{3}}{3}\leq b\leq \frac{2\sqrt{3}}{3}\leq c[/itex].

I don't think you can do much more.
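For what it's worth, the three real zeros can also be confirmed without the graph: since

[tex]f\!\left(-\tfrac{2\sqrt{3}}{3}\right) = 2 + \tfrac{16\sqrt{3}}{9} > 0 \quad\text{and}\quad f\!\left(\tfrac{2\sqrt{3}}{3}\right) = 2 - \tfrac{16\sqrt{3}}{9} < 0,[/tex]

and ##f(x) \to \pm\infty## as ##x \to \pm\infty##, the intermediate value theorem puts exactly one root in each of the three intervals cut out by the critical points.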
 
  • #8
drewfstr314 said:
I guess what I was looking for was something like this:

[itex]f(x) = x^3 - 4x + 2 \Rightarrow f'(x) = 3x^2 - 4[/itex]

and the solutions of f'(x)=0 are [itex]x = \pm \frac{2\sqrt3}{3}[/itex]

Based on this, is there any way to find the zeros of f(x)?

Thanks!

Short answer: no, for the simple reason that ##g(x) = x^3 - 4x + 102## has the same derivative as ##f##, but they don't have the same roots.
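A quick numerical sanity check of that counterexample, sketched in Python with numpy.roots:

Code:
import numpy as np

# f(x) = x^3 - 4x + 2 and g(x) = x^3 - 4x + 102 differ only by a constant,
# so they share the derivative f'(x) = 3x^2 - 4 but not their roots.
f = [1, 0, -4, 2]
g = [1, 0, -4, 102]
fp = [3, 0, -4]

print(np.roots(f))    # three real roots, roughly -2.21, 0.54, 1.68 (in some order)
print(np.roots(g))    # one real root near -4.96 plus a complex-conjugate pair
print(np.roots(fp))   # +/- 2/sqrt(3), the shared critical points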
 
  • #9
If ##f## is a polynomial, there are results for bounding the zeros of ##f'## knowing the zeros of ##f##. The Grace-Heawood theorem says that if ##z_1## and ##z_2## are distinct zeros of ##f## (of degree ##n##), then ##f'## has a zero in the disk with center ##\frac12(z_1+z_2)## and radius ##\frac12|z_1-z_2|\cot(\pi/n)##. I also remember a theorem of Gauss saying that the zeros of ##f'## are precisely the multiple zeros of ##f## together with the equilibrium points of a gravity-like field generated by the zeros of ##f##, each carrying a mass proportional to its multiplicity. The intensity of this gravity-like field is inversely proportional to the distance, rather than to its square.
If the ordinary gravitational field falls off as the square of the distance because space is three-dimensional, I think the force law of this field is explained by the two-dimensionality of the complex plane.
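As a spot-check rather than a proof, here is a small Python sketch verifying the Grace-Heawood disk for every pair of zeros of the cubic from this thread:

Code:
import numpy as np

coeffs = [1, 0, -4, 2]                      # f(x) = x^3 - 4x + 2, degree n = 3
n = len(coeffs) - 1
dcoeffs = [c * (n - k) for k, c in enumerate(coeffs[:-1])]   # f'(x) = 3x^2 - 4

f_roots = np.roots(coeffs)
fp_roots = np.roots(dcoeffs)

# For each pair of zeros of f, check that some zero of f' lies in the disk
# with center (z1 + z2)/2 and radius |z1 - z2| cot(pi/n) / 2.
for i in range(len(f_roots)):
    for j in range(i + 1, len(f_roots)):
        z1, z2 = f_roots[i], f_roots[j]
        center = (z1 + z2) / 2
        radius = 0.5 * abs(z1 - z2) / np.tan(np.pi / n)
        ok = any(abs(w - center) <= radius + 1e-9 for w in fp_roots)
        print(z1, z2, ok)                   # ok is True for every pair here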
 

What are zeros of f(x)?

Zeros of f(x) are the values of x where the function f(x) intersects the x-axis. In other words, they are the values of x that make the function equal to zero.

What are zeros of f'(x)?

Zeros of f'(x) are the values of x where the derivative of the function f(x) equals zero. In other words, they are the values of x where the slope of the function is equal to zero.

What is the relationship between zeros of f(x) and zeros of f'(x)?

In general they are different points. The zeros of f(x) are the x-intercepts of the graph, while the zeros of f'(x) are the critical points, where the tangent line is horizontal. The two sets are linked rather than identical: between any two zeros of a differentiable function there is at least one zero of its derivative (Rolle's theorem), so the critical points interlace the roots, but knowing one set does not determine the other.

Why are zeros of f'(x) important?

Zeros of f'(x) are important because they mark the critical points of the function: places where the tangent line is horizontal and the function may have a local maximum, a local minimum, or neither. Locating them helps us understand the overall shape and behavior of the function, for example where it switches from increasing to decreasing.

How can we use the zeros of f'(x) to find the zeros of f(x)?

In general we cannot. Two functions that differ by a constant, such as x^3 - 4x + 2 and x^3 - 4x + 102, have exactly the same zeros of f'(x) but different zeros of f(x). What the zeros of f'(x) do provide is a set of brackets: evaluating f at consecutive critical points and checking the signs tells us how many real zeros there are and gives an interval containing each one, which can then be refined numerically.
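As a sketch of that bracketing idea on the cubic from the thread (plain bisection, no libraries; the helper names are just for illustration):

Code:
import math

def f(x):
    # the cubic from the thread
    return x**3 - 4*x + 2

# Zeros of f'(x) = 3x^2 - 4 are x = +/- 2/sqrt(3); together with a Cauchy
# bound on the roots (1 + largest absolute non-leading coefficient = 5)
# they bracket every real zero of f.
c = 2 / math.sqrt(3)
bound = 1 + max(abs(-4), abs(2))
brackets = [(-bound, -c), (-c, c), (c, bound)]

def bisect(a, b, tol=1e-10):
    # plain bisection; assumes f(a) and f(b) have opposite signs
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

roots = [bisect(a, b) for a, b in brackets if f(a) * f(b) < 0]
print(roots)    # approximately [-2.2143, 0.5392, 1.6751]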
