Basic question: First derivative test to detect whether a function is decreasing

seeker101
If the first derivative of a function f from R to R is negative on [a,b], it IS right to say that the function is decreasing on [a,b] right?

Are there any other ways of showing that the function is decreasing on [a,b]?
 
Hi seeker101! :smile:
seeker101 said:
If the first derivative of a function f from R to R is negative on [a,b], it IS right to say that the function is decreasing on [a,b] right?

Yes.
Are there any other ways of showing that the function is decreasing on [a,b]?

No … for a differentiable R->R function, a negative derivative on the interval is exactly the standard way to guarantee decreasing — though be careful, the converse can fail: ##f(x) = -x^3## is decreasing on any interval containing 0 even though ##f'(0) = 0##. :wink:
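The thread doesn't spell out why a negative derivative forces the function to decrease; a sketch of the standard argument via the Mean Value Theorem:

```latex
% Claim: if f'(x) < 0 for all x in [a,b], then f is strictly
% decreasing on [a,b].
% Take any x_1 < x_2 in [a,b]. By the Mean Value Theorem there is
% some c in (x_1, x_2) with
%   f(x_2) - f(x_1) = f'(c)(x_2 - x_1).
% Since f'(c) < 0 and x_2 - x_1 > 0, the right-hand side is
% negative, so f(x_2) < f(x_1).
\forall\, x_1 < x_2 \in [a,b]:\quad
f(x_2) - f(x_1) = f'(c)\,(x_2 - x_1) < 0
\quad\text{for some } c \in (x_1, x_2).
```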
 
That's a correct criterion (not quite the definition of decreasing). But what the OP may be asking is whether there are other ways of arguing that a function is decreasing over an interval.
 
Of course there are other ways to show a function is increasing (the same argument applies to decreasing) on an interval. For example, use the definition of increasing directly: for x, y in [a,b], show that y > x implies f(y) > f(x) (or f(y) ≥ f(x) if you only need non-decreasing).

Example: Show that f(x) = x^2 is increasing on [0,2].
Let x and y belong to [0,2] with y > x. Then we can write y = x + e for some e > 0. Then f(y) = (x+e)^2 = x^2 + 2xe + e^2 > x^2 = f(x), since e^2 > 0 and 2xe >= 0 (as x >= 0 on this interval).
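As an illustration only (a quick numerical sanity check, not a proof — the helper name here is my own), you can sample the function and confirm the samples are strictly increasing:

```python
# Numerical sanity check (not a proof): sample f(x) = x^2 on [0, 2]
# at evenly spaced points and verify the values strictly increase.

def is_increasing_on_samples(f, a, b, n=1000):
    """Check f(x_i) < f(x_{i+1}) on n+1 evenly spaced points in [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    return all(y1 < y2 for y1, y2 in zip(ys, ys[1:]))

print(is_increasing_on_samples(lambda x: x * x, 0.0, 2.0))  # True
```

This can't replace the algebraic argument above (sampling can miss bad points), but it's a useful way to catch a false conjecture quickly.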
 
How about: a function ##f## is decreasing on ##[a,b]## if ##(\forall x_1,x_2\in [a,b])\ x_1<x_2 \implies f(x_1) \ge f(x_2)## (with ##>## in place of ##\ge## if you mean strictly decreasing)?
 