A question about limits and infinity

Summary
The discussion centers on limits and infinity in calculus, particularly for functions like f(x) = 1/x and f(x) = 1/[sqrt(9 + x) - 3]. As x approaches 0, f(x) approaches positive or negative infinity depending on the direction of approach. A rigorous proof shows that for any large number M there is a δ > 0 such that f(x) exceeds M for all x within δ of zero; finding a single such x is not enough. The conversation also touches on precision errors in graphing functions and the limits of numerical calculation, and concludes with the clarification that the epsilon-delta definition does not apply to infinite limits in the same way it does to finite ones.
0kelvin
I hope I can make this question clear enough.

When we have a function such as f(x) = 1/x and calculate the one-sided limits at x = 0, the right side goes to positive infinity and the left side goes to negative infinity. In calculus we are plugging in values closer and closer to zero and seeing what the value of f(x) is. For example: 1/10, then 1/100, then 1/500, and so on. Is there a more rigorous way to prove that the function is in fact going to infinity?

Another example: f(x) = 1/[sqrt(9 + x) - 3]. If x = 0 we have division by zero. Now if I plug in something small such as 10^(-10), it's not zero, but we are going beyond the precision of a hand calculator. If I plot this graph and zoom in enough, at some point Google warns that the graph may be wrong due to precision errors. Is there some theory behind such precision errors that may lead us to think the graph is increasing or decreasing when it's not?
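To make the precision question concrete, here is a minimal Python sketch (an illustration, not part of the original post). For tiny x, sqrt(9 + x) is barely above 3, so the subtraction cancels almost all significant digits ("catastrophic cancellation"); the algebraically equivalent rationalized form (sqrt(9 + x) + 3)/x avoids this.

```python
import math

x = 1e-10

# Naive evaluation: sqrt(9 + x) is extremely close to 3, so the
# subtraction destroys most of the significant digits.
naive = 1.0 / (math.sqrt(9.0 + x) - 3.0)

# Rationalized form: 1/(sqrt(9+x) - 3) = (sqrt(9+x) + 3)/x,
# which avoids subtracting two nearly equal numbers.
stable = (math.sqrt(9.0 + x) + 3.0) / x

# For x = 1e-10 the true value is about 6e10; the stable form matches it
# to near machine precision, while the naive form drifts in later digits.
print(naive, stable)
```

This is the kind of error a graphing tool runs into when you zoom far enough: the function itself is fine, but the formula used to evaluate it loses digits.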
 
0kelvin said:
I hope I can make this question clear enough.

When we have a function such as f(x) = 1/x and calculate the one-sided limits at x = 0, the right side goes to positive infinity and the left side goes to negative infinity. In calculus we are plugging in values closer and closer to zero and seeing what the value of f(x) is. For example: 1/10, then 1/100, then 1/500, and so on. Is there a more rigorous way to prove that the function is in fact going to infinity?
You show that given any number ##M>0## there is a value ##x>0## such that ##f(x)>M##. Since ##M## was arbitrarily large, the function ##f(x)## grows beyond all bounds. The negative version is analogous.
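For ##f(x) = 1/x## specifically, a witness is easy to write down: given ##M##, the point ##x = 1/(2M)## satisfies ##f(x) = 2M > M##. A quick Python check (the helper name `witness` is just for illustration):

```python
def witness(M):
    """Return a point x > 0 with 1/x > M, namely x = 1/(2M)."""
    x = 1.0 / (2.0 * M)
    return x, 1.0 / x

# No matter how large M is, the witness point always beats it:
for M in (10.0, 1e3, 1e6):
    x, fx = witness(M)
    assert 0 < x and fx > M  # f(x) = 2M, which exceeds M
```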

0kelvin said:
Another example: f(x) = 1/[sqrt(9 + x) - 3]. If x = 0 we have division by zero. Now if I plug in something small such as 10^(-10), it's not zero, but we are going beyond the precision of a hand calculator. If I plot this graph and zoom in enough, at some point Google warns that the graph may be wrong due to precision errors. Is there some theory behind such precision errors that may lead us to think the graph is increasing or decreasing when it's not?
I don't understand this. Do you mean error calculations? This case isn't any different from the previous one.
 
0kelvin said:
Another example: f(x) = 1/[sqrt(9 + x) - 3]. If x = 0 we have division by zero. Now if I plug in something small such as 10^(-10), it's not zero, but we are going beyond the precision of a hand calculator. If I plot this graph and zoom in enough, at some point Google warns that the graph may be wrong due to precision errors. Is there some theory behind such precision errors that may lead us to think the graph is increasing or decreasing when it's not?
Plugging in numbers might not be very helpful, but sketching a graph of ##f(x) = \frac 1 {\sqrt{x + 9} - 3}## would be very helpful.

First, sketch the graph of ##y = \sqrt{x + 9} - 3##. The left endpoint of this graph is at (-9, -3), and the graph passes through the origin. It is strictly increasing on its domain.

The reciprocal function, ##f(x) = \frac 1 {\sqrt{x + 9} - 3}##, will have a vertical asymptote at x = 0. Since the denominator is negative for x < 0, the graph of f goes off to negative infinity there. Since the denominator is positive for x > 0, the graph of f goes off to positive infinity.
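The sign analysis above can be checked numerically; a small Python sketch:

```python
import math

def denom(x):
    """Denominator sqrt(x + 9) - 3, defined for x >= -9."""
    return math.sqrt(x + 9.0) - 3.0

# Negative to the left of 0, positive to the right:
assert denom(-0.5) < 0 < denom(0.5)

# So f = 1/denom blows up with opposite signs on the two sides:
print(1.0 / denom(-1e-6))  # large negative
print(1.0 / denom(1e-6))   # large positive
```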
 
fresh_42 said:
You show that given any number ##M>0## there is a value ##x>0## such that ##f(x)>M##. Since ##M## was arbitrarily large, the function ##f(x)## grows beyond all bounds. The negative version is analogous.


I don't understand this. Do you mean error calculations? This case isn't any different from the previous one.
You need to use a delta argument. Basically, take ##x = 0 + \delta## for a small positive ##\delta##, and as you shrink ##\delta## toward 0 you need to show that ##1/x## is always larger than a large constant ##M##. For the negative side, subtract ##\delta## instead.

##0 < x < \delta \implies 1/x > 1/\delta##

Setting ##1/\delta = M## gives ##1/x > M##.

You have to show that for any ##M## such a ##\delta \le 1/M## exists.
 
Trollfaz said:
You need to use a delta argument. Basically, take ##x = 0 + \delta## for a small positive ##\delta##, and as you shrink ##\delta## toward 0 you need to show that ##1/x## is always larger than a large constant ##M##. For the negative side, subtract ##\delta## instead.

##0 < x < \delta \implies 1/x > 1/\delta##

Setting ##1/\delta = M## gives ##1/x > M##.

You have to show that for any ##M## such a ##\delta \le 1/M## exists.
You mean that finding one value of ##x## is not enough? You need to show that the function is sufficiently large for all ##0 < x < \delta##, for some ##\delta## that depends on ##M##?
 
Trollfaz said:
You need to use a delta argument. Basically, take ##x = 0 + \delta## for a small positive ##\delta##, and as you shrink ##\delta## toward 0 you need to show that ##1/x## is always larger than a large constant ##M##. For the negative side, subtract ##\delta## instead.

##0 < x < \delta \implies 1/x > 1/\delta##

Setting ##1/\delta = M## gives ##1/x > M##.

You have to show that for any ##M## such a ##\delta \le 1/M## exists.
The ##\varepsilon -\delta## wording doesn't apply to infinity. The definition of ##\longrightarrow \pm \infty ## is different from ##\longrightarrow L## since infinity isn't a number.
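For reference, the standard definition of a one-sided infinite limit is an "M–δ" statement rather than an ε–δ one:

##\lim_{x \to 0^+} f(x) = +\infty## means: for every ##M > 0## there exists a ##\delta > 0## such that ##0 < x < \delta \implies f(x) > M##.

For ##f(x) = 1/x##, taking ##\delta = 1/M## works, since ##0 < x < 1/M## gives ##1/x > M##.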
 
fresh_42 said:
The ##\varepsilon -\delta## wording doesn't apply to infinity. The definition of ##\longrightarrow \pm \infty ## is different from ##\longrightarrow L## since infinity isn't a number.
That may be so, but it doesn't change the fact that this is not correct:

fresh_42 said:
You show that given any number ##M>0## there is a value ##x>0## such that ##f(x)>M##. Since ##M## was arbitrarily large, the function ##f(x)## grows beyond all bounds. The negative version is analogous.
Finding one value of ##x > 0## is not enough. By that definition the function ##e^x## would be unbounded as ##x \rightarrow 0##.
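This counterexample is easy to check numerically: ##e^x## passes the loose "there exists some ##x>0##" test for every ##M## (take ##x = \ln M + 1##), yet it stays near 1 close to 0, so it certainly does not blow up there. A small Python sketch:

```python
import math

# The loose criterion "for any M there is some x > 0 with f(x) > M"
# is satisfied by exp:
for M in (10.0, 1e3, 1e6):
    x = math.log(M) + 1.0
    assert math.exp(x) > M

# ...yet exp is perfectly tame near 0, so it does not tend to
# infinity as x -> 0; the x must be pinned to a neighborhood of 0.
assert abs(math.exp(1e-9) - 1.0) < 1e-6
```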
 
PeroK said:
That may be so, but it doesn't change the fact that this is not correct:

Indeed! That was more than just sloppy of me. I totally forgot the neighborhood.