A question regarding the ratio test for limits

AI Thread Summary
The discussion centers on the ratio test theorem, which states that if a sequence \( a_n > 0 \) and \( \lim_{n\to \infty} a_{n+1}/a_n = L \), then \( \lim_{n\to \infty} a_n^{1/n} = L \). A key point raised is the confusion regarding the choice of \( \epsilon \) in the proof, particularly when \( \epsilon > L \), leading to potential issues with negative values when taking roots. It is clarified that the definition of a limit requires consideration of all positive \( \epsilon \), but typically, \( \epsilon \) is chosen to be less than \( L \) for practical purposes. The conversation emphasizes the importance of understanding that limits can still hold true even when larger \( \epsilon \) values are considered, as they do not affect the convergence of the sequence. The discussion concludes with a suggestion to revisit foundational concepts in limits for clarity.
MathematicalPhysicist
So we have the theorem:
if ##a_n>0## and ##\lim_{n\to \infty} a_{n+1}/a_n = L## then ##\lim_{n\to \infty} a_n^{1/n}=L##.

Now, in the proof that I had seen for ##L\ne 0##, we choose ##\epsilon<L##.

But what about the case ##\epsilon>L##? In that case we have:
##a_{n+1}>(L-\epsilon)a_n##, but the RHS is negative, so I cannot take the n-th root without running into the problem of taking an n-th root of a negative number, which is not defined for even ##n## on the real line.

I read this solution in Albert Blank's solutions to Fritz John and Richard Courant's textbook.
 
MathematicalPhysicist said:
So we have the theorem:
if ##a_n>0## and ##\lim_{n\to \infty} a_{n+1}/a_n = L## then ##\lim_{n\to \infty} a_n^{1/n}=L##.

Now, in the proof that I had seen for ##L\ne 0##, we choose ##\epsilon<L##.

But what about the case ##\epsilon>L##? In that case we have:
##a_{n+1}>(L-\epsilon)a_n##, but the RHS is negative, so I cannot take the n-th root without running into the problem of taking an n-th root of a negative number, which is not defined for even ##n## on the real line.

I read this solution in Albert Blank's solutions to Fritz John and Richard Courant's textbook.

If ##L>0##, you can always choose ##\epsilon>0## such that ##0<\epsilon <L##. What exactly is the problem here?
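
For context, here is a minimal sketch of the standard argument when ##0<\epsilon<L## is chosen; the details may differ from Blank's write-up.

Fix ##\epsilon\in(0,L)## and let ##N## be such that ##0<L-\epsilon<\frac{a_{k+1}}{a_k}<L+\epsilon## for all ##k>N##. Multiplying these inequalities for ##k=N+1,\dots,n-1## gives, for ##n>N+1##,
$$c\,(L-\epsilon)^{n} < a_n < C\,(L+\epsilon)^{n},\qquad c=\frac{a_{N+1}}{(L-\epsilon)^{N+1}},\quad C=\frac{a_{N+1}}{(L+\epsilon)^{N+1}}.$$
Taking ##n##-th roots is legitimate because every term is positive, which is exactly where ##L-\epsilon>0## is needed. Since ##c^{1/n}\to 1## and ##C^{1/n}\to 1##, this yields
$$L-\epsilon\le\liminf_{n\to\infty}a_n^{1/n}\le\limsup_{n\to\infty}a_n^{1/n}\le L+\epsilon.$$
As ##\epsilon\in(0,L)## was arbitrary, ##\lim_{n\to\infty}a_n^{1/n}=L##. Only values ##\epsilon<L## are ever needed in this step; the limit statement for larger ##\epsilon## is then implied by any smaller one.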
 
@Math_QED the definition of a limit is ##\lim_{n\to \infty}b_n=L \Leftrightarrow \forall \epsilon >0\ \exists N(\epsilon) \in \mathbb{N}\ (n>N(\epsilon) \rightarrow |b_n-L|<\epsilon)##.

Then by the definition of the limit I need to show that the statement also holds for ##\epsilon>L##, since the definition requires that ##n>N(\epsilon)\rightarrow |b_n-L|<\epsilon## be true for every positive ##\epsilon##, not only those less than ##L##.
And I don't see why this follows here.

Perhaps I am confused.
 
MathematicalPhysicist said:
@Math_QED the definition of a limit is ##\lim_{n\to \infty}b_n=L \Leftrightarrow \forall \epsilon >0\ \exists N(\epsilon) \in \mathbb{N}\ (n>N(\epsilon) \rightarrow |b_n-L|<\epsilon)##.

Then by the definition of the limit I need to show that the statement also holds for ##\epsilon>L##, since the definition requires that ##n>N(\epsilon)\rightarrow |b_n-L|<\epsilon## be true for every positive ##\epsilon##, not only those less than ##L##.
And I don't see why this follows here.

Perhaps I am confused.
Maybe so. The definition says, in part, "for any positive ##\epsilon##", but you want to show that for reasonably large n, ##b_n## and L are only a small distance apart. The concept here is that no matter how close together someone else requires these two numbers to be, you can find a number n that forces ##b_n## and L to be that close. There is no reason for someone to choose ##\epsilon## to be large; i.e., larger than L.

Here's an example. Let ##b_n = \frac 1 2, \frac 2 3, \frac 3 4, \dots, \frac n {n + 1}, \dots##. The limit of this sequence clearly is 1. If someone else chooses ##\epsilon = 2##, how far along in the sequence do you need to go so that ##|b_n - 1| < 2##? If they want to make you work, they will choose a much smaller value for ##\epsilon##.
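
To make the example concrete, here is a small worked check using the sequence above:
$$|b_n-1|=\left|\frac{n}{n+1}-1\right|=\frac{1}{n+1}.$$
For ##\epsilon=2## the inequality ##\frac{1}{n+1}<2## already holds for every ##n\ge 1##, so ##N(2)=1## works with no effort. For ##\epsilon=0.01## you need ##n+1>100##, i.e. ##N(0.01)=99##. Large choices of ##\epsilon## cost nothing; only the small ones determine how far out in the sequence you must go.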
 
Mark44 said:
Maybe so. The definition says, in part, "for any positive ##\epsilon##", but you want to show that for reasonably large n, ##b_n## and L are only a small distance apart. The concept here is that no matter how close together someone else requires these two numbers to be, you can find a number n that forces ##b_n## and L to be that close. There is no reason for someone to choose ##\epsilon## to be large; i.e., larger than L.

Here's an example. Let ##b_n = \frac 1 2, \frac 2 3, \frac 3 4, \dots, \frac n {n + 1}, \dots##. The limit of this sequence clearly is 1. If someone else chooses ##\epsilon = 2##, how far along in the sequence do you need to go so that ##|b_n - 1| < 2##? If they want to make you work, they will choose a much smaller value for ##\epsilon##.
I need to relearn stuff that I have forgotten.
 
MathematicalPhysicist said:
@Math_QED the definition of a limit is ##\lim_{n\to \infty}b_n=L \Leftrightarrow \forall \epsilon >0\ \exists N(\epsilon) \in \mathbb{N}\ (n>N(\epsilon) \rightarrow |b_n-L|<\epsilon)##.

Then by the definition of the limit I need to show that the statement also holds for ##\epsilon>L##, since the definition requires that ##n>N(\epsilon)\rightarrow |b_n-L|<\epsilon## be true for every positive ##\epsilon##, not only those less than ##L##.
And I don't see why this follows here.

Perhaps I am confused.

Show that this definition is equivalent to the definition:

##\forall \epsilon \in (0,k): \exists N: \dots##

where ##k>0## is some fixed constant.
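
A possible sketch of the non-trivial direction, assuming the restricted statement holds for every ##\epsilon\in(0,k)##: given an arbitrary ##\epsilon>0##, set ##\epsilon'=\min\{\epsilon,\,k/2\}\in(0,k)## and take the ##N(\epsilon')## provided by the restricted definition. Then for all ##n>N(\epsilon')##,
$$|b_n-L|<\epsilon'\le\epsilon,$$
so the same ##N## also works for ##\epsilon##. Hence the restricted definition implies the full one; the converse is immediate because ##(0,k)\subset(0,\infty)##. This is why, in the ratio-test proof, it is enough to verify the limit for ##\epsilon<L##.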
 