Quick question about a telescoping series

AI Thread Summary
The discussion centers on the telescoping series \(\sum_{j=k}^{\infty}\left\{\frac{1}{b_{j}}-\frac{1}{b_{j+1}}\right\}=\frac{1}{b_{k}}\). The partial sum telescopes to \(S_n=\frac{1}{b_k}-\frac{1}{b_{k+(n+1)}}\), so the series equals \(\frac{1}{b_k}\) precisely when \(\frac{1}{b_{k+(n+1)}}\) approaches zero as \(n\) approaches infinity. It was clarified that \(b_j\) does not need to be a specific sequence like \(\{1, 2, 3, \dots\}\): the original poster concluded that any strictly increasing \(b_j\) works, and a reply pointed out that the telescoping evaluation of the partial sums requires no restriction to monotone or real sequences, since every partial sum reduces to \(a_k - a_{n+1}\) with \(a_i = \frac{1}{b_i}\).
Townsend
$$\sum_{j=k}^{\infty}\left\{\frac{1}{b_{j}}-\frac{1}{b_{j+1}}\right\}=\frac{1}{b_{k}}$$

This identity holds for all real monotone sequences \(b_j\).

So if I carry this series out to \(j = k+n\), I end up with a partial sum that looks like:

$$S_n=\frac{1}{b_k}-\frac{1}{b_{k+(n+1)}}$$

Now as \(n\) goes to infinity we are left with just \(\frac{1}{b_k}\). This of course implies that \(\frac{1}{b_{k+(n+1)}}\) goes to zero as \(n\) goes to infinity. So does this mean that the monotone sequence \(b_j\) must be \(\{1,2,3,4,5,\dots\}\)? If not, what exactly are the constraints on \(b_j\) that make the series an identity?

Thanks for the help everyone.

JTB
 
Never mind, I figured it out. As long as \(b_j\) is strictly increasing, the identity holds. The amount of jump between any two terms is irrelevant.

If there are any further comments, please feel free to make them; otherwise I will let this thread die peacefully.

JTB
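As a quick illustration of this point (the particular sequence \(b_j = 2^j\) is only an example chosen here, not one given in the thread): it is strictly increasing with large jumps, \(\frac{1}{b_{k+(n+1)}} = 2^{-(k+n+1)} \to 0\), and

$$\sum_{j=k}^{\infty}\left(\frac{1}{2^{j}}-\frac{1}{2^{j+1}}\right)=\sum_{j=k}^{\infty}\frac{1}{2^{j+1}}=\frac{1}{2^{k}}=\frac{1}{b_{k}},$$

matching the identity no matter how large the gaps between consecutive terms are.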
 
Townsend said:
$$\sum_{j=k}^{\infty}\left\{\frac{1}{b_{j}}-\frac{1}{b_{j+1}}\right\}=\frac{1}{b_{k}}$$

This identity holds for all real monotone sequences \(b_j\).

Actually, it's a bit cleaner to talk about it in terms of \(a_i = \frac{1}{b_i}\), as:
$$\sum_{i=k}^{\infty} \left(a_i-a_{i+1}\right)$$
Then any partial sum can easily be evaluated:
$$\sum_{i=k}^{n} \left(a_i-a_{i+1}\right) = a_k-a_{n+1}$$
so we have
$$\lim_{n \rightarrow \infty} \sum_{i=k}^{n} \left(a_i-a_{i+1}\right) = \lim_{n \rightarrow \infty} \left(a_k-a_{n+1}\right) = a_k - \lim_{n \rightarrow \infty} a_{n+1}$$

There's no need to restrict the series to being monotone or real.
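From this limit formula, the identity in the opening post holds exactly when \(\lim_{n \rightarrow \infty} a_{n+1} = 0\), i.e. when \(\frac{1}{b_{n}} \to 0\). As a worked example (the choice \(b_j = j\), so \(a_i = \frac{1}{i}\), is illustrative and not one specified in the thread):

$$\frac{1}{b_j}-\frac{1}{b_{j+1}}=\frac{1}{j}-\frac{1}{j+1}=\frac{1}{j(j+1)},\qquad \sum_{j=k}^{n}\frac{1}{j(j+1)}=\frac{1}{k}-\frac{1}{n+1}\longrightarrow\frac{1}{k}=\frac{1}{b_k}\quad\text{as } n \rightarrow \infty.$$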
 