Reading through a real analysis textbook I noticed that [itex]\sum 1/k[/itex] diverges but [itex]\sum 1/k^{1+\epsilon}[/itex] converges for all [itex]\epsilon > 0[/itex]. This is confusing because the terms [itex]1/k[/itex] eventually become just as small as the terms of [itex]\sum 1/k^{1+\epsilon}[/itex], so it seems the first series should also converge. It may take much longer, but surely the terms eventually become so small that adding more on becomes pointless. I also saw that one can prove this using the integral test, but the integral test isn't the 'decider'; there must be a more rigorous reason. Essentially what I'm asking is: who or what mathematical concept determined the required rate of decay for convergence? It seems arbitrary to me.
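
For reference, here is the standard grouping argument I've seen for why [itex]\sum 1/k[/itex] diverges even though its terms go to zero (I'm sketching it from memory, so hopefully I have it right):

[tex]
\sum_{k=1}^{\infty} \frac{1}{k} = 1 + \frac{1}{2} + \underbrace{\frac{1}{3} + \frac{1}{4}}_{\ge \frac{1}{2}} + \underbrace{\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}}_{\ge \frac{1}{2}} + \cdots \;\ge\; 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots
[/tex]

Each block of [itex]2^n[/itex] terms contributes at least [itex]1/2[/itex], so the partial sums grow without bound even though the individual terms shrink to zero. I can follow this, but it doesn't resolve my confusion about why [itex]1+\epsilon[/itex] in the exponent is enough to tip the series over into convergence.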