Ramanujan Summation and ways to sum ordinarily divergent series

In summary, the inverse problem is to guess what problem an infinite series was a naive attempt to solve, and then to solve that problem another way.
  • #1
Hi All

Been investigating lately ways to sum ordinarily divergent series. Looked into Cesaro and Abel summation, but since every Cesaro-summable series is also Abel summable (though not conversely), I haven't worried much about Cesaro summation. Noticed Abel summation is really a regularization technique similar to the regularization used in renormalization: you insert a regulator x^n and then let x go to 1.
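Here is a minimal numerical sketch of that idea (my own illustration, not from any of the links): Abel-sum Grandi's series 1 - 1 + 1 - 1 + ... by evaluating ##\sum a_n x^n## and letting the regulator x go to 1 from below.

```python
# Abel summation as regularization: evaluate sum a_n * x**n for x < 1,
# then let the regulator x -> 1. For Grandi's series the value tends to 1/2.
def abel_partial(a, x, n_terms=100_000):
    """Truncated evaluation of sum_n a(n) * x**n."""
    return sum(a(n) * x**n for n in range(n_terms))

grandi = lambda n: (-1)**n             # 1 - 1 + 1 - 1 + ...
for x in (0.9, 0.99, 0.999):
    print(x, abel_partial(grandi, x))  # approaches 1/(1+x) -> 1/2
```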

It all looked good until you looked at series like 1+1+1+1... or 1+2+3+4..., ie where the terms are not 'oscillating' like, say, 1-2+3-4... - then it fails. Of course zeta function summation works and is related to renormalization, as worked out by Hawking:
https://projecteuclid.org/euclid.cmp/1103900982

There are hand-wavy ways to use Abel summation to handle 1+2+3+... - but it's a bit iffy IMHO, as explained in a heap of places on the internet, eg (the following has been, correctly, criticized a lot):


But even if you accept it, it fails miserably for 1+1+1+1...

Ok, you can use analytic continuation on the zeta function - but how did Ramanujan do it? I started looking into that. Amazingly, I found a good video on it:


The answer is easy actually - it's the constant term in the Euler-Maclaurin summation formula. Well, I'll be stonkered - it's that easy.

If you would like the full detail, see (note to other moderators: it's a copy, which the author makes freely available, of a textbook he wrote on it, so it meets our standards for a reference):
https://hal.univ-cotedazur.fr/hal-01150208v2/document

Thanks
Bill
 
  • #2
All these specific examples - but this post is in the mathematics section. We should first ask: what's a good mathematical definition of "a summation method"? What are its defining properties? Open the mind and mouth wide enough to swallow a grand generality.
 
  • #3
The way I think of the various summation techniques is that they aren't summation techniques at all but a kind of inverse problem.
  • You have some mathematical problem, to calculate some real-number quantity.
  • There is a straightforward, naive technique to try to solve the problem using infinite series, which would give the answer if the series converged.
  • Unfortunately, the series doesn't converge.
  • However, there is also an alternative way to solve the problem that doesn't use infinite series.
  • The alternative technique gives a computable answer to the original problem.
  • So you simply declare the value of the infinite series to be equal to the answer computed using the alternative technique.
So the inverse problem is this:
  • You're given a divergent series.
  • You guess a problem that it might be the naive attempt to solve.
  • Then you solve that problem in an alternative way.
There is no reason to suppose that this inverse problem has a unique answer, though. So I think it's incorrect to speak of the value of an infinite series. Instead, you're basically taking the infinite series, which doesn't actually have a value, guessing what problem it was an attempt to solve, and solving that problem instead. But there is no a priori reason why two different problems leading to the same divergent series must yield the same answer.

For example, if you have the infinite series 1+2+4+8+16+..., you might guess that it was a naive attempt to evaluate the function ##\frac{1}{1-x}## at the point ##x=2## by doing a series expansion in ##x## and then plugging in ##2## for ##x##. Similarly, the series 1+2+3+4+... can be thought of as the naive attempt to use the Dirichlet series for ##\zeta(s)## and then evaluate it at ##s=-1##. The series doesn't converge for this value of ##s##, but there are other ways to calculate ##\zeta(-1)##.
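A quick numerical check of those two guesses (my own illustration; it assumes the mpmath library, whose zeta is the analytically continued one):

```python
# The values the two alternative problems assign to the divergent series.
from mpmath import zeta

print(1 / (1 - 2))   # -1.0: what 1/(1-x) at x=2 assigns to 1+2+4+8+...
print(zeta(-1))      # -0.0833... = -1/12: what zeta assigns to 1+2+3+...
```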

I don't know of a series that has two plausible interpretations as naive solution attempts that lead to different answers, but I can't see why the solutions should be unique.
 
  • #4
We could say that, in general, a "real number summation technique" ##S## is a mapping from some subset of the set of sequences of real numbers into the real numbers. No need to restrict it to mappings that give the "correct" answers for finite sequences. (After all, we might enjoy spawning pop-sci articles with headlines like "Mathematicians show 1+1 = 73.8".)

The tricky part is defining further properties that a summation technique should have. For example, we might want ##S((a_1,a_2,a_3,\dots)) = S((a_1,a_2,a_3)) + S((a_4,a_5,\dots))##.
 
  • #5
Obviously, the inverse problem that I talk about doesn't give you a unique solution, in a sense. If you see the series 1+1+1+..., it might be a naive attempt to calculate ##\zeta(0)##, in which case the answer is -1/2. Or it might be a naive attempt to evaluate ##\frac{1}{1-x}## at ##x=1##, in which case the answer is ##\infty##. But the really interesting case would be two different finite answers that can both be justified in this inverse way. I don't know of an example.
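Those two readings are easy to see numerically (my own check, again assuming mpmath):

```python
# Two inverse-problem readings of 1+1+1+...
from mpmath import zeta

print(zeta(0))            # -0.5: the zeta-function reading
for x in (0.9, 0.99, 0.999):
    print(1 / (1 - x))    # the geometric reading diverges as x -> 1
```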
 
  • #6
Stephen Tashi said:
All these specific examples - but this post is in the mathematics section. We should first ask: what's a good mathematical definition of "a summation method"? What are its defining properties? Open the mind and mouth wide enough to swallow a grand generality.

The last link I gave gives the full mathematical detail - not specific examples.

What went before was just a motivating preamble.

But I stated its core - it's simply the constant part (ie the part independent of n) in the expansion of the partial sums ##a_1 + a_2 + \cdots + a_n##. It's just an application of the Euler-Maclaurin summation formula. Many proofs of this interesting result can be found - the link I gave has a full proof, plus quite a bit more.

Ramanujan's answer to your question is, with input from Hardy, the C(a) defined in the second video, ie ##C(a) = \int_0^a f(x)\,dx - \sum_{k=1}^{\infty} \frac{B_k}{k!} f^{(k-1)}(0)##, where ##f^{(k-1)}(0)## is the (k-1)th derivative evaluated at 0 and the ##B_k## are the Bernoulli numbers. Now if C(∞) exists, the series is convergent in the usual sense and the sum is C(∞) - otherwise, by definition, the infinite sum is C(0). Interestingly, for the series I know, it produces the same answer as zeta function summation - there might be some interesting theorems about why this is true and/or under what conditions it holds.
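To make that concrete, here is a minimal sketch (my own, not from the linked text) that evaluates ##C(0) = -\sum_{k \ge 1} \frac{B_k}{k!} f^{(k-1)}(0)## for polynomial f, using the ##B_1 = -\frac{1}{2}## convention. For ##f(x) = x## it reproduces the famous -1/12:

```python
# Ramanujan's constant C(0) for polynomial f(x) = sum_i c_i x^i.
# Since f^{(k-1)}(0) = (k-1)! * c_{k-1}, each term is -B_k * c_{k-1} / k,
# and the sum terminates. Uses the B_1 = -1/2 convention.
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """B_0..B_n via the standard recurrence sum_j C(m+1, j) B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(Fraction(-1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

def ramanujan_constant(coeffs):
    B = bernoulli_numbers(len(coeffs))
    return -sum(B[k] * Fraction(coeffs[k - 1]) / k for k in range(1, len(coeffs) + 1))

print(ramanujan_constant([0, 1]))        # f(n) = n  : 1+2+3+...  -> -1/12
print(ramanujan_constant([0, 0, 1]))     # f(n) = n^2: 1+4+9+...  -> 0
print(ramanujan_constant([0, 0, 0, 1]))  # f(n) = n^3: 1+8+27+... -> 1/120
```

The outputs match ##\zeta(-1) = -1/12##, ##\zeta(-2) = 0## and ##\zeta(-3) = 1/120## - the agreement with zeta function summation mentioned above.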

Thanks
Bill
 
  • #7
bhobba said:
The last link I gave gives the full mathematical detail - not specific examples.
Where, in that link, is a general definition given for a summation technique?
 
  • #8
stevendaryl said:
I don't know of a series that has two plausible interpretations as naive solution attempts that lead to different answers, but I can't see why the solutions should be unique.

Hardy, in his book - which I have had a look at, but didn't study - has some theorems about this, called Tauberian theorems. Here is a link about such things in more modern mathematical language, like topological spaces:
https://www3.nd.edu/~lnicolae/Enyart.pdf

I like Hardy and his 'chatty' style, but it's a bit dated.

Thanks
Bill
 
  • #9
Stephen Tashi said:
Where, in that link, is a general definition given for a summation technique?

It's the last link I gave:
https://hal.univ-cotedazur.fr/hal-01150208v2/document

I think it's an earlier version of the following text, which the author - as some authors are kind enough to do - has made freely available:
https://www.amazon.com/dp/3319636294/?tag=pfamazon01-20

And yes, a general definition is given, and much more - such as the big problem of what is called stability, which I will let people investigate. There seems to be a fair amount of info about it on the internet - and of course Hardy looks at it.

Thanks
Bill
 
  • #10
Here's the general idea of summing a divergent series as a kind of "inverse problem":
  1. You start with a divergent sum: ##\sum_{n=1}^\infty a_n##.
  2. You guess a parametrized family of analytic functions ##f_n(s)## with the property that ##\lim_{s \rightarrow 1} f_n(s) = a_n##.
  3. You find an analytic function ##F(s)## such that in some region of the complex plane, ##\sum_{n=1}^\infty f_n(s)## converges to ##F(s)##.
  4. You analytically continue the function ##F(s)## to find the value ##F(1)##.
  5. You define ##\sum_{n=1}^\infty a_n## to be ##F(1)##.
That would seem to cover Abel summation, Zeta regularization and Dirichlet regularization.
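As a concrete instance of the recipe (my own sketch; it assumes mpmath, whose zeta is already the analytic continuation): take ##a_n = n## and ##f_n(s) = n^{2-s}##, so ##f_n(1) = n## and ##F(s) = \zeta(s-2)## for Re(s) > 3; continuing to ##s = 1## gives ##\zeta(-1) = -1/12##.

```python
# Steps 2-5 for 1+2+3+... with f_n(s) = n**(2 - s), so F(s) = zeta(s - 2).
from mpmath import zeta, nsum, inf

F = lambda s: zeta(s - 2)
# Step 3: inside the convergence region (Re s > 3) the series really is F(s).
print(nsum(lambda n: n**(2 - 4.0), [1, inf]), F(4.0))
# Steps 4-5: mpmath's zeta is the analytic continuation; evaluate at s = 1.
print(F(1))   # -0.0833... = -1/12
```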

This article gives an answer to my question: two different ways to sum divergent series which give different finite answers: https://math.stackexchange.com/questions/2619740/zeta-regularization-vs-dirichlet-series
 
  • #11
Are you talking about the link https://hal.univ-cotedazur.fr/hal-01150208v2/document ? I don't see the word "stability" mentioned in that document.
 
  • #12
Two things you might want in a summation method:

  1. Stability: ##\sum_{n=0}^{\infty} a_n = a_0 + \sum_{n=0}^{\infty} a_{n+1}##
  2. Linearity: ##\sum_{n=0}^\infty a_n + c \sum_{n=0}^\infty b_n = \sum_{n=0}^\infty (a_n + c b_n)## (where ##c## is any real number)
 
  • #13
stevendaryl said:
Two things you might want in a summation method:

  1. Stability: ##\sum_{n=0}^{\infty} a_n = a_0 + \sum_{n=0}^{\infty} a_{n+1}##
  2. Linearity: ##\sum_{n=0}^\infty a_n + c \sum_{n=0}^\infty b_n = \sum_{n=0}^\infty (a_n + c b_n)## (where ##c## is any real number)

That's pleasing, but from the viewpoint of pure mathematics, one should define "a summation method" in general if those are to be defined as special properties of nice summation methods.

Also, the notation ##\sum_{n=k}^{\infty} a_n## for a summation method is misleading, since it implies ordinary addition. The general idea of a summation method is to map sequences of numbers to real numbers. So rather than using ##\sum_{n=k}^{\infty} a_n## ambiguously, it would be clearer to use notation that conveys this. Perhaps ##\sum_S( \{a\} )## for summation method "##S##" evaluated on sequence "##\{a\}##". (I don't see the point of always having an "##\infty##" present in the notation. Don't we want summation methods to apply to finite sequences too?)
 
  • #14
Stephen Tashi said:
That's pleasing, but from the viewpoint of pure mathematics, one should define "a summation method" in general if those are to be defined as special properties of nice summation methods.

Also, the notation ##\sum_{n=k}^{\infty} a_n## for a summation method is misleading, since it implies ordinary addition. The general idea of a summation method is to map sequences of numbers to real numbers. So rather than using ##\sum_{n=k}^{\infty} a_n## ambiguously, it would be clearer to use notation that conveys this. Perhaps ##\sum_S( \{a\} )## for summation method "##S##" evaluated on sequence "##\{a\}##". (I don't see the point of always having an "##\infty##" present in the notation. Don't we want summation methods to apply to finite sequences too?)

I guess it's a personal choice, but I don't think I would call something a "summation method" if it didn't reduce to the usual notion of summation for finite and convergent series.
 
  • #15
stevendaryl said:
I guess it's a personal choice, but I don't think I would call something a "summation method" if it didn't reduce to the usual notion of summation for finite and convergent series.

But pure mathematicians are always happy to say things like "{0} is a vector space" or "{0} is an additive group". So they would like to say "The function S that maps all sequences to 0 is a summation method". :wink:

Perhaps a way to dissuade them would be to begin by talking about a "measure" on the set of sequences of real numbers and define your concept of a summation method as a particular sort of measure. However, there is the difficulty that the properties of a measure are defined in terms of set unions and intersections, and the concept of a union of two infinite sequences is, as yet, undefined - at least with respect to producing another infinite sequence.
 
  • #16
stevendaryl said:
Here's the general idea of summing a divergent series as a kind of "inverse problem":
  1. You start with a divergent sum: ##\sum_{n=1}^\infty a_n##.
  2. You guess a parametrized family of analytic functions ##f_n(s)## with the property that ##\lim_{s \rightarrow 1} f_n(s) = a_n##.
  3. You find an analytic function ##F(s)## such that in some region of the complex plane, ##\sum_{n=1}^\infty f_n(s)## converges to ##F(s)##.
  4. You analytically continue the function ##F(s)## to find the value ##F(1)##.
  5. You define ##\sum_{n=1}^\infty a_n## to be ##F(1)##.

The quantification in that definition isn't completely clear. Do we start with an arbitrary divergent sum and find a family of functions ##f_n## that works (perhaps) only for that particular divergent sum? Or must the same family of functions work for whatever divergent sum might have been selected? It seems the condition ##\lim_{s \rightarrow 1} f_n(s) = a_n## says the family of functions works only for one particular divergent sequence.

That would seem to cover Abel summation, Zeta regularization and Dirichlet regularization.

To apply your definition, we'd have to say "We are doing Abel summation when we pick ##f_n## in ... such and such ... way". How do we say that precisely?
 
  • #17
Stephen Tashi said:
The quantification in that definition isn't completely clear. Do we start with an arbitrary divergent sum and find a family of functions ##f_n## that works (perhaps) only for that particular divergent sum? Or must the same family of functions work for whatever divergent sum might have been selected? It seems the condition ##\lim_{s \rightarrow 1} f_n(s) = a_n## says the family of functions works only for one particular divergent sequence.

To apply your definition, we'd have to say "We are doing Abel summation when we pick ##f_n## in ... such and such ... way". How do we say that precisely?

I don't know--something like this:

A summation technique is a map from infinite sequences of reals ##a_n## to infinite sequences of analytic functions ##f_n(s)##, with the special property that ##\lim_{s \rightarrow 1} f_n(s) = a_n##.

Then we say that a sequence ##a_n## is summable by the summation technique if
  1. There is an open set ##S## in the complex plane.
  2. There is an analytic function ##F(s)## that is defined on that set.
  3. For every point ##s## in that set, ##\sum_n f_n(s)## converges to ##F(s)##.
  4. There is a unique analytic continuation of ##F(s)## to a function that is defined when ##s=1##
In that case, we say that the value of the formal series ##\sum_n a_n## under that summation technique is ##F(1)##

So in answer to your question, every summation technique would have some set of series that are summable by it. My definition doesn't preclude the possibility that a technique might only be useful for one specific divergent series.
 
  • #18
Stephen Tashi said:
Are you talking about the link https://hal.univ-cotedazur.fr/hal-01150208v2/document ? I don't see the word "stability" mentioned in that document.

No, it doesn't (sorry for not being clear that that source doesn't discuss it) - it's only about Ramanujan summation, which allows the summation of series that are not stable. For example, stability would require 1+1+1+1... = 1 + (1+1+1+1...), ie -1/2 = 1 + (-1/2) = 1/2, a contradiction - so that series is not stable. It's one reason why it and zeta function summation are more powerful than Abel summation, which is stable.

An internet search will bring back quite a bit of information on stability.

Thanks
Bill
 
  • #19
stevendaryl said:
So in answer to your question, every summation technique would have some set of series that are summable by it. My definition doesn't preclude the possibility that a technique might only be useful for one specific divergent series.

Of course. That's why there are 'Tauberian theorems' about this stuff, eg (and no, from a quick scan it doesn't discuss Ramanujan summation):
https://carma.newcastle.edu.au/jon/tauber.pdf

But it does seem to be an active area of mathematical investigation.

Thanks
Bill
 
  • #20
bhobba said:
Hi All

Been investigating lately ways to sum ordinarily divergent series

(...)

Ok, you can use analytic continuation on the zeta function - but how did Ramanujan do it? I started looking into that. Amazingly, I found a good video on it:

== YOUTUBE VIDEO ==

The answer is easy actually - it's the constant term in the Euler-Maclaurin summation formula. Well, I'll be stonkered - it's that easy.

(...)

Thanks
Bill

That's something I've been wondering about for a while... Ramanujan famously saying "Wait, I can explain everything, don't send for the guys in white coats! Just work with me here, and I'll show you why -1/12 makes sense".

From this, it's obvious that he was not aware of the work of Euler and others who had already arrived at -1/12.

But once he had the chance to explain, how did he justify saying that the sum is "equal" to -1/12? Ramanujan didn't belong to the definition-oriented school where you are free to define things like "=" in new and creative ways (as long as you build a logically consistent structure). So how did he "motivate" this idea of discarding the integral term and pretending that the constant term is all there is in the sum on the right-hand side? Why not just say that there is an interesting, meaningful and useful way to partition the infinite sum into an integral plus a constant?

If I understand correctly, analytic continuation is really the only way to justify ignoring the integral and pretending that -1/12 is all there is. And Ramanujan didn't know about analytic continuation at that point, else why would he talk about lunatic asylums?
 
  • #21
He used the Euler-Maclaurin formula to write infinite sums in a different way. This leads to a constant that, for ordinarily convergent series, is the sum. He simply took that constant as the general definition of the sum.

It was just his intuition that led him to do it - he of course was not into formal math.

Hardy always urged caution in using Ramanujan summation.

Thanks
Bill
 
  • #22
Stephen Tashi said:
But pure mathematicians are always happy to say things like "{0} is a vector space" or "{0} is an additive group". So they would like to say "The function S that maps all sequences to 0 is a summation method". :wink:

Perhaps a way to dissuade them would be to begin by talking about a "measure" on the set of sequences of real numbers and define your concept of a summation method as a particular sort of measure. However, there is the difficulty that the properties of a measure are defined in terms of set unions and intersections, and the concept of a union of two infinite sequences is, as yet, undefined - at least with respect to producing another infinite sequence.
Konrad Knopp's classic book on infinite series and infinite products has a very good discussion of different "summability" methods, as well as some historical discussion of the first attempts to handle such problems.
 
  • #23
Since Borel seems not to have been mentioned yet...

I was under the impression that Borel summation is the most powerful. I.e., Borel dominates Abel, which dominates Cesaro.

Or is that outside the intended scope of this thread?
 
  • #24
strangerep said:
Since Borel seems not to have been mentioned yet...

It is not the most powerful - but it is still very powerful. I did a post explaining it in another thread. Need to head off now for physio, but will elaborate when I get back.

Ok, after being physically tortured, I think this sheds light on divergent sums in general, with Ramanujan summation as another example of what is going on, so to speak. IMHO that makes it in scope for this thread.

So let's go into Borel Summation.

To detail Borel summation: ##\sum a_n = \sum \frac{a_n}{n!} n!##, and ##n! = \Gamma(n+1) = \int_0^\infty t^n e^{-t}\,dt##, so ##\sum a_n = \sum_n \int_0^\infty \frac{a_n}{n!} t^n e^{-t}\,dt##. In general you can't interchange the sum and integral, but under some conditions you can, so we will formally interchange them and see what happens. That gives ##\sum a_n = \int_0^\infty \left( \sum_n \frac{a_n}{n!} t^n \right) e^{-t}\,dt##. This is called the Borel sum, and we will look at when it is the same as ##\sum a_n##.

If ##\sum a_n## is absolutely convergent, then by Fubini's theorem the sum and integral can be interchanged, and the Borel sum is the same as the normal sum. Consider the series ##S = 1 + x + x^2 + x^3 + \cdots##. It is absolutely convergent to ##\frac{1}{1-x}## for ##|x| < 1##. The Borel sum is ##\int_0^\infty \left( \sum_n \frac{x^n t^n}{n!} \right) e^{-t}\,dt = \int_0^\infty e^{t(x-1)}\,dt = \frac{1}{1-x}##, which is finite for all ##x < 1## - not only for ##|x| < 1##. In other words, where S is convergent in the usual sense it equals its Borel sum, but the series is only valid for ##|x| < 1##, while the Borel sum is the same expression, valid for more values of x. Borel summation has extended the set of x for which you get a sensible answer - in fact exactly the same answer. This is the characteristic of analytic continuation: if two analytic functions agree on a smaller region, the one defined on the larger region is the unique extension. Borel summation has extended the region where the series has a finite sum. Normal summation introduces unnecessary restrictions that Borel summation removes - at least in part. This of course works for similar series like ##1 + 2x + 3x^2 + 4x^3 + \cdots##, and it is left as an exercise to show its Borel and normal sums are the same, ie ##\frac{1}{(1-x)^2}##.
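A quick numerical check (my own illustration; assumes mpmath for the integral): for the geometric series, ##B(t) = \sum_n \frac{x^n t^n}{n!} = e^{xt}##, so the Borel sum is ##\int_0^\infty e^{-t} e^{xt}\,dt##.

```python
# Numerical Borel sum of 1 + x + x^2 + ...; here B(t) = exp(x*t).
from mpmath import quad, exp, inf

borel = lambda x: quad(lambda t: exp(-t) * exp(x * t), [0, inf])
print(borel(0.5), 1 / (1 - 0.5))   # inside |x| < 1: both 2.0
print(borel(-3), 1 / (1 - (-3)))   # outside |x| < 1: both 0.25
```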

There is also another way of looking at this, by introducing what's called the Borel exponential sum. Personally I don't use it much, but sometimes it's of some use. Let ##S_n = a_0 + a_1 + \cdots + a_n##, so ##\lim_{n \to \infty} S_n = S##, and let ##S(t) = \sum_n S_n \frac{t^n}{n!}##. The exponential sum is defined as ##\lim_{t \to \infty} e^{-t} S(t)##. Note that for each single term of the sum, ##\lim_{t \to \infty} e^{-t} S_n \frac{t^n}{n!} = 0##. Using that, it's not too hard (but a bit tricky, and not totally rigorous) to see that if ##\sum a_n## converges normally to S, then its exponential sum is also S. We divide the sum into two parts - the terms up to some very large N, and the rest. Although N is large, it is finite, so the first part contributes zero in the limit. But since N is large, ##S_n## in the rest of the sum can be taken as S, so FAPP the infinite part of the sum is ##S \lim_{t \to \infty} e^{-t} \sum_{n=N+1}^\infty \frac{t^n}{n!}##. However, since the terms before N+1 all contribute zero, the sum can be taken from 0 to infinity, giving ##S \lim_{t \to \infty} e^{-t} e^{t} = S##. Thus if ##\sum a_n## converges in the usual sense to S, then S is also the exponential sum.
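Here's a quick numerical look at that (my own sketch): ##e^{-t} S(t)## is a Poisson-weighted average of the partial sums, and it settles on the ordinary sum as t grows - shown for the convergent series ##\sum 1/2^n = 2##.

```python
# Borel exponential sum: e^{-t} * sum_n S_n t^n / n! for large t.
from math import exp

def exponential_sum(a, t, n_terms=400):
    partial, term, total = 0.0, exp(-t), 0.0   # term = e^{-t} t^n / n!
    for n in range(n_terms):
        partial += a(n)                         # S_n = a_0 + ... + a_n
        total += partial * term
        term *= t / (n + 1)
    return total

for t in (5, 20, 50):
    print(t, exponential_sum(lambda n: 0.5**n, t))   # -> 2.0 as t grows
```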

Now we will show something interesting - if the exponential sum S exists then the Borel Sum exists and is also S. However the reverse is not true.

Let ##B(t) = \sum_n a_n \frac{t^n}{n!} = a_0 + (S_1 - S_0)t + (S_2 - S_1)\frac{t^2}{2!} + (S_3 - S_2)\frac{t^3}{3!} + \cdots##. Hence ##B'(t) = S'(t) - S(t)##.

Suppose the exponential sum is S. Then ##S - a_0 = \left[ e^{-t} S(t) \right]_0^\infty = \int_0^\infty \frac{d}{dt}\left[ e^{-t} S(t) \right] dt = \int_0^\infty e^{-t}\left(S'(t) - S(t)\right) dt = \int_0^\infty B'(t)\, e^{-t}\,dt = \left[ e^{-t} B(t) \right]_0^\infty + \int_0^\infty e^{-t} B(t)\,dt = -a_0 + \int_0^\infty e^{-t} B(t)\,dt##. On cancelling ##a_0##, we end up with what was claimed: ##S = \int_0^\infty e^{-t} B(t)\,dt##, which is the Borel sum. We have also shown that if ##\sum a_n## normally converges to S, then the Borel sum is also S.

What this is saying is that the Borel sum is exactly the same for normally convergent sums. But if a sum is not normally convergent, Borel summation can still give an answer. Not only this, but if ##\sum a_n x^n## has any non-zero radius of convergence, the Borel sum is exactly the same as the normal sum inside the radius of convergence - it is an analytic continuation to a larger region of x. Analytic continuation is simply removing an unnatural restriction in the way the sum is written. So one way of viewing Borel summation is as removing an unnatural restriction in the way a series is written, so it can be expressed in a more natural way.

Looking at it this way we see that summing divergent series is simply analytic continuation of a function that is written in a restrictive form - analytic continuation allowing us to find the function that has not been restricted.

How does this work with Ramanujan summation? Well, take a function that depends on a parameter s - say the zeta function. It is not hard to calculate the Ramanujan sum for the zeta series: you get ##C(s) + R_n(s)##. If s > 1, then ##R_n## goes to zero and everything is fine. If not, ie ##R_n## does not converge to zero, we still know what the function must be for other s - it must be the same by analytic continuation. So C(s) is the analytic continuation of the zeta function for s < 1, since it agrees with it for s > 1. It's not a rigorous argument, but I am sure it can be made rigorous, eg we need to show C(s) is analytic.

So which is the most powerful? Well, Ramanujan summation will sum the zeta series where it diverges - Borel summation will not.

BTW, there is a lot of argument about whether 1+2+3+4+... really does equal -1/12. I had a lot of trouble with that one, and my view has changed a bit over time. Now, for me, it is really quite simple - the integers can be interpreted as part of the real line or of the complex plane. If you consider them just part of the real line, then the series can't be summed for |x| >= 1. In the complex plane you have analytic continuation, and it can be done. So it depends entirely on how you look at the problem.

I could say a bit about this and renormalization, but will limit myself to one comment. Sometimes it is said that renormalization is just another, trickier way of taking the limit in the infinite sum. Divergent-series summation is simply another way - namely, considering the terms in the complex plane - of taking that limit. In fact, Hawking showed that zeta function summation and dimensional regularization are basically equivalent. Now that is interesting.

Thanks
Bill
 
  • #25
bhobba said:
Been investigating lately ways to sum ordinarily divergent series.

( ... )

Ok, you can use analytic continuation on the zeta function - but how did Ramanujan do it? I started looking into that. Amazingly, I found a good video on it:
<<VIDEO>>
The answer is easy actually - it's the constant term in the Euler-Maclaurin summation formula.
At 03:48 in the second video in the OP, we have what the presenter calls the "remainder" term R, which he says "is very small". (Here is a capture from the video).

[screen capture from the video: the Euler-Maclaurin formula with its remainder term R]


Comparing this with this other source, https://hal.univ-cotedazur.fr/hal-01150208v2/document , I'd just like to confirm that if the summation over k goes to infinity (as it does in the above version) then we don't actually need R at all. So is it true that R is an error term that is non-zero only if we truncate the summation over k at some k = p?
 
  • #26
Swamp Thing said:
I'd just like to confirm that if the summation over k goes to infinity (as it does in the above version) then we don't actually need R at all. So is it true that R is an error term that is non-zero only if we truncate the summation over k at some k = p?

Actually, using a non-rigorous derivation, the R doesn't even appear.
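Something along these lines (a sketch, treating ##D = \frac{d}{dx}## as a formal symbol, so that ##e^{D} f(x) = f(x+1)## by Taylor's theorem):

##\sum_{k=0}^{n-1} f(k) = \frac{e^{nD} - 1}{e^{D} - 1} f(0) = \frac{1}{D}\left(e^{nD} - 1\right) \frac{D}{e^{D} - 1} f(0) = \int_0^n f(x)\,dx + \sum_{k \geq 1} \frac{B_k}{k!}\left(f^{(k-1)}(n) - f^{(k-1)}(0)\right)##

using the generating function ##\frac{D}{e^{D} - 1} = \sum_{k \geq 0} \frac{B_k}{k!} D^k##. No remainder term appears, because the expansion has been treated as exact rather than asymptotic.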


Note here the sum is from 0 to n-1.

To make the above rigorous, should that appeal, see:
http://www.kurims.kyoto-u.ac.jp/~kyodo/kokyuroku/contents/pdf/1155-7.pdf
But I personally would not worry. Arguments like that are used in physics and applied math all the time - if you get too caught up in rigor, you will find it takes up time for no gain in using the result to solve problems. But sometimes you just can't resist - I know that feeling only too well.

Have fun.

Thanks
Bill
 
  • #27
Just want to add this link, which covers Cesaro summation and analytic continuation (the latter, a bit simplified):
 
  • #28
PAllen said:
Just want to add this link, which covers Cesaro summation and analytic continuation (the latter, a bit simplified):

Because it depends on what "is" is. It does not mean standard convergence. I know you know this, but I think most of those who watch the video don't.
 
  • #29
WWGD said:
Because it depends on what "is" is. It does not mean standard convergence. I know you know this, but I think most of those who watch the video don't.
Well this video, unlike the numberphile video it debunks, is very clear on the distinctions between different types of summation. It actually does not introduce Ramanujan summation. Instead it covers analytic continuation, and the computation of the zeta function in terms of the eta function.
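For reference, the eta route the video uses is easy to check numerically (my own sketch; assumes mpmath): ##\zeta(s) = \frac{\eta(s)}{1 - 2^{1-s}}##, where the alternating series for ##\eta(s)## converges for Re(s) > 0.

```python
# Computing zeta from the eta (alternating zeta) function.
from mpmath import nsum, inf, zeta

eta = lambda s: nsum(lambda n: (-1)**(n - 1) / n**s, [1, inf])
s = 0.5
print(eta(s) / (1 - 2**(1 - s)), zeta(s))   # both ~ -1.4603545
```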
 
  • #30
PAllen said:
Well this video, unlike the numberphile video it debunks, is very clear on the distinctions between different types of summation. It actually does not introduce Ramanujan summation. Instead it covers analytic continuation, and the computation of the zeta function in terms of the eta function.
Ah, my bad for being lazy and making unwarranted assumptions.
 
  • #31
PAllen said:
Well this video, unlike the numberphile video it debunks, is very clear on the distinctions between different types of summation. It actually does not introduce Ramanujan summation. Instead it covers analytic continuation, and the computation of the zeta function in terms of the eta function.

Exactly. It's one of the better ones around: it makes clear that it's simply a matter of how we define infinite summation - and analytic continuation, going to the complex plane, is a very natural way of extending it. That is why nearly all the guff about it you find posted on the internet is wrong. An interesting exercise for the advanced, which sheds further light on it, is its relation to the Hahn-Banach theorem. Just as a start on that journey:
http://oak.conncoll.edu/cnham/Slides6.pdf

The reason Ramanujan summation works for summing divergent series is, as mentioned in the rather good Mathologer video, analytic continuation. Taking the zeta function as an example: it is fine for s > 1, where the C Ramanujan defines as the Ramanujan sum is the same as the usual sum. For other values the series is divergent in the usual sense, but C still exists and, provided it is analytic (which the Ramanujan sum is), by analytic continuation it must agree with other methods. The Hahn-Banach theorem approach provides another interesting way of looking at it.

Thanks
Bill
 
  • #32
bhobba said:
Exactly. It's one of the better ones around: it makes clear that it's simply a matter of how we define infinite summation - and analytic continuation, going to the complex plane, is a very natural way of extending it. That is why nearly all the guff about it you find posted on the internet is wrong. An interesting exercise for the advanced, which sheds further light on it, is its relation to the Hahn-Banach theorem. Just as a start on that journey:
http://oak.conncoll.edu/cnham/Slides6.pdf

The reason Ramanujan summation works for summing divergent series is, as mentioned in the rather good Mathologer video, analytic continuation. Taking the zeta function as an example: it is fine for s > 1, where the C Ramanujan defines as the Ramanujan sum is the same as the usual sum. For other values the series is divergent in the usual sense, but C still exists and, provided it is analytic (which the Ramanujan sum is), by analytic continuation it must agree with other methods. The Hahn-Banach theorem approach provides another interesting way of looking at it.

Thanks
Bill
Thank you, I will look into it, but it seems a bit confusing in that Hahn-Banach is used to extend linear/sublinear maps from subspaces into the "host" superspace, and I don't see how this applies to Taylor series, which are not linear.
 
  • #33
WWGD said:
Thank you, I will look into it, but it seems a bit confusing in that Hahn-Banach is used to extend linear/sublinear maps from subspaces into the "host" superspace, and I don't see how this applies to Taylor series, which are not linear.

I will not comment on Ramanujan summation itself, but regarding Hahn-Banach: in certain cases, the theorem can be used to extend multilinear forms as well. Given that the ##n##th term of a Taylor series (of a function defined on an open subset of a Banach space) is an ##n##-linear form (actually evaluated ##n## times at the same argument), I could imagine that Hahn-Banach has its uses there.
 
  • #34
bhobba said:
The reason Ramanujan summation works for summing divergent series is, as mentioned in the rather good Mathologer video, analytic continuation.
Perhaps you have seen this blog post by Terence Tao. If you haven't, you'll find it interesting, because Tao's aim here is to derive some of this stuff independently of analytic continuation - things that are normally considered accessible only by stepping off the real line and wandering around the complex plane.
https://terrytao.wordpress.com/2010...tion-and-real-variable-analytic-continuation/
 
  • #35
bhobba said:
To make the (use of symbolic manipulations of differential operator D) rigorous, should that appeal, see:
http://www.kurims.kyoto-u.ac.jp/~kyodo/kokyuroku/contents/pdf/1155-7.pdf
In my engineering math courses, we had to get used to the idea that you could write polynomials in D, and even transcendental functions of D like ##e^{Dh}##. At that time our priority was to pass the exams and get on with life, so no one spared any time to wonder how this actually works.

Now that I'm retired, I can afford to spend some time down these rabbit holes, purely as a hobby. Unfortunately, this article seems to be quite a bit beyond my grasp because it demands a certain level of understanding of abstract math.

I'm wondering how feasible the following approach would be, at least as a for-dummies picture:
Although "D" is not a number, it does take a function f(x) and give you f'(x), so in a sense we can think of D as a sort of number that represents the local value of f'(x)/f(x). If we plug that local ratio into a power series, then ##e^{Dh}## sort of makes sense. How far can we get if we try to run with this ball?

Edit:
Oops, the first problem is that ##D^2 f = f''##, which is not necessarily ##\left(\frac{f'}{f}\right)^2 f##.
 