Is the theory of fractional-ordered calculus flawed?

  • #1
Let's talk about the function ##f(x)=x^n##.

Its derivative of ##k^{th}## order can be expressed by the formula:
$$\frac{d^k}{dx^k}x^n=\frac{n!}{(n-k)!}x^{n-k}$$
Similarly, the ##k^{th}## integral (integral operator applied ##k## times) can be expressed as:
$$\frac{n!}{(n+k)!}x^{n+k}$$
According to the Wikipedia article https://en.wikipedia.org/wiki/Fractional_calculus, we can replace the factorial with the Gamma function to get derivatives of fractional order.
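Written out with the Gamma function, the power rule for a derivative of arbitrary order ##\alpha## then reads:
$$\frac{d^{\alpha}}{dx^{\alpha}}x^{n}=\frac{\Gamma(n+1)}{\Gamma(n-\alpha+1)}\,x^{n-\alpha}$$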

So, applying the derivative of half order twice to ##\frac{x^{n+1}}{n+1}+C##, should get us to ##x^n##.

Applying the half-ordered derivative once gives:
$$\frac{d^{1/2}}{{dx^{1/2}}}\left(\frac{x^{n+1}}{n+1}+Cx^0\right)=\frac{1}{n+1}\frac{\Pi(n+1)}{\Pi(n+1/2)}x^{n+1/2}+C\frac{1}{\Pi(-1/2)}x^{-1/2}$$
where ##\Pi(x)## is the generalization of the factorial function, and ##\Pi(x)=\Gamma(1+x)##.

Again, applying the half-ordered derivative gives:
$$\frac{1}{n+1}\frac{\Pi(n+1)}{\Pi(n)}x^n+\frac{C}{\Pi(-1)}x^{-1}=x^n$$
which works out fine because ##\frac{C}{\Pi(-1)}\rightarrow 0##. So the derivative works well, but that's not the case with fractional-order integration.
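As a quick numerical check (a minimal sketch, assuming nothing beyond math.gamma from the Python standard library; the ##C##-term is omitted since its coefficient ##1/\Pi(-1)## is ##0##, as noted above):

Python:
from math import gamma

def half_derivative_coeff(p):
    # One half-order derivative of x**p: Gamma(p+1)/Gamma(p+1/2) * x**(p-1/2)
    return gamma(p + 1) / gamma(p + 0.5), p - 0.5

n, x = 3, 2.0

c1, p1 = half_derivative_coeff(n)    # first half-derivative of x**n
c2, p2 = half_derivative_coeff(p1)   # second half-derivative
print(c1 * c2 * x**p2)               # 12.0
print(n * x**(n - 1))                # 12.0, the ordinary first derivative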

Applying the half-ordered integral operator twice to ##x^n## should give us ##\frac{x^{n+1}}{n+1}+C##. Applying the half-ordered integral once means finding a function whose half-ordered derivative is ##x^n##. So, applying it once gives:
$$\frac{\Pi(n)}{\Pi{(n+1/2)}}x^{n+1/2}+C\frac{1}{\Pi(-1/2)}x^{-1/2}$$

Applying the half-ordered integral to this function again means finding a function whose half-ordered derivative is this function. So, applying the half-integral operator a second time gives:
$$\frac{x^{n+1}}{n+1}+C+C'\frac{1}{\Pi(-1/2)}x^{-1/2}\neq \frac{x^{n+1}}{n+1}+C$$
where ##C'## is another constant. So why does this additional term containing ##C'## get introduced? Is the theory of fractional derivatives flawed? Is there any way to end up with a single constant ##C## after applying the half-integral operator twice?
 

Answers and Replies

  • #2
I mistakenly posted this in General Math. How do I transfer it to the Calculus forum?
 
  • #3
Stephen Tashi
Science Advisor
I mistakenly posted this in General Math. How do I transfer it to the Calculus forum?
You can use the "report" feature to report your own original post. In the reasons for reporting, say "Posted in wrong section. Please move the thread to the Calculus section". (The report feature isn't exclusively for reports about posts that are inflammatory or scandalous etc.)
 
  • #4
pwsnafu
Science Advisor
Applying the half-ordered integral operator twice to ##x^n## should give us ##\frac{x^{n+1}}{n+1}+C##.
No it shouldn't. There should be no +C because fractional calculus is a theory of definite integrals.
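For reference, the Riemann-Liouville fractional integral (the form given in the Wikipedia article linked in post #1) is a definite integral taken from a fixed lower limit ##a##:
$$\left(I_a^{\alpha}f\right)(x)=\frac{1}{\Gamma(\alpha)}\int_a^x (x-t)^{\alpha-1}\,f(t)\,dt,\qquad \alpha>0$$
so there is no place for an arbitrary constant of integration to enter.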
 
  • #5
Stephen Tashi
Science Advisor
Applying the half-ordered integral operator twice to ##x^n## should give us ##\frac{x^{n+1}}{n+1}+C##.
That isn't one of the assumptions used in fractional calculus.

Do you consider it a flaw of ordinary calculus that applying anti-differentiation twice to ##x^1## fails to produce ##x^3/6## and instead produces ##x^3/6 + Cx + C'##?

The definition of "anti-differentiation" in ordinary calculus is often stated as a process that does not produce a unique function. Instead it produces a family of functions. The process-oriented definition of "anti-differentiate ##f(x)##" is to state a family (i.e. a set) of functions ##\mathbb{F}## such that each member ##F## of the family has a derivative equal to ##f(x)##. For example, the statement ##\int x \ dx= x^2/2 + C## amounts to defining ##\int x\ dx## as a family of functions whose distinct members are defined by picking different values of ##C##.

To define anti-differentiation as a specific "operator", we must define it so it produces a unique answer. (An operator is a function whose domain is a set of functions and whose co-domain is a set of functions. A function must map each element in the domain to a unique element in the co-domain.) The usual way to define a unique anti-differentiation operator is to take a specific "reference point" (also called a "fiducial point" by some authors). For example, taking the value ##a## as the reference point, we can define the anti-derivative operator ##D_a^{-1}## by ##D_a^{-1} f(x) = \int_a^x f(t)\, dt##. That definition leaves no "undetermined constant" in the result because ##G(x) = D_a^{-1} f(x)## satisfies ##G(a) = 0##, so we have a "boundary value" that determines any constant that temporarily appears when we use the usual rules for anti-differentiation. (I.e. if ##F## is any antiderivative of ##f##, then ##D_a^{-1} f(x) = F(x) - F(a)##, so the specific constant is ##-F(a)## and the result is a specific function, not a family of functions.)

For example ##D_6^{-1} x = \int_6^x t\ dt = x^2/2 - 36/2##.

The usual approach in the fractional calculus is to use anti-derivative operators that are defined with respect to a specific reference point.
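As an illustration (a minimal sketch that only uses the power-rule coefficients with the reference point taken as ##a=0##), applying the half-order integral twice to ##x^n## reproduces ##\frac{x^{n+1}}{n+1}## exactly, with no leftover constant, because the operator forces the result to vanish at the reference point:

Python:
from math import gamma

def half_integral_coeff(p):
    # One half-order integral (reference point a = 0) of x**p:
    # Gamma(p+1)/Gamma(p+3/2) * x**(p+1/2)
    return gamma(p + 1) / gamma(p + 1.5), p + 0.5

n, x = 3, 2.0

c1, p1 = half_integral_coeff(n)     # first half-integral of x**n
c2, p2 = half_integral_coeff(p1)    # second half-integral
print(c1 * c2 * x**p2)              # 4.0
print(x**(n + 1) / (n + 1))         # 4.0, the single ordinary integral from 0 to x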

See page 5 of http://www.reed.edu/physics/faculty/wheeler/documents/Miscellaneous Math/Fractional Calculus/A. Fractional Calculus.pdf for an elaboration of those ideas.
 
  • #6
Thanks. That solved my issue. But why isn't integration in fractional calculus defined to give a family of functions, if having multiple constants in the end isn't much of a problem?
 
  • #7
Stephen Tashi
Science Advisor
Thanks. That solved my issue. But why isn't integration in fractional calculus defined to give a family of functions, if having multiple constants in the end isn't much of a problem?
If you look at various versions of the fractional calculus, you see that both fractional differentiation and fractional integration are defined as operators involving some reference point. (Sometimes the reference point is implicitly taken to be ##a = 0## or ##a = -\infty##.) So if you try to define the integration of ##f(x)## as the process "state all possible functions whose fractional derivative is ##f(x)##", you are still involved in considering reference points, since fractional differentiation itself involves a reference point.

So the first question we should ask is "Why is fractional differentiation defined in terms of a reference point?"

The definition of the derivative of ##f(x)## in ordinary calculus is referenced only to the value ##x##. By contrast, the various definitions of the fractional derivative of ##f(x)## involve (implicitly or explicitly) an additional value besides ##x##. I suppose the only way to see why that is so would be to try to create a definition of a fractional derivative that did not involve a reference point, and see what kind of trouble we get into when trying to make such a definition useful.
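A concrete illustration of the difference a reference point makes: with the Riemann-Liouville definition and reference point ##a##, the half-derivative of the constant function ##1## is
$$\frac{d^{1/2}}{dx^{1/2}}\,1=\frac{(x-a)^{-1/2}}{\Gamma(1/2)}=\frac{1}{\sqrt{\pi (x-a)}}$$
so taking ##a=0## reproduces the ##\frac{1}{\Pi(-1/2)}x^{-1/2}## term from post #1, while a different choice of ##a## gives a genuinely different function. The operator depends on the choice of reference point.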
 
