Physical insight into integrating a product of two functions

Buddhapus17
I was wondering what the physical insight is behind integrating a product of two functions. When we do this for a Fourier transform, we decompose a function into its constituent frequencies, because the complex exponential ##e^{-i\omega x}## in the transform can be seen as a weighting function that picks out each frequency in the original signal. I'm not sure if this kind of logic is accurate.

In particular, what prompted this question is the Riemann-Liouville fractional derivative (definitions can be seen here: http://www.hindawi.com/journals/mpe/2014/238459/). We are integrating a function against a power law ##(x - \epsilon)^{n - \alpha - 1}##, so it's a similar case. Does the power law "weigh" the information of the function it multiplies? Is there another way to think about it?
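For concreteness, here is a rough numerical sketch (my own, not from the linked paper) of the inner integral in the Riemann-Liouville definition: the fractional integral ##I^\alpha f(x) = \frac{1}{\Gamma(\alpha)}\int_0^x (x-\epsilon)^{\alpha-1} f(\epsilon)\,d\epsilon##, which is what appears inside the derivative with ##\alpha## replaced by ##n - \alpha##. The test function ##f(t) = t## and the grid size are illustrative choices:

```python
import math

def rl_fractional_integral(f, x, alpha, n_steps=20000):
    """Riemann-Liouville fractional integral of order alpha:
    I^alpha f(x) = 1/Gamma(alpha) * integral_0^x (x - eps)^(alpha - 1) f(eps) d eps.
    The midpoint rule sidesteps the integrable singularity at eps = x."""
    h = x / n_steps
    total = 0.0
    for k in range(n_steps):
        eps = (k + 0.5) * h  # midpoint of the k-th subinterval
        total += (x - eps) ** (alpha - 1) * f(eps)
    return total * h / math.gamma(alpha)

# Known closed form for f(t) = t: I^alpha[t](x) = x^(1 + alpha) / Gamma(2 + alpha)
alpha, x = 0.5, 1.0
numeric = rl_fractional_integral(lambda t: t, x, alpha)
exact = x ** (1 + alpha) / math.gamma(2 + alpha)
```

You can see the "weighting" directly in the loop: values of ##f## near ##\epsilon = x## get multiplied by a large kernel value, so the kernel emphasizes the recent history of the function.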

Any help is appreciated!
 
Buddhapus17 said:
integrating a product of two functions.

You could call ##\int f(x) g(x)\,dx## "integrating the product of two functions", but the situation you are wondering about is of the form ##\int f(x) g(s,x)\,dx##.

Consider functions defined on a finite number of values of ##x## (e.g. ##\{f(1), f(2), \dots, f(10)\}##); then the analogue of ##\int f(x) g(x)\,dx## is the inner product ##\sum_k f(k) g(k)##. If you had a set of orthonormal basis vectors (functions) ##\{g_1(x), g_2(x), \dots, g_n(x)\}##, then to represent a function ##f(x)## in that basis you would use the inner product to compute the coefficients ##c_s = \sum_k f(k) g_s(k)## (which is analogous to ##\int f(x) g(s,x)\,dx##). The vector of coefficients ##(c_1, c_2, \dots, c_n)## can be thought of as a function ##c(s)## defined on the finite number of values ##s = 1, 2, \dots, n##.

So with the proper choice of a function ##g(s,x)##, one mathematical interpretation of ##\int f(x) g(s,x)\,dx## is as a transformation of the function ##f(x)## into a continuous function ##C(s)## that, loosely speaking, gives the coefficients used in expressing ##f(x)## in terms of an (infinite) set of basis functions ##g(s,x)##, where ##s## is a "continuous" index for the basis functions.
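The finite-dimensional version of this can be written out in a few lines. A minimal sketch, assuming an illustrative orthonormal basis (DCT-II-style cosine vectors; nothing here is specific to any particular transform): compute the coefficients ##c_s = \sum_k f(k) g_s(k)## as inner products, then verify that summing ##c_s g_s(k)## reconstructs ##f(k)##.

```python
import math

n = 8

def g(s, k):
    """Orthonormal DCT-II style basis vectors g_s(k) on k = 0..n-1."""
    scale = math.sqrt(1.0 / n) if s == 0 else math.sqrt(2.0 / n)
    return scale * math.cos(math.pi * s * (k + 0.5) / n)

f = [float(k * k) for k in range(n)]  # a sampled "function" f(k)

# Coefficients c_s = sum_k f(k) g_s(k): discrete inner products with the basis
c = [sum(f[k] * g(s, k) for k in range(n)) for s in range(n)]

# Reconstruction f(k) = sum_s c_s g_s(k); exact because the basis is orthonormal
f_rec = [sum(c[s] * g(s, k) for s in range(n)) for k in range(n)]
```

The integral transform is the same picture with the sums over ##k## and ##s## replaced by integrals over ##x## and a continuous index ##s##.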
 
Stephen Tashi said:
So with the proper choice of a function ##g(s,x)##, one mathematical interpretation of ##\int f(x) g(s,x)\,dx## is as a transformation of the function ##f(x)## into a continuous function ##C(s)## that, loosely speaking, gives the coefficients used in expressing ##f(x)## in terms of an (infinite) set of basis functions ##g(s,x)##, where ##s## is a "continuous" index for the basis functions.

So to clarify: in the case where we have ##f(\epsilon)## and ##g(\epsilon, x)## with ##g(\epsilon, x) = (x - \epsilon)^{n - \alpha - 1}##, the power law is a family of basis functions with a continuous index ##x##, and by integrating ##f(\epsilon)\, g(\epsilon, x)## over ##\epsilon## we find continuous coefficients that represent the function ##f(\epsilon)## in the basis of that power law.
 
Buddhapus17 said:
and by integrating ##f(\epsilon)\, g(\epsilon, x)## over ##\epsilon## we find continuous coefficients that represent the function ##f(\epsilon)## in the basis of that power law.

I don't know whether the particular function ##g(\epsilon,x) = (x - \epsilon)^{n - \alpha - 1}## behaves like an orthonormal family of basis functions. That is, if ##C(x) = \int f(\epsilon) g(\epsilon,x)\,d\epsilon##, can ##f(\epsilon)## be reconstructed by an inverse transform ##f(\epsilon) = \int C(x) g(\epsilon,x)\,dx##? Or ##f(\epsilon) = \int C(x) h(\epsilon,x)\,dx## for some other function ##h##? If not, then ##C(x)## isn't analogous to a set of coefficients in a linear combination.
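One quick way to probe this numerically (a rough sketch; the grid and the value of ##\alpha## are illustrative assumptions): discretize the kernel into a matrix ##G## and check whether ##G^T G = I##, which is the discrete version of "the same kernel inverts the transform".

```python
import math

# Discretize g(eps, x) = (x - eps)^(alpha - 1) on a small grid; the kernel
# is taken as zero for eps >= x, matching the integration range in C(x).
n, alpha = 6, 0.5
h = 1.0 / n
x = [(i + 1) * h for i in range(n)]      # output grid points
eps = [(j + 0.5) * h for j in range(n)]  # input grid points (midpoints)

# G[i][j] ~ g(eps_j, x_i) * sqrt(h), so G^T G discretizes the overlap integrals
G = [[(x[i] - eps[j]) ** (alpha - 1) * math.sqrt(h) if eps[j] < x[i] else 0.0
      for j in range(n)] for i in range(n)]

GtG = [[sum(G[i][r] * G[i][c] for i in range(n)) for c in range(n)]
       for r in range(n)]
deviation = max(abs(GtG[r][c] - (1.0 if r == c else 0.0))
                for r in range(n) for c in range(n))
# deviation comes out far from 0, so this discretized kernel is not
# orthonormal, and C(x) cannot be read as coefficients in an orthonormal
# expansion (an inverse may still exist via some other kernel h).
```

This is only a finite-grid heuristic, of course; it doesn't settle whether some other inverse kernel ##h(\epsilon, x)## exists in the continuum.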
 