Physical insight into integrating a product of two functions

SUMMARY

The discussion centers on the physical interpretation of integrating the product of two functions, specifically in the context of Fourier transforms and Riemann-Liouville fractional derivatives. It highlights that integrating a product, such as \(\int f(x) g(s,x) dx\), can be viewed as transforming a function \(f(x)\) into a continuous function \(C(s)\) that represents coefficients in terms of a basis of functions \(g(s,x)\). The power law function \(g(\epsilon,x) = (x - \epsilon)^{n - \alpha - 1}\) is examined for its potential as a continuous basis function, raising questions about its orthonormality and the possibility of reconstructing \(f(\epsilon)\) from \(C(x)\).

PREREQUISITES
  • Understanding of Fourier transforms and their applications
  • Familiarity with Riemann-Liouville fractional derivatives
  • Knowledge of inner product spaces and orthonormal basis functions
  • Concept of continuous transformations in functional analysis
NEXT STEPS
  • Explore the properties of Riemann-Liouville fractional derivatives in detail
  • Study the concept of orthonormal basis functions in functional spaces
  • Investigate the implications of continuous transformations in signal processing
  • Learn about the reconstruction of functions from their coefficients in various bases
USEFUL FOR

Mathematicians, physicists, and engineers interested in advanced calculus, signal processing, and functional analysis, particularly those working with Fourier transforms and fractional calculus.

Buddhapus17
I was wondering what the physical insight is of integrating a product of two functions. When we do that for a Fourier transform, we decompose a function into its constituent frequencies, and that's because the complex exponential in the transform can be seen as a weighting function that picks out the frequency content of the original signal. I'm not sure if this kind of logic is accurate.

In particular, what caused this question is the consideration of the Riemann-Liouville fractional derivative (definitions can be seen here: http://www.hindawi.com/journals/mpe/2014/238459/). We are integrating a function against a power law \((x - \epsilon)^{n - \alpha - 1}\), so it's a similar case. Does the power law "weigh" the information of the function it multiplies? Is there another way to think about it?
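For reference, the standard Riemann-Liouville fractional derivative of order \(\alpha\), taking \(n - 1 \le \alpha < n\) and a lower limit \(a\), is

\[
{}_{a}D_{x}^{\alpha} f(x) \;=\; \frac{1}{\Gamma(n-\alpha)}\, \frac{d^{n}}{dx^{n}} \int_{a}^{x} (x-\epsilon)^{\,n-\alpha-1}\, f(\epsilon)\, d\epsilon ,
\]

so the integral inside it has exactly the form \(\int f(\epsilon)\, g(\epsilon, x)\, d\epsilon\) with \(g(\epsilon, x) = (x-\epsilon)^{n-\alpha-1}\), up to the constant \(1/\Gamma(n-\alpha)\).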

Any help is appreciated!
 
Buddhapus17 said:
integrating a product of two functions.

You could call \(\int f(x) g(x)\, dx\) "integrating the product of two functions", but the situation you are wondering about is of the form \(\int f(x) g(s,x)\, dx\).

Consider functions defined on a finite number of values of x (e.g. \(\{f(1), f(2), \dots, f(10)\}\)); then the analogue of \(\int f(x) g(x)\, dx\) is the inner product \(\sum_k f(k) g(k)\). If you had a set of orthonormal basis vectors (functions) \(\{g_1(x), g_2(x), \dots, g_n(x)\}\), then to represent a function \(f(x)\) in that basis you would use the inner product to compute coefficients \(c_s = \sum_k f(k) g_s(k)\) (which is analogous to \(\int f(x) g(s,x)\, dx\)). The vector of coefficients \((c_1, c_2, \dots, c_n)\) can be thought of as a function \(c(s)\) defined on the finite number of values \(s = 1, 2, \dots, n\).
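Here is a minimal numerical sketch of that finite-dimensional picture; the particular orthonormal basis used (a cosine-type basis) is just one convenient choice for illustration.

import numpy as np

# Build an orthonormal basis {g_s} on n points, compute the coefficients
# c_s = sum_k f(k) g_s(k), and reconstruct f from those coefficients.

n = 10
k = np.arange(n)

# Rows of G are the basis vectors g_s(k); this normalization makes G orthonormal.
G = np.array([np.cos(np.pi * (k + 0.5) * s / n) for s in range(n)])
G[0] *= np.sqrt(1.0 / n)
G[1:] *= np.sqrt(2.0 / n)

# A sample "function" f defined on the n points.
f = np.sin(2.0 * np.pi * k / n) + 0.3 * k

# Coefficients: the discrete analogue of C(s) = \int f(x) g(s,x) dx.
c = G @ f

# Reconstruction f(k) = sum_s c_s g_s(k); this recovers f exactly because G is orthonormal.
f_back = G.T @ c

print(np.allclose(f, f_back))   # prints True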

So with the proper choice of a function g(s,x), one mathematical interpretation of \(\int f(x) g(s,x)\, dx\) is a transformation of a function f(x) to a continuous function C(s) that, loosely speaking, represents the coefficients used in expressing f(x) in terms of an (infinite) set of basis functions g(s,x), where s is a "continuous" index for the basis functions.
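The Fourier transform is the standard example of this picture: taking \(g(s,x) = e^{-isx}\) (with one common normalization convention) gives

\[
C(s) = \int_{-\infty}^{\infty} f(x)\, e^{-isx}\, dx , \qquad
f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} C(s)\, e^{isx}\, ds ,
\]

so \(C(s)\) is precisely the coefficient of the basis function \(e^{isx}\) in the expansion of \(f\).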
 
Stephen Tashi said:
So with the proper choice of a function g(s,x), one mathematical interpretation of \(\int f(x) g(s,x)\, dx\) is a transformation of a function f(x) to a continuous function C(s) that, loosely speaking, represents the coefficients used in expressing f(x) in terms of an (infinite) set of basis functions g(s,x), where s is a "continuous" index for the basis functions.

So to clarify, in the case where we have an \(f(\epsilon)\) and \(g(\epsilon, x)\) with \(g(\epsilon, x) = (x - \epsilon)^{n - \alpha - 1}\), the power law is a basis function with a continuous index x, and by integrating \(f(\epsilon)\, g(\epsilon, x)\) we find continuous coefficients that represent the function \(f(\epsilon)\) in the basis of that power law.
 
Buddhapus17 said:
and by integrating \(f(\epsilon)\, g(\epsilon, x)\) we find continuous coefficients that represent the function \(f(\epsilon)\) in the basis of that power law.

I don't know if the particular function \(g(\epsilon,x) = (x - \epsilon)^{n - \alpha - 1}\) behaves like an orthonormal family of basis functions. That is, if \(C(x) = \int f(\epsilon) g(\epsilon,x)\, d\epsilon\), can \(f(\epsilon)\) be reconstructed by an inverse transform \(f(\epsilon) = \int C(x) g(\epsilon,x)\, dx\)? Or \(f(\epsilon) = \int C(x) h(\epsilon,x)\, dx\) for some other function \(h\)? If not, then \(C(x)\) isn't analogous to a set of coefficients in a linear combination.
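For comparison, in the Fourier case reconstruction works because the family \(g(s,x) = e^{isx}\) satisfies a continuous orthogonality relation,

\[
\int_{-\infty}^{\infty} e^{isx}\, \overline{e^{is'x}}\, dx = 2\pi\, \delta(s - s') ,
\]

and it is some relation of this kind (or an explicit inverse kernel \(h\)) that the power-law family would need for \(C(x)\) to play the role of expansion coefficients.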
 
