Potential Fourier Analysis Metrics?

SUMMARY

This discussion focuses on the application of weighted Fourier Transforms for processing sensor input data in neural networks. The proposed metrics include continuous and discrete Fourier integrals that emphasize recent inputs through exponential weighting. The continuous metric is defined as F_c(jω, α, t) and the discrete metric as F_d(jω, α, n), both designed to enhance feature selection for neural network training. The conversation highlights the potential for improved learning by utilizing frequency-based metrics derived from Fourier Analysis.

PREREQUISITES
  • Understanding of Fourier Transforms, both continuous and discrete
  • Familiarity with neural network architectures and input processing
  • Knowledge of feature selection techniques in machine learning
  • Basic proficiency in mathematical concepts related to exponential functions
NEXT STEPS
  • Research "Weighted Fourier Transforms in Neural Networks"
  • Explore "Feature Selection Techniques in Deep Learning"
  • Study "Fourier Analysis Applications in Signal Processing"
  • Investigate "Neural Network Input Processing Strategies"
USEFUL FOR

Data scientists, machine learning engineers, and researchers interested in optimizing neural network performance through advanced input processing techniques.

X89codered89X
My friend looked at this post and told me it's beyond confusing, so let me clarify.

Suppose I have a neural network connected to various sensors. How would I best process the input data from the sensors so that the network can learn from it? I'm assuming my network has many, many inputs, so perhaps I could feed in several types of processed input from my sensors. Forget about processing-power limitations for now. I'm simply asking: what types of frequency-domain input would be useful for a neural network to learn from?

I thought of one metric: weighting recent inputs more heavily in a Fourier integral, as below.

Continuous:

[itex]{\bf{F}}_c(j\omega,\alpha,t) = \int^t_{t_0} f(\tau)e^{(\tau -t)\alpha}e^{-j\omega\tau}d\tau, \; \alpha >0[/itex]

and discrete (which I would use on a computer):

[itex]{\bf{F}}_d(j\omega,\alpha,n) = T\sum^n_{k=k_0} f_ke^{[k-n]\alpha T}e^{-j\omega Tk}, \; \alpha >0[/itex]

The idea is that these are weighted Fourier transforms: since [itex]\tau -t \leq 0[/itex] and [itex]k-n \leq 0[/itex], the exponential weights satisfy [itex]e^{(\tau -t)\alpha} \leq 1[/itex] and [itex]e^{(k -n)\alpha T} \leq 1[/itex], so the most recent terms are weighted exponentially more.
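In code, a minimal NumPy sketch of the discrete metric might look like the following (the function name `weighted_dft` and its signature are just illustrative, not from any library):

```python
import numpy as np

def weighted_dft(f, omega, alpha, T, n=None):
    """Exponentially weighted discrete Fourier metric F_d(jw, alpha, n).

    Computes T * sum_{k=0}^{n} f_k * exp((k - n) * alpha * T) * exp(-1j * omega * T * k).
    The newest sample (k = n) gets weight 1; older samples decay exponentially.
    """
    if n is None:
        n = len(f) - 1
    k = np.arange(n + 1)
    weights = np.exp((k - n) * alpha * T)   # all <= 1, largest at k = n
    phases = np.exp(-1j * omega * T * k)    # ordinary DFT kernel
    return T * np.sum(np.asarray(f[: n + 1]) * weights * phases)
```

With [itex]\alpha = 0[/itex] this reduces to a plain (scaled) DFT term; as [itex]\alpha[/itex] grows, only the most recent samples contribute.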

Then, as network inputs, I could feed in values of my weighted Fourier transforms at several times and several frequencies, giving a whole matrix of inputs from this frequency metric.

I could even do the same weighting for the time-domain inputs themselves.

Continuous:
[itex]{\bf{f}}_c(\alpha,t,\tau) = f(\tau)e^{(\tau -t)\alpha}[/itex]

Discrete:
[itex]{\bf{f}}_d(\alpha,n,k) = f_ke^{(k-n)\alpha T}[/itex]

Then I can get a vector of inputs by feeding in the last several sample values of this weighted metric.
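A rough NumPy sketch of that weighted sample vector (again, `weighted_samples` is an illustrative name, not an established API) could be:

```python
import numpy as np

def weighted_samples(f, alpha, T, length):
    """Last `length` samples of f, weighted by exp((k - n) * alpha * T).

    The newest sample keeps weight 1; each step into the past decays
    by a factor exp(-alpha * T), matching f_d(alpha, n, k) above.
    """
    f = np.asarray(f, dtype=float)
    n = len(f) - 1                           # index of the newest sample
    k = np.arange(n - length + 1, n + 1)     # indices of the last `length` samples
    return f[k] * np.exp((k - n) * alpha * T)
```

This vector could then be fed directly into the network's input layer alongside the frequency-domain matrix.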

Would this be a good "feature selection"?
 
There are labs out there already attempting something very similar to what you are describing; analyzing a neural signal with the Fourier transform is nothing new. The way I envisioned it back in the day, each part of the network responds to a different frequency, so after the Fourier analysis you would build a filter (input and output) specific to that part of the network. There is plenty of example code out there for that.