Fast Walsh Transform for Seismic Autocorrelation

  • Thread starter: manzana
SUMMARY

The discussion centers on implementing a real-time 512-point autocorrelation for seismic data using the fast Walsh transform (FWT) because of performance limitations on a low-power microcontroller. The user has explored alternatives such as the Wiener-Khinchin theorem and FFT methods in Scilab but encountered issues with performance and accuracy. The consensus suggests that the FWT can significantly reduce computational load compared to traditional methods, making it suitable for battery-operated devices. Replies emphasize the importance of testing against known-good methods to validate the implementation.

PREREQUISITES
  • Understanding of Fast Walsh Transform (FWT)
  • Familiarity with Wiener-Khinchin theorem
  • Experience with FFT algorithms
  • Basic knowledge of Scilab or similar computational tools
NEXT STEPS
  • Research Fast Walsh Transform algorithms and their implementations
  • Learn about the Wiener-Khinchin theorem and its applications in autocorrelation
  • Explore FFT optimization techniques for low-power devices
  • Investigate testing methodologies for validating autocorrelation outputs
USEFUL FOR

Engineers and developers working on seismic data processing, embedded systems programmers, and researchers focused on optimizing autocorrelation techniques for low-power applications.

manzana
I need to do a real-time 512-point autocorrelation for a seismic project, but my poor little Parallax computer is getting swamped by all the floating-point multiplies. The answer seems to be the fast Walsh transform. I bought some IEEE papers on the subject, but they are a little deep! Does anyone have any words of encouragement? According to Wiener-Khinchin, I can get the same effect by taking the FFT of the data, multiplying by the complex conjugate, and taking the inverse FFT. I tried this in Scilab, but it doesn't seem to really work. Any thoughts appreciated.
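For reference, the Wiener-Khinchin route can be sketched as below (a minimal NumPy sketch, not the poster's Scilab code; the key detail, and a common reason this approach "doesn't seem to work", is zero-padding to at least 2N-1 samples so the FFT's circular correlation matches the linear lagged-product result):

```python
import numpy as np

def autocorr_fft(x):
    """Linear autocorrelation via the Wiener-Khinchin theorem.

    Zero-pad to at least 2N-1 points so the circular correlation
    implied by the FFT does not wrap around (the usual pitfall).
    """
    n = len(x)
    nfft = 2 * n                      # >= 2n - 1; power of two keeps the FFT fast
    X = np.fft.rfft(x, nfft)          # FFT of zero-padded data
    r = np.fft.irfft(X * np.conj(X), nfft)  # IFFT of the power spectrum
    return r[:n]                      # keep non-negative lags 0..n-1

# Cross-check against the brute-force lagged product on a small signal.
x = np.arange(8.0)
ref = np.array([np.dot(x[:8 - k], x[k:]) for k in range(8)])
assert np.allclose(autocorr_fft(x), ref)
```

Without the padding (i.e. with `nfft = n`), the tail lags fold back into the head, which looks like an "accuracy" problem but is really circular wrap-around.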
 
When testing code, two approaches I like are feeding it an input whose correct output is known, and comparing its output against a known-good method.

If you have a known good method that is slow, test with that, possibly on a faster computer like a PC. Even slow Fourier methods are fast on modern PCs with only 512 data points.
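Both checks can be combined in a short PC-side harness, e.g. in NumPy (a sketch; the O(n^2) lagged product serves as the slow known-good reference, a constant signal supplies a known analytic answer, and `np.correlate` stands in for whatever fast implementation is under test):

```python
import numpy as np

def autocorr_direct(x):
    """Known-good but O(n^2) lagged-product autocorrelation."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) for k in range(n)])

n = 512

# Check 1: input with a known answer.  For x = all ones of length n,
# the lag-k autocorrelation is exactly n - k.
x = np.ones(n)
assert np.allclose(autocorr_direct(x), n - np.arange(n))

# Check 2: compare a fast candidate against the slow reference on
# random data (here np.correlate is the stand-in candidate).
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
fast = np.correlate(x, x, mode="full")[n - 1:]   # non-negative lags
assert np.allclose(fast, autocorr_direct(x))
```

Even the O(n^2) reference is instantaneous on a PC at n = 512, so it makes a fine oracle for validating the microcontroller code offline.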
 
Thanks for the reply. This is for a remote-sensing application with very limited power available. A PC is not an option, but I do use one to verify my code. I am using a very low-power (battery-supplied) microcontroller. I need to do a 512-point autocorrelation every 5 seconds, and the brute-force lagged-product technique is too processor-intensive. I know the autocorrelation can be done much more simply and quickly using Walsh, for example. I am sure Matlab uses some such technique, since it can do huge autocorrelations instantly. All of these techniques are similar, with butterflies and exotic orderings, but if you are a newcomer it is hard to see the forest for the trees. It is more like solving a Rubik's Cube than calculus. 'Preciate it...
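For what it's worth, the Walsh butterfly itself is simpler than the papers make it look: an in-place fast Walsh-Hadamard transform needs only additions and subtractions, O(n log n) of them, which is why it suits a small integer-only microcontroller. A minimal sketch (plain Python, natural/Hadamard ordering assumed; applying it twice returns n times the input, which makes a handy self-test):

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform (natural ordering).

    Uses only additions and subtractions; length must be a power of 2.
    Applying the transform twice yields n * original (H_n @ H_n = n*I).
    """
    a = list(a)
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):        # each block of 2h samples
            for j in range(i, i + h):       # butterfly: sum and difference
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

# Self-test via the involution property.
a = [1, 0, 1, 0, 0, 1, 1, 0]
assert fwht(fwht(a)) == [len(a) * v for v in a]
```

Note this is the plain transform, not the full Walsh-domain autocorrelation from the IEEE papers; it is meant only to show that the butterfly core is a few lines of integer arithmetic.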
 
