Twinbee
With the DFT/FFT/STFT or similar Fourier transforms, there's always a trade-off between frequency and time resolution. It's like an 'uncertainty principle' for signal analysis.
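As an illustration of that trade-off, here's a minimal Python sketch (the tone frequencies and window lengths are arbitrary choices): two tones 10 Hz apart merge into one spectral peak with a short analysis window, but separate cleanly with a long one.

```python
import numpy as np

fs = 8000                       # sample rate (Hz)
t = np.arange(fs) / fs          # 1 second of signal
# Two tones only 10 Hz apart
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 450 * t)

def peak_count(signal, n):
    """Count spectral peaks above half the max magnitude in an n-sample window."""
    mag = np.abs(np.fft.rfft(signal[:n]))
    thresh = mag.max() / 2
    # local maxima above the threshold
    peaks = [i for i in range(1, len(mag) - 1)
             if mag[i] > thresh and mag[i] > mag[i - 1] and mag[i] > mag[i + 1]]
    return len(peaks)

# Short window: 8000/256 = 31.25 Hz bin spacing, so the 10 Hz gap is unresolvable
print(peak_count(x, 256))    # one merged peak
# Long window: 1 Hz bin spacing resolves both tones
print(peak_count(x, 8000))   # two distinct peaks
```

The price of the long window, of course, is that any change in the signal during that second is smeared across the whole analysis frame.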
I have a few questions regarding this:
1: With a theoretically infinitely fast processor, is it possible to obtain both good time AND frequency resolution? Which algorithm may help here?
2: How much better are wavelets for this sort of thing? If wavelets are better than the FFT/STFT, then could there be something better than even wavelets? What's the theoretical ceiling?
3: With the help of an incredibly fast CPU, one idea I thought of would be to analyze all possible sets of frequencies, amplitudes, and phase offsets of individual sine waves, mix them, and see which combination produces a result closest to a given window. Some signals/sounds may require only one or two sine waves to come close, whilst others may require hundreds or even thousands of mixed sine waves (each with its own amplitude, phase, and frequency) to come close. Would this whole idea get close to perfection for signal analysis?
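The idea in question 3 can be sketched as a greedy search: repeatedly find the single sinusoid that best explains what's left of the signal and subtract it (this resembles what's known as matching pursuit). The target signal and frequency grid below are just illustrative assumptions:

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
# Target: two sinusoids with different amplitudes and phases
target = 1.0 * np.sin(2*np.pi*50*t + 0.3) + 0.5 * np.sin(2*np.pi*120*t + 1.2)

freq_grid = np.arange(1, 500)   # candidate frequencies in Hz (below Nyquist)

def best_sine(residual):
    """Find the grid frequency whose sin/cos pair best explains the residual."""
    best = None
    for f in freq_grid:
        s, c = np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)
        # Least-squares coefficients for this frequency
        a = residual @ s / (s @ s)
        b = residual @ c / (c @ c)
        energy = a*a*(s @ s) + b*b*(c @ c)   # energy removed by this component
        if best is None or energy > best[0]:
            best = (energy, f, a, b)
    return best

residual = target.copy()
components = []
for _ in range(2):   # greedily extract two components
    energy, f, a, b = best_sine(residual)
    residual = residual - (a*np.sin(2*np.pi*f*t) + b*np.cos(2*np.pi*f*t))
    # convert the sin/cos pair back to (frequency, amplitude, phase)
    components.append((f, np.hypot(a, b), np.arctan2(b, a)))

print(components)              # recovered (freq, amplitude, phase) triples
print(np.abs(residual).max())  # near zero: two sines suffice for this target
```

The greedy version avoids the combinatorial explosion of testing every *set* of sinusoids at once, but it can still pick a slightly wrong component early and never fully recover, which hints at why the exhaustive version is appealing (and why it's so expensive).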
4: What's the difference between a DFT of a given window length and the STFT?
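To make question 4 concrete: the STFT is just a DFT applied repeatedly to successive windowed slices of the signal. A minimal sketch (the Hann window and 50% overlap are arbitrary choices here):

```python
import numpy as np

def stft(x, win_len, hop):
    """STFT = one windowed DFT per hop; a single DFT is just one column."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames]).T  # bins x frames

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)

spec = stft(x, win_len=512, hop=256)
print(spec.shape)   # (bins, frames) = (257, 30)
```

So a single DFT of the whole signal gives one spectrum with no time information, while the STFT trades frequency resolution (shorter windows) for a sequence of spectra over time.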
5: Since the STFT gives only a rough approximation of the original signal (with 'blurring' around each frequency found), how can it be so effective for pitch shifting, which it is commonly used for?
The reason for asking all this is that I'd love a future spectrogram VST to display a more accurate analysis of any sound.