I've been trying to figure out why it's standard to use the complex discrete Fourier transform instead of the real version. It's discussed a bit here: http://dsp.stackexchange.com/questions/1406/real-discrete-fourier-transform. As far as I can tell the argument is mostly about efficiency; am I wrong? I also notice that a lot of software actually uses the real version where it can. For instance, ffmpeg tends to use the real version, as far as I can tell.

It seems to me that before computers, complex numbers mainly served as an aid to hand computation: it's easier to manipulate sinusoids as powers of e than to grind through sines and cosines directly. Is that analysis completely wrong?

Specifically, in the case of Fourier transforms: do I lose information, or get reduced accuracy, if I use the real version instead of the complex version?

Thanks.
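Edit: here's a quick numerical sanity check I tried, using NumPy's rfft as a stand-in for "the real version" (that choice is my assumption, not something from the linked thread). For a real input, the complex DFT is conjugate-symmetric, so the second half of the spectrum is redundant and the real transform keeps everything:

```python
import numpy as np

# A real-valued test signal of length N = 8.
x = np.random.default_rng(0).standard_normal(8)

# Full complex DFT: N complex bins.
X_full = np.fft.fft(x)

# Real-input DFT: only the first N//2 + 1 bins are returned.
X_real = np.fft.rfft(x)

# The real transform matches the first half of the complex one...
assert np.allclose(X_full[:len(X_real)], X_real)

# ...and the discarded half is just the conjugate mirror:
# X[N-k] == conj(X[k]) for k = 1..3 when N = 8.
assert np.allclose(X_full[5:], np.conj(X_full[1:4][::-1]))

# Round trip through the real transform recovers the signal
# to floating-point precision, so no information is lost.
x_back = np.fft.irfft(X_real, n=len(x))
assert np.allclose(x, x_back)
```

So at least in this experiment the real version seems to lose nothing for real inputs; whether that generalizes is part of what I'm asking.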