Hi, I'm trying to understand digital audio on my own for brain-modelling research (I'm trying to wire input up to a neural net, which I've already done with a vision component using a camera). [edited...] I forgot to mention that the ultimate goal is to have microphone input and speaker output. I've made a similar inquiry in the computer->software forum, but that one was focused on the software rather than the DSP part, and got no reply. In this post I'd rather focus on the DSP/audio part. I use the Windows OS.

Questions:

[Q1] What is the format of a raw sound/wave buffer (one that can be played by the soundcard), and is it platform- and hardware- (soundcard-) dependent? In image buffers you have some ordering of RGBA or another colour scheme. Is it the same for a sound buffer, i.e. some sequence of [t, Hz, phase, amplitude]? Is this format PCM? Links to references would help.

[Q2] Is the raw sound buffer the same input that should go into an FFT?

[Q3] What is displayed on the 3rd axis of a spectrogram [t vs Hz vs ###]? Is it the PSD of the FFT, and is the PSD just log(sqrt(||(Re, Im)||)), or is it just the Re component, or something to do with amplitude?

[Q4] Is it better to perform audio analysis with an FFT or with wavelets [spectrogram vs scaleogram]? I came across a link that says wavelets are better for both frequency and time resolution.

Thanks for your time, I'm really new to DSP. Oh yeah, any links to newsgroups where I can ask these questions would also help. Most of my Google searches come up with how to play sound, and with window functions, sample rates, and other wave-file parameters, but never actually say what goes into a sound buffer.
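To make Q1 and Q2 concrete, here is my current understanding as a small Python sketch (stdlib only). The tone frequency, window length, and 16-bit mono little-endian format are just assumptions I picked for illustration; please correct me if this picture of "buffer = flat sequence of amplitude samples, which is also the FFT input" is wrong:

```python
import math
import struct

SAMPLE_RATE = 44100   # samples per second (CD quality, assumed)
FREQ = 440.0          # test tone (A4), arbitrary choice
N = 1024              # analysis window length, arbitrary choice

# My understanding: a raw PCM buffer is just a flat sequence of amplitude
# samples over time -- no (Hz, phase) tuples. For 16-bit mono PCM each
# sample is one signed little-endian integer.
samples = [int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
           for n in range(N)]
pcm_bytes = struct.pack('<%dh' % N, *samples)  # bytes the soundcard would consume

# And (Q2) the same sample sequence is what goes into the FFT. Naive O(N^2)
# DFT here for clarity; an FFT computes the same bins faster.
def dft_bin(x, k):
    re = sum(x[n] * math.cos(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    im = -sum(x[n] * math.sin(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    return complex(re, im)

# (Q3) one spectrogram column as I understand it: magnitude of each bin in dB.
def magnitude_db(x, k):
    m = abs(dft_bin(x, k))          # sqrt(Re^2 + Im^2)
    return 20 * math.log10(m + 1e-12)

# Bin nearest the test tone: 440 * 1024 / 44100 ~ bin 10.
peak_bin = round(FREQ * N / SAMPLE_RATE)
print(len(pcm_bytes))               # 1024 samples * 2 bytes each
print(magnitude_db(samples, peak_bin), magnitude_db(samples, peak_bin + 5))
```

If that's right, then a spectrogram would just be this repeated over successive (windowed) chunks of the buffer, with the dB magnitudes as the colour axis. Is that correct?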