sophiecentaur said:
I agree with most of your post except for this bit. "Nonsense" implies not making any sense. In fact, the hearing system copes very well with phase-shift variations all over the audio spectrum. Think how the voice-signal processing in your mobile phone mangles what hits the microphone, yet you can not only get the words but recognise who's actually talking to you. 'Vocoding' gets away with murder.
Your hearing doesn't seem to care about the actual values of the time-varying sound pressure.
Edit: this all depends on what actual phase delays you're talking about. The anagram "hliTrler" involves shifts of several hundred milliseconds, whereas our mid-range perceived frequencies involve a time period of just a few ms.
Right, of course this is going to depend on the magnitude of the shift, and perhaps I've been a bit extreme in my examples.
The other relevant point is that notes of finite duration effectively show up as amplitude-modulated pulses, so if the envelope doesn't shift in time, a phase shift in the carrier signal (in this case, the higher-frequency tone of the note being heard) is unlikely to be meaningfully perceived. The issue is that the envelope itself has a series of frequency components of its own, and those would be subject to this hypothetical random shift. To illustrate, I've constructed the plot below.
Each of those signals has an identical amplitude spectrum. The top is the original. In the middle, only the note itself is phase-shifted while the envelope remains the same. The bottom has a random phase shift applied to every frequency component of the signal. The top two would sound the same; the bottom surely would not.
Note the drop in amplitude. Since this is a single pulse, the random phase shift distributes its power over time, lowering the peak amplitude in the time domain without reducing anything in the amplitude spectrum. If this were a signal composed of multiple pulses and frequencies spread over time, the effect would be much less pronounced.
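For anyone who wants to reproduce the effect, here is a minimal NumPy sketch of the experiment. The 440 Hz note, Gaussian envelope width, and sample rate are my own illustrative assumptions, not the values behind the actual plot: it builds a single tone-burst, randomises the phase of every FFT bin, and shows that the amplitude spectrum is untouched while the time-domain peak drops.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 8000                                           # sample rate (assumed)
t = np.arange(fs) / fs                              # 1 second of samples
envelope = np.exp(-0.5 * ((t - 0.5) / 0.01) ** 2)   # a single short pulse
original = envelope * np.sin(2 * np.pi * 440 * t)   # amplitude-modulated "note"

# Randomise the phase of EVERY frequency component, leaving magnitudes alone.
spectrum = np.fft.rfft(original)
phases = rng.uniform(0, 2 * np.pi, spectrum.size)
phases[0] = phases[-1] = 0.0            # keep DC/Nyquist real so the result is real
scrambled = np.fft.irfft(spectrum * np.exp(1j * phases), n=original.size)

# Identical amplitude spectrum...
assert np.allclose(np.abs(np.fft.rfft(scrambled)), np.abs(spectrum))

# ...but the pulse's power is now smeared across the whole second,
# so the peak amplitude in the time domain is far lower.
print(original.max(), scrambled.max())
```

Playing `original` and `scrambled` back would make the point audibly: same spectrum, completely different sound.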
Baluncore said:
A warm and wet inverse transform is not possible because the frequency information is encoded in different separated nerve fibres. Those parallel channels can be correlated. There is no IFT.
My point was not to suggest that the brain physically converts those nerve signals back into an electrical time signal representing the audio, but that you, as a conscious human, experience the audio as a time-varying signal. It is sensed as individual frequencies and experienced as something that varies in time, so in some ways that is an inverse Fourier transform. Maybe in a metaphysical sense more than a physical one, but my comment was intended to be illustrative rather than a technical discussion of brain physics.
Baluncore said:
The rate of the brain chemistry limits the frequency at which it is possible to correlate phase. The brain can correlate the crest of LF waves, or the rise of the envelope of HF waves. All that can really be done is to estimate the level of stimulation from the two ears to estimate the direction of the wave.
Right, but the issue is that phase changes across the whole spectrum can entirely scramble the temporal behavior of a signal, as illustrated above. For a signal of constant tones, the cilia in your ear aren't likely to notice if one or more tones are phase-shifted, but they will be vibrating at the wrong times if those tones start and stop, as you would expect in music or speech.
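To make that distinction concrete, here is a small sketch (the frequency, gate length, and sample rate are my own illustrative choices): the same random per-bin phase shift leaves a steady tone as just another steady tone, but smears a gated tone's energy into the silence after the gate.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 4000
t = np.arange(fs) / fs
steady = np.sin(2 * np.pi * 200 * t)   # 200 Hz: exactly 200 cycles, one FFT bin
gated = steady * (t < 0.25)            # the same tone, stopping after 250 ms

def scramble_phases(x):
    """Randomise the phase of every FFT bin without touching magnitudes."""
    spec = np.fft.rfft(x)
    ph = rng.uniform(0, 2 * np.pi, spec.size)
    ph[0] = ph[-1] = 0.0               # keep DC/Nyquist real
    return np.fft.irfft(spec * np.exp(1j * ph), n=x.size)

# A constant tone occupies a single bin, so a phase shift merely slides it
# in time: the result is still a unit-amplitude 200 Hz sine.
print(scramble_phases(steady).max())

# The gated tone's envelope spreads over many bins; random phases put
# vibration where there used to be silence (e.g. after t = 0.5 s).
print(np.abs(scramble_phases(gated)[t > 0.5]).max())
```

The second printed value is well above zero, i.e. the cilia tuned to 200 Hz would now be driven during what was originally a silent stretch.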
Baluncore said:
The cochlea is a systolic processor that does NOT scramble the order in time. A click will appear on all fibres at about the same time. The brain ignores those slight differences because phase is not required.
I think you may have missed what I was attempting to convey. I wasn't suggesting that the cochlea was somehow scrambling things in time. My point was that a random set of phase shifts across the entire spectrum of the signal will scramble the signal in time even before it reaches your ear.