How 2 Notes Combine on Piano Keyboard

  • Thread starter: pkc111
  • Tags: Keyboard, Notes
AI Thread Summary
A discussion explores how the human ear can perceive two notes played simultaneously on a piano, despite the notes combining into a single wave. The ear performs a Fourier decomposition, allowing it to identify individual frequencies within the complex waveform. Psychoacoustic factors, such as volume differences and frequency proximity, affect our ability to distinguish sounds. The cochlea's hair cells resonate at different frequencies, enabling the detection of multiple sound components simultaneously. Overall, the brain processes these auditory signals to discern pitch and other characteristics, demonstrating the complexity of sound perception.
pkc111
A friend of mine says they can tell which two notes are being played together on a piano keyboard. How can this be if the two notes combine to form a single wave (the sum of the two)?
 
The single wave is still a combination of the two frequencies of the keys. So if you were to look at the frequency content of the wave, it would look like the summation of the frequency content of the waves from each individual key. So as long as you have a way of processing and filtering in the frequency domain (which people can do) then your friend can live up to his or her claims. Still, we have our own set of psychoacoustic faults that make this difficult. For example, we have a hard time distinguishing sounds when they have a large difference in volumes. Sounds with frequencies close together would probably be harder to distinguish (although using the resulting beat may help in that).
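As a quick numerical illustration of the point above, here is a minimal NumPy sketch (frequencies rounded to whole hertz so they land on exact FFT bins; the two tones are approximately C4 and F4):

```python
import numpy as np

fs = 8000                        # sample rate in Hz (illustrative)
t = np.arange(fs) / fs           # one second of samples
f1, f2 = 262, 349                # ~C4 and ~F4, rounded to exact FFT bins

# The air carries only the single summed pressure wave...
wave = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# ...but its magnitude spectrum still shows two distinct peaks,
# one at each component frequency.
spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), d=1 / fs)

# The two largest spectral peaks recover the original frequencies.
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
```

The two peaks at 262 Hz and 349 Hz survive the summation intact; that invertibility is what the frequency-domain view buys you.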
 
Musicians can develop what I think is sometimes referred to as a "trained musical ear". Their hearing becomes much richer because they can detect subtleties of sound not perceptible to untrained ears.
 
http://calendar.arvo.org/9/3/12/images/fig01.gif

I still don't really get it, although I am sure you are right.

From the picture above (row A), to me the resultant wave does not seem to have the frequency of either wave (does it?), but somehow both frequencies can be detected by the ear at once?
 
The superposition of two harmonic oscillations can be written as a product of harmonic oscillations. E.g.,

\sin \alpha + \sin \beta = 2 \sin \left( \frac{\alpha+\beta}{2} \right) \cos \left( \frac{\alpha-\beta}{2} \right).

For two frequencies that are not too different, the first factor is thus a harmonic oscillation at nearly the common frequency of the two tones, while the second is of very low frequency, which leads to periodic variations in loudness (beats). That's why they depict these two signals in the second row. Of course, the correct equation as a function of time is obtained from the one above by setting

\alpha=\omega_1 t=2 \pi f_1 t, \quad \beta=\omega_2 t =2 \pi f_2 t,

while the formulae printed above the figures cannot be right, because they don't depend on time at all!
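The sum-to-product identity above is easy to check numerically; a small NumPy sketch:

```python
import numpy as np

# Verify sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2)
# at many random sample points.
rng = np.random.default_rng(0)
a = rng.uniform(-10.0, 10.0, 1000)
b = rng.uniform(-10.0, 10.0, 1000)

lhs = np.sin(a) + np.sin(b)
rhs = 2 * np.sin((a + b) / 2) * np.cos((a - b) / 2)

# With a = w1*t and b = w2*t and w1 close to w2, the sine factor
# oscillates near the common frequency while the cosine factor is
# the slow envelope that is heard as beats.
max_error = float(np.max(np.abs(lhs - rhs)))
```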
 
Thank you for your comments.
I'm still confused about how an ear can make out the two component waves shown in row A from the summed wave it receives (row A)?
 
While the temporal (time) picture of the combined waveform looks confusing, the spectral (frequency) picture of the combined waveform will distinctly demonstrate the two or more unique frequencies contributing to the waveform. Our brains do a lot of signal processing to the sounds we hear. The brain looks at things like dispersion, phase shift (between the sound at either ear), and frequency content to reveal the myriad of information that we perceive automatically. Things like identifying pitch, position of sources and so forth are examples of such processing.
 
The ear doesn't. The brain does.
 
Vanadium 50 said:
The ear doesn't. The brain does.

pkc111 said:
Thank you for your comments.
I'm still confused about how an ear can make out the two component waves shown in row A from the summed wave it receives (row A)?

See Vanadium 50's very important point above.

Anyway, the ear performs a Fourier decomposition of the incoming sound into its component frequencies. Essentially, it does this by having elements with different resonant frequencies (the details are actually very complicated, and still being worked out: http://www.cornell.edu/video/index.cfm?VideoID=698).

The notes on a piano are not pure tones; each is composed of many sinusoidal components, and even omitting the lowest frequencies from each note would still leave them perceptually separate.

If you play pure tones that are a semitone apart (neighbouring black and white piano keys), you will not hear them as separate notes: you hear one note, oscillating in loudness. Once the frequency difference reaches roughly one third of an octave, you begin to hear two notes, each of constant loudness. http://www.sfu.ca/sonic-studio/handbook/Critical_Band.html

I believe there is still no definitive understanding of the critical bands in the central nervous system. One theory is http://www.ncbi.nlm.nih.gov/pubmed/9237756.
 
  • #10
OK I can accept that the brain may be a giant computer that can do a Fourier analysis on waves to analyse their components.

I asked a science teacher today and they gave me another possibility that I wanted to put out there, i.e.:
"There are hairs in the cochlea which resonate at different frequencies (like antennae). The components of the summed wave are able to affect each antenna separately, according to which frequencies match which hair."
This sort of made sense, because I imagine radio receivers face the same problem as ears and brains. They would receive a "wave sum" of all the radio waves in the area and would therefore only have to be able to resonate at the frequency of one of the component waves.
Is this a correct analogy?

Thanks for your ideas.
 
  • #11
pkc111 said:
OK I can accept that the brain may be a giant computer that can do a Fourier analysis on waves to analyse their components.

I asked a science teacher today and they gave me another possibility that I wanted to put out there, i.e.:
"There are hairs in the cochlea which resonate at different frequencies (like antennae). The components of the summed wave are able to affect each antenna separately, according to which frequencies match which hair."
This sort of made sense, because I imagine radio receivers face the same problem as ears and brains. They would receive a "wave sum" of all the radio waves in the area and would therefore only have to be able to resonate at the frequency of one of the component waves.
Is this a correct analogy?

Thanks for your ideas.

Yes.

But please see http://www.sfu.ca/sonic-studio/handbook/Critical_Band.html. The discussion in the last 3 paragraphs of http://www.jneurosci.org/content/28/18/4767.long should also be helpful.
 
  • #12
I thought an antenna picks up waves of all frequencies, and it's up to you to select one with an LCR resonant circuit or bandpass filter?

Is there an acoustic equivalent of the lumped elements for electromagnetism: inductor, capacitor, and resonance?
 
  • #13
Yes, an antenna picks up waves at many frequencies, but it has a natural resonance depending on its length and picks up waves best near its resonant frequency. So you still need electronics to isolate your frequency of interest, but designing the antenna with the frequency in mind boosts performance.

Yes, the hairs in the Cochlea are like antennas in this way. One hair can pick up vibrations at many frequencies, but picks up best near its resonance frequencies. With hairs of different lengths, and thus different resonant frequencies, we literally hear many frequencies at once using different parts of our ear. The brain gets sound in frequency representation, not in time representation. The original question is similar to the question, "Can our eyes see the colors blue and red at the same time?" Yes, because there are different parts of the eye that are tuned to these wavelengths. There are red cone cells, blue cone cells, and green cone cells.
 
  • #14
Thanks everyone for your explanations, they are much appreciated.

I guess the question for me now is:

"Why does any object resonate at its resonant frequency when the wave sum that arrives is not at the object's resonant frequency, but is instead the result of the resonant-frequency wave added to many others, creating the odd-shaped wave shown at the end of row A above?"

Cheers
 
  • #15
Most objects have many resonance frequencies. One object that has a single resonance frequency is the simple pendulum.

Basically, the object itself must have a "natural frequency", at which it oscillates if left on its own after an initial perturbation. If an external force drives the object at a driving frequency that matches the natural frequency, the force will evoke large oscillations. If it drives the object at a frequency far from the natural frequency, it will evoke small oscillations. So when there are two external forces of equal strength with different driving frequencies, the evoked response will contain both frequencies, but its amplitude will be much larger for the frequency that is closer to the natural frequency.

http://farside.ph.utexas.edu/teaching/315/Waves/node12.html
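The steady-state amplitude formula behind this can be sketched as follows (a damped driven oscillator; all parameter values here are purely illustrative, not measured):

```python
import numpy as np

# Steady-state amplitude of x'' + gamma x' + w0^2 x = (F0/m) cos(w t):
#   A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (gamma w)^2)
# Parameters here are illustrative, not measured values.
m, F0 = 1.0, 1.0
w0 = 2 * np.pi * 262.0          # natural frequency near C4
gamma = 20.0                    # damping rate

def amplitude(w):
    return (F0 / m) / np.sqrt((w0**2 - w**2) ** 2 + (gamma * w) ** 2)

on_resonance = amplitude(w0)                  # driven at the natural frequency
off_resonance = amplitude(2 * np.pi * 349.0)  # driven near F4 instead

ratio = on_resonance / off_resonance          # resonant drive wins by far
```

Even for two drives of equal strength, the response at the natural frequency is larger by a big factor, which is the selectivity being described above.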
 
  • #16
Thanks atyy:
What are the "two forces of equal strength"? The resultant of two summed waves has only one frequency, right?
How can "the evoked response contain both frequencies"? An object can only vibrate with a response of one frequency, right?
I'm obviously missing something, sorry...
 
  • #17
chrisbaird said:
Yes, an antenna picks up waves at many frequencies, but it has a natural resonance depending on its length and picks up waves best near its resonant frequency. So you still need electronics to isolate your frequency of interest, but designing the antenna with the frequency in mind boosts performance.

An infinitesimal antenna should pick up EM waves of any frequency. If you have an antenna longer than the wavelength of the wave hitting it, then I think you would have to worry about the wave hitting it obliquely, i.e., the wave number having a component along the direction of the antenna, since different parts of the antenna would then differ in phase, causing destructive interference.

I think where resonance comes in would be where the transmission line connects the antenna to the receiver. So if your antenna has a length of half the wavelength of the chosen frequency, then its impedance is around 73 ohms, which would require the transmission line to have the same impedance. A wave of a different frequency hitting the antenna would see a different impedance while the line stays at 73 ohms, so you get reflection of the power at the receiver (where does this power go, back out the antenna?).

Yes, the hairs in the Cochlea are like antennas in this way. One hair can pick up vibrations at many frequencies, but picks up best near its resonance frequencies. With hairs of different lengths, and thus different resonant frequencies, we literally hear many frequencies at once using different parts of our ear. The brain gets sound in frequency representation, not in time representation. The original question is similar to the question, "Can our eyes see the colors blue and red at the same time?" Yes, because there are different parts of the eye that are tuned to these wavelengths. There are red cone cells, blue cone cells, and green cone cells.

Would a hair be like a quarter-wave antenna, since there is only one follicle sticking out? What would be the grounding plane?
 
  • #18
pkc111 said:
"Why does any object resonate at its resonant frequency when the wave sum that arrives is not at the object's resonant frequency, but is instead the result of the resonant-frequency wave added to many others, creating the odd-shaped wave shown at the end of row A above?"

The object should vibrate at all the frequencies hitting it. But for frequencies near resonance it will vibrate much more, and for frequencies away from resonance it will vibrate very little, so your mind probably ignores the very small response or might not even be able to detect it.

The odd shape in row A is just the addition of waves, not resonance. It looks like it demonstrates the phenomenon of beating, not resonance.
 
  • #19
Let me put it another way to show you my confusion:

If a C and an F are played on a piano, the resultant wave sum which reaches the ear has a frequency different from both C and F (right?)

Why would detectors of C and F (elements with resonant frequencies at these pitches, e.g. hairs) vibrate in preference to other elements whose resonant frequency is closer to that of the wave sum?
 
  • #20
Tuning forks and signal generators can produce sounds of one single frequency only. Such sounds are called pure sounds and are very rare. The majority of sounds are impure and contain a mixture of frequencies. In this mixture there is a note called the fundamental, which has the biggest amplitude (is the loudest) and the lowest frequency. The fundamental note is accompanied by a series of higher-frequency harmonics of decreasing amplitude.
Now to your question. To make it simpler, imagine that the C and F were pure notes, emitting the fundamentals only. What would happen is that your ear/brain system would react to both the C and the F, and so you hear the two notes together. If you now extend this to take into account the fact that the C and F are impure, then your ear/brain system would react to the whole mixture of frequencies and amplitudes that lie within your audio range.
 
  • #21
RedX said:
An infinitismal antenna should pick up EM waves of any frequency. If you have an antenna longer than the wavelength of the wave hitting it, then I think you would have to worry if the wave hits it obliquely, i.e., the wave number has a component in the direction of the antenna, since this would cause destructive interference since different parts of the antenna would differ in phase.

I think where resonance comes in would be where the transmission line connects the antenna to the receiver. So if your antenna has a length of half the wavelength of the chosen frequency, then the impedance is set at around 73 Ohms, which would require the transmission line to have the same impedance. So a wave of a different frequency hitting the antenna would have a different impedance, while the line was set to 73 Ohms, so you get reflection of the power by the receiver (where does this power go, back out the antenna?).



Would a hair be like a quarter-wave antenna, since there is only one follicle sticking out? What would be the grounding plane?

A Hertzian dipole may work at all frequencies, but it is absurdly inefficient because it isn't resonant. What you want is an antenna that is a half-wavelength in size, because then the excited currents satisfy a resonant mode on the antenna. That is, the excited currents naturally satisfy the boundary conditions imposed by the physical structure of the antenna. However, this only strictly applies to wire dipole antennas. Wire dipole antennas of orders greater than a half-wavelength are undesirable because some versions try to force infinite output impedance, and they all suffer from poorer transmitted-power performance because some sections of the antenna will be out of phase with others, causing cancellation (though the obliqueness of the received/transmitted wave is not a factor in this). But antennas that are not wire dipoles can improve greatly if they are multiple wavelengths in size, due to the increased physical aperture that they then present (take for example a parabolic dish antenna). Or we can modify the structure of a wire dipole so that it can be operated efficiently even at lengths longer than a half-wavelength (as in the case of a folded dipole).

This also applies in explaining how the cochlea works. The cochlea is a long spiral shell lined with hairs of increasing length. Resonance occurs because any vibration on a hair has to satisfy the boundary conditions that the hair presents. Take for example the case of two lads holding a rope. One lad holds his end firm while the other moves his end up and down, causing a sine wave to be produced. Now, if he is to move the rope easily, then any sine wave that he creates must satisfy the boundary condition that the end of the rope is a nodal point. This means that he can only excite waves whose half-wavelengths fit an integer number of times into the length of the rope (wavelengths of 2L/n for a rope of length L). Other frequencies will try to move the rope at the end, but since this end is held, the power on the rope is dissipated and the excitation dies out.

This also means that our rope can be excited by multiple frequencies. But in the cochlea, as I recall, the first hairs are the shortest and thus will extract the energy of the higher frequencies. Thus, this diminishes the problem of multiple harmonics being excited by the longer hairs since by the time the higher frequency modes reach the long hairs, they would have been largely sapped of their energy.
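The allowed rope modes above follow λ_n = 2L/n; a tiny sketch with made-up numbers:

```python
# Standing waves on a rope of length L fixed at both ends can only have
# wavelengths lambda_n = 2L/n, giving resonant frequencies f_n = n v / (2L).
# L and v here are arbitrary illustrative values.
L = 2.0     # rope length in metres
v = 40.0    # wave speed in metres per second

wavelengths = [2 * L / n for n in (1, 2, 3)]        # 4.0, 2.0, 1.33... m
frequencies = [n * v / (2 * L) for n in (1, 2, 3)]  # a harmonic series
```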
 
  • #22
Resonant oscillations can also be called normal modes. These are the simplest natural oscillations the object can experience. But the object can also undergo more complicated motion, which is a sum of these normal modes.
 
  • #23
Born2bwire said:
This also means that our rope can be excited by multiple frequencies... This means that he can only excite waves whose half-wavelengths fit an integer number of times into the length of the rope.

OK, so does that mean that when C and F are played together on a keyboard, the wave sum has a wavelength that is some simple multiple of both C and F, so that detectors (resonating elements tuned to C and F) preferentially vibrate when the wave sum reaches them?
 
  • #24
Resonance will only occur when the appropriate hair gets 'joggled' at the appropriate frequency - irrespective of all the other unrelated vibrations that are going on in the air (i.e. all the other sounds / notes). Variations in pressure that are not at its resonant frequency will produce no net effect on it because the nudges the hair gets will not build up any vibrational energy.
Something that may not have been pointed out is that all the hairs will vibrate a small amount, in step with the overall pattern of the sound. It's just that when there is a sound at the natural frequency of a hair, that hair will vibrate more. This is exactly the same as for the circuits in a radio receiver. There will be picovolts of general alternating voltage (along with the totality of signals coming down the feeder) but, when an appropriate signal arrives, the result will be voltages which are greater than the noise level and which can be detected/demodulated.
 
  • #25
pkc111 said:
OK, so does that mean that when C and F are played together on a keyboard, the wave sum has a wavelength that is some simple multiple of both C and F, so that detectors (resonating elements tuned to C and F) preferentially vibrate when the wave sum reaches them?

The sum is simply the sum, so both frequencies are still present - in the Fourier (sinusoidal) sense.
 
  • #26
atyy said:
The sum is simply the sum, so both frequencies are still present - in the Fourier (sinusoidal) sense.

This is what I don't understand: the sum is different from the components, just as both 2 and 3 are different from 5.

How does something as simple as a hair do a Fourier decomposition on a wave sum, to respond to one of the components that went into the summation?
 
  • #27
You don't add the frequencies but the sines/cosines. Any "sufficiently nice" function (see your favorite book on calculus for what "sufficiently nice" means) can be expressed as its Fourier transform and this is a one-to-one correspondence:

f(t)=\int_{\mathbb{R}} \frac{\mathrm{d} \omega}{2 \pi} \tilde{f}(\omega) \exp(-\mathrm{i} \omega t),
\tilde{f}(\omega)=\int_{\mathbb{R}} \mathrm{d} t \, f(t) \exp(+\mathrm{i} \omega t).

So, by Fourier transformation you get the decomposition of any signal in terms of its spectrum, and vice versa.

If you simply add sines and cosines, the spectrum becomes a sum of Dirac-\delta distributions.
 
  • #28
Born2bwire said:
A Hertzian dipole may work at all frequencies, but it is absurdly inefficient because it isn't resonant. What you want is an antenna that is a half-wavelength in size, because then the excited currents satisfy a resonant mode on the antenna. That is, the excited currents naturally satisfy the boundary conditions imposed by the physical structure of the antenna. However, this only strictly applies to wire dipole antennas. Wire dipole antennas of orders greater than a half-wavelength are undesirable because some versions try to force infinite output impedance, and they all suffer from poorer transmitted-power performance because some sections of the antenna will be out of phase with others, causing cancellation (though the obliqueness of the received/transmitted wave is not a factor in this).

If you have a parallel two-wire transmission line of length \frac{\lambda}{2} that is open at the end, then you get standing current waves in each line, the beginning and ends being nodes. This would be ideal since the currents on wire 1 are all in the same direction, and the currents on wire 2 are all in the same direction. Unfortunately, the current in wire 1 is in the opposite direction of the current in wire 2, so the two wires cancel each other out as far as radiation is concerned, even though each wire does not cancel itself out. So if you bend wire 2 an angle of 180 degrees, you should have an ideal antenna. But this antenna would have total length \lambda, and not \frac{\lambda}{2}. So wouldn't an antenna a wavelength in size be better than an antenna half a wavelength in size?

Also, if sound is received through an antenna, then would it need counterpoise, or is it end-fed? I've seen a diagram of an LC-resonant circuit with a single line half a wavelength long sticking out the side representing the antenna. My guess is that the LC circuit performs some filtering function, even though the antenna is already resonant just by its dimensions (at a length \frac{\lambda}{2} the antenna should act as a series LCR circuit).
 
  • #29
Too much emphasis is being placed on the "sum" of the two waves. What actually arrives at each ear is two separate sets of notes (two separate sums, if you prefer, each sum being a mixture of a range of frequencies). It's true that these two sets overlap and interfere to produce a resultant "sum", but this is an incredibly complex resultant, which, among other things, differs at different locations. A key point is that each ear receives and processes two separate notes, each containing a range of frequencies and amplitudes. The electrical signals so produced are then carried to the brain to be processed. This brings in a second key point: in order to consider hearing we have to consider not the ear on its own but the ear-brain system. We may know a lot about the mechanics of the ear, but we know a lot less about how the brain processes and interprets the signals it receives.
 
  • #30
There is only ONE value of air pressure on the ear drum at any given time. That (single) signal may have been created by a number of sources or just one, and the ear/brain just analyses it as best it can. Fourier analysis is just maths and is one way of looking at things, but the analysis of the ear is based on such a 'frequency domain' analysis.
@vanhees71
A "simple hair" doesn't do an analysis. It just responds to a single frequency. Together, all the hairs give an analysis of the frequencies that are striking your ear. There is more to it than that, because the time of arrival of pulses is also taken into consideration by the brain, to give an idea of distance and spacing.
 
  • #31
Just so I can understand how sound waves add, and how things resonate:

If a wineglass can be made to shatter when a loud enough high C is played (because the resonance frequency of the wineglass is a high C), and a high C (just loud enough) and an F (same volume) were played at the same time, would the wineglass shatter?

Thanks
 
  • #32
For sounds to exist and be detected, the pressure must vary with time, and it is variations such as this that the ear responds to. We can analyse this as a resultant effect, but when we do so, and with reference to the OP's question, we find that the resultant is due to the two separate sets of waves.
 
  • #33
pkc111 said:
This is what I don't understand: the sum is different from the components, just as both 2 and 3 are different from 5.

How does something as simple as a hair do a Fourier decomposition on a wave sum, to respond to one of the components that went into the summation?

The basic idea is the same as how a pendulum responds best to an external force of one frequency. Take a look at Fig 9 in http://farside.ph.utexas.edu/teaching/315/Waves/node12.html.

In the ear, the frequency tuning is sharpened by living processes. The membrane on which the hairs lie is itself tuned even in the dead animal. However, the tuning is sharper in the living animal. We still don't understand all the processes that sharpen the frequency tuning.
http://147.162.36.50/cochlea/cochleapages/overview/history.htm
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2630119/?tool=pubmed
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2724262/?tool=pubmed
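The pendulum analogy can be made concrete with a rough time-stepping sketch (semi-implicit Euler; all parameters are arbitrary, not physiological). A single damped oscillator is driven by the sum of two equal-amplitude tones: only the summed wave ever reaches it, yet its response is dominated by the component near its own natural frequency.

```python
import math
import numpy as np

# Damped oscillator x'' + g x' + w0^2 x = F(t), driven by a SUM of two tones.
w0 = 2 * math.pi * 50.0         # natural frequency: 50 Hz
g = 5.0                         # light damping
w1, w2 = 2 * math.pi * 50.0, 2 * math.pi * 80.0   # drive: 50 Hz + 80 Hz

dt, n = 1e-4, 200_000           # 20 seconds of simulation
x, v = 0.0, 0.0
xs = np.empty(n)
for i in range(n):              # semi-implicit Euler integration
    t = i * dt
    force = math.cos(w1 * t) + math.cos(w2 * t)
    acc = force - g * v - w0**2 * x
    v += acc * dt
    x += v * dt
    xs[i] = x

# Spectral power of the steady-state response at the two drive frequencies.
tail = xs[n // 2:]              # discard the initial transient
spec = np.abs(np.fft.rfft(tail))
freqs = np.fft.rfftfreq(tail.size, d=dt)
p_res = spec[np.argmin(np.abs(freqs - 50.0))]   # at resonance
p_off = spec[np.argmin(np.abs(freqs - 80.0))]   # off resonance
```

Both frequencies are present in the response, as the posts above say, but the resonant one is far larger, which is why a tuned element can act as a detector for "its" component of the summed wave.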
 
  • #34
Dadface said:
For sounds to exist and be detected, the pressure must vary with time, and it is variations such as this that the ear responds to. We can analyse this as a resultant effect, but when we do so, and with reference to the OP's question, we find that the resultant is due to the two separate sets of waves.

"Due to separate waves"? That's only a way of looking at it and doesn't tell the whole story unless you include phase information too. With just two sine waves you can synthesise a whole range of waveform shapes.
 
  • #35
The notes from a piano keyboard (and from the majority of other sound sources) are not pure, so the resultant is due to two separate sets of waves. If we choose to find the resultant, we do indeed need to include information such as phase differences and amplitudes. This becomes a very thorny problem, because what actually arrives at the ear is, among other things, shaped by the geometry and structure of the room and the location of the listener.
What should be given more emphasis here is acknowledgement of the part that the brain plays in interpreting the signals it receives. To a large extent we learn to hear, and I would bet that the OP's friend has such a sensitive "musical ear" that he/she could distinguish the two notes whether they were played in an anechoic chamber or in a place such as the whispering gallery of St Paul's Cathedral, which has a very long reverberation time.
 
  • #36
Hey guys, I don't want to hijack this thread, but I remember reading something on another forum about a very similar topic, where a member said something that runs counter to
atyy said:
Anyway, the ear performs a Fourier decomposition of the incoming sound into its component frequencies.
I myself do not understand most of the subject (so I'm relaying insights from other people, since I'm not competent here). The thread is http://forums.futura-sciences.com/physique/480015-serie-de-fourier-perception-auditive.html.
The member is LPFR and he said
Pour finir, l'oreille humaine n'est pas sensible à la phase des signaux mais uniquement à la puissance. Je me souviens qu'il a fallu faire la manip avec deux synthétiseurs (plus oscillo plus haut-parleur) pour convaincre un collègue que l'oreille humaine ne calculait pas ni la transformée ni la série de Fourier.
which would be something like
Finally, the human ear is not sensitive to the phase of signals, only to the power. I remember that we had to do the experiment with two synthesizers (plus an oscilloscope and a speaker) to convince a colleague that the human ear computes neither the Fourier transform nor the Fourier series.
There are some links in the mentioned thread (some of them in English). I'm not sure LPFR is right, though; but given the quality of all his posts and his experience, if he's wrong on this then there must be a "nasty" reason.
 
  • #37
fluidistic said:
Hey guys I don't want to hijack this thread... I'm not sure LPFR is right though, but from the quality of all his posts and his experience, if he's wrong on this then there must be a "nasty" reason.

The ear is not sensitive to phase. So you could add \cos(\omega_1 t)+\cos(\omega_2 t+\phi_A) and it would sound the same as \cos(\omega_1 t)+\cos(\omega_2 t+\phi_B), even though the two waveforms can look really different on an oscilloscope.

So the Fourier transform a(\omega) of the wave x(t) would be determined only up to a phase, a(\omega)e^{i\phi(\omega)}, which the ear cannot determine.

So in effect the ear would only measure |a(\omega)|^2, which is the power spectrum by Parseval's theorem. So in that sense you can say the ear only measures power.
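That claim is easy to demonstrate numerically. In this sketch (illustrative tone frequencies chosen to sit on exact FFT bins), two waveforms differing only in a relative phase have visibly different shapes but identical power spectra:

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs                        # one second of samples
w1, w2 = 2 * np.pi * 50, 2 * np.pi * 80       # two tones on exact FFT bins

x_a = np.cos(w1 * t) + np.cos(w2 * t)         # relative phase 0
x_b = np.cos(w1 * t) + np.cos(w2 * t + 1.3)   # arbitrary relative phase

power_a = np.abs(np.fft.rfft(x_a)) ** 2
power_b = np.abs(np.fft.rfft(x_b)) ** 2

same_power = bool(np.allclose(power_a, power_b))  # what a phase-deaf ear measures
same_shape = bool(np.allclose(x_a, x_b))          # what an oscilloscope sees
```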
 
  • #38
I remember that one of the strategies they use when trying to get maximum perceived loudness on, for instance, radio broadcasts, is to mangle the phase in order to restrict the peaks of audio waveforms - thus allowing them to jack up the general level without exceeding 100% modulation. The opinion is that you can't hear the difference - but, by that time, the programme is so distorted and compressed that 'quality' can be ignored.

But there is the time-of-arrival information too. Most sounds we hear are not, in fact, continuous, so they do not analyse into continuous sinusoids. There is a lot of extra information that the brain seems to get out of the received binaural sound. Stereo works largely on the basis of relative amplitudes from left and right speakers, but people can get far more information than that about the direction of real-life sound sources. There's up to about a millisecond's difference in times of arrival at the right and left ears, so I guess we are able to resolve that.
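A back-of-envelope check of that interaural delay (the head width is a rough assumed value):

```python
# Interaural time difference for a sound arriving from one side:
# extra path length ~ one head width, travelled at the speed of sound.
head_width_m = 0.2        # rough assumption for the extra path, in metres
speed_of_sound = 343.0    # m/s in air at room temperature

itd_ms = head_width_m / speed_of_sound * 1e3   # on the order of 0.6 ms
```

So the binaural delay is indeed on the order of a millisecond or less.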
 
  • #39
fluidistic said:
Hey guys I don't want to hijack this thread... I'm not sure LPFR is right though, but from the quality of all his posts and his experience, if he's wrong on this then there must be a "nasty" reason.

He is talking about the perceptual ear, since he used keyboards to demonstrate. The OP did ask about the perceptual ear, and the brain was mentioned early in this thread by several posters. The topic of phase is very complicated, but there certainly is a sense in which the perceptual ear is phase-deaf. A simple example showing that this has to be qualified: white noise and a click both have the same Fourier frequencies, differing only in phase, and are clearly perceived as different.

However, we seem to have drifted to talking about the physical ear for the moment. That does perform a Fourier transform. A little more accurately, it is well-modelled as a bank of gammatone filters.
http://www.pdn.cam.ac.uk/groups/cnbh/aimmanual/BMM/gtfb.htm
http://www.dicklyon.com/tech/Hearing/APGF_Lyon_1996.pdf
http://www.tu-harburg.de/ti6/pub/diss/solbach/index.html
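The filter-bank idea can be caricatured with a bank of simple two-pole resonators (a toy model, not the real gammatone shape; all parameters are made up): each resonator accumulates large output energy only when its own frequency is present in the single summed input.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
# One summed input wave containing ~C4 and ~F4 (rounded frequencies).
x = np.sin(2 * np.pi * 262 * t) + np.sin(2 * np.pi * 349 * t)

def resonator_energy(signal, f0, fs, r=0.998):
    """Total output energy of a two-pole resonator centred on f0.

    y[n] = x[n] + 2 r cos(theta) y[n-1] - r^2 y[n-2], a crude stand-in
    for one tuned element (one 'hair') of the cochlea.
    """
    theta = 2 * np.pi * f0 / fs
    b1, b2 = 2 * r * np.cos(theta), -r * r
    y1 = y2 = 0.0
    energy = 0.0
    for sample in signal:
        y = sample + b1 * y1 + b2 * y2
        y2, y1 = y1, y
        energy += y * y
    return energy

# Resonators tuned to the two component notes ring strongly;
# one tuned between them (300 Hz) barely responds.
energies = {f: resonator_energy(x, f, fs) for f in (262, 300, 349)}
```

The 262 Hz and 349 Hz channels light up while the 300 Hz channel stays quiet, which is the essence of how a bank of tuned elements reports the components of one summed wave.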
 
  • #40
fluidistic said:
Hey guys I don't want to hijack this thread... I'm not sure LPFR is right though, but from the quality of all his posts and his experience, if he's wrong on this then there must be a "nasty" reason.

Yeah, it probably doesn't do exactly a Fourier transform, in that the phase information is not preserved. However, it at least reads the power spectrum of the signal, if you will. In addition, the ear still does process some form of phase information in how it judges the positioning of sounds. It does this from the relative phase shift between the ears that results from the time delay due to the different path lengths between the source and each ear.

One of the developments of interest in hearing aids and cochlear implants is the preservation of binaural information. A problem with using old fashioned hearing aids is that the listener suffers from the "cocktail party" effect. If they are listening to a speaker in an environment populated by other sounds, they have trouble isolating the desired speaker (in effect they lose the person among the others in the cocktail party). One of the reasons for this was that the hearing aids removed the binaural information that allowed the brain to lock on to a source and filter other sounds out. One way they are working to get around this is by having hearing aids that communicate information between each other. Instead of only working as individual left and right hearing aids, they get the sounds from both left and right and perform some processing to help preserve binaural information.

So the ear does not register phase monaurally, but it does measure phase binaurally.
 