How 2 Notes Combine on Piano Keyboard

  • Thread starter: pkc111
  • Tags: Keyboard, Notes
Summary:
A discussion explores how the human ear can perceive two notes played simultaneously on a piano, despite the notes combining into a single wave. The ear performs a Fourier decomposition, allowing it to identify individual frequencies within the complex waveform. Psychoacoustic factors, such as volume differences and frequency proximity, affect our ability to distinguish sounds. The cochlea's hair cells resonate at different frequencies, enabling the detection of multiple sound components simultaneously. Overall, the brain processes these auditory signals to discern pitch and other characteristics, demonstrating the complexity of sound perception.
  • #31
Just so I can understand how sound waves add, and how things resonate:

Suppose a wineglass can be made to shatter when a loud enough high C is played (because the resonance frequency of the wineglass is a high C). If a high C (just loud enough) and an F (at the same volume) were played at the same time, would the wineglass shatter?

Thanks
 
  • #32
For sounds to exist and be detected, the pressure must vary with time, and it is variations such as this that the ear responds to. We can analyse this as a resultant effect, but when we do so, with reference to the OP's question, we find that the resultant is due to the two separate sets of waves.
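That superposition can be checked numerically: adding a second tone leaves the Fourier component of the first tone completely unchanged. A minimal sketch (the frequencies and window length below are arbitrary illustrative choices, not values from the thread):

```python
import numpy as np

# Superposition check: adding a second tone does not alter the
# Fourier component of the first.
N = 1024
t = np.arange(N) / N                            # one "window" of time
c_alone = np.cos(2 * np.pi * 100 * t)           # first tone on its own
both = c_alone + np.cos(2 * np.pi * 120 * t)    # add a second tone

spec_alone = np.abs(np.fft.rfft(c_alone))
spec_both = np.abs(np.fft.rfft(both))

# The bin at 100 cycles is untouched by the extra tone.
print(np.isclose(spec_alone[100], spec_both[100]))  # True
# The second tone simply shows up as a new peak at bin 120.
print(spec_both[120] > 100)                         # True
```

This is the sense in which a resonator tuned to the high C "sees" the same high C whether or not the F is also playing.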
 
  • #33
pkc111 said:
This is what I don't understand: the sum is different from its components, just as 5 is different from both the 2 and the 3 that add up to it.

How does something as simple as a hair perform a Fourier decomposition on a summed wave, responding to just one of the components that went into the sum?

The basic idea is the same as how a pendulum responds best to an external force of one frequency. Take a look at Fig 9 in http://farside.ph.utexas.edu/teaching/315/Waves/node12.html.

In the ear, the frequency tuning is sharpened by living processes. The membrane on which the hairs lie is itself tuned even in the dead animal. However, the tuning is sharper in the living animal. We still don't understand all the processes that sharpen the frequency tuning.
http://147.162.36.50/cochlea/cochleapages/overview/history.htm
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2630119/?tool=pubmed
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2724262/?tool=pubmed
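The driven-oscillator response in that figure can be sketched numerically. This is a generic damped, driven oscillator with made-up parameters, not a model of an actual hair cell:

```python
import numpy as np

# Steady-state amplitude of a damped, driven oscillator:
#   A(w) = (F0/m) / sqrt((w0**2 - w**2)**2 + (gamma*w)**2)
# The parameter values are arbitrary, chosen only to show the
# resonance peak near the natural frequency w0.
w0 = 2 * np.pi * 440.0        # natural frequency (rad/s)
gamma = 50.0                  # damping coefficient
F0_over_m = 1.0               # drive strength per unit mass

def amplitude(w):
    return F0_over_m / np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)

w = np.linspace(0.5 * w0, 1.5 * w0, 2001)
w_peak = w[np.argmax(amplitude(w))]

# The response is largest when the drive is near w0 -- this is why a
# resonator tuned to one frequency picks out that component of a sum.
print(abs(w_peak - w0) / w0 < 0.01)  # True
```

With light damping the peak sits essentially at w0, which is the tuning mechanism the pendulum analogy describes.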
 
  • #34
Dadface said:
For sounds to exist and be detected, the pressure must vary with time, and it is variations such as this that the ear responds to. We can analyse this as a resultant effect, but when we do so, with reference to the OP's question, we find that the resultant is due to the two separate sets of waves.

"Due to separate waves"? That's only a way of looking at it and doesn't tell the whole story unless you include phase information too. With just two sine waves you can synthesise a whole range of waveform shapes.
 
  • #35
The notes from a piano keyboard (and from the majority of other sound sources) are not pure, so the resultant is due to two separate sets of waves. If we choose to find the resultant, we do indeed need to include information such as phase differences and amplitudes. This becomes a very thorny problem because what actually arrives at the ear depends, amongst other things, on the geometry and structure of the room and on the location of the listener.
What should be given more emphasis here is an acknowledgement of the part the brain plays in interpreting the signals it receives. To a large extent we learn to hear, and I would bet that the OP's friend has such a sensitive "musical ear" that he or she could distinguish the two notes whether they were played in an anechoic chamber or in a place such as the whispering gallery of St Paul's Cathedral, which has a very long reverberation time.
 
  • #36
Hey guys, I don't want to hijack this thread, but I remember reading something on another forum, on a very similar topic, where a member said something that runs counter to
atyy said:
Anyway, the ear performs a Fourier decomposition of the incoming sound into its component frequencies.
I myself do not understand most of the subject (so I'm relaying other people's insights rather than my own). The thread is http://forums.futura-sciences.com/physique/480015-serie-de-fourier-perception-auditive.html.
The member is LPFR and he said
Pour finir, l'oreille humaine n'est pas sensible à la phase des signaux mais uniquement à la puissance. Je me souviens qu'il a fallu faire la manip avec deux synthétiseurs (plus oscillo plus haut-parleur) pour convaincre un collègue que l'oreille humaine ne calculait pas ni la transformée ni la série de Fourier.
which would be something like
To finish, the human ear is not sensitive to the phase of signals, only to their power. I remember that we had to do the experiment with two synthesizers (plus an oscilloscope and a speaker) to convince a colleague that the human ear calculates neither the Fourier transform nor the Fourier series.
There are some links in that thread (some of them in English). I'm not sure LPFR is right, though, but given the quality of all his posts and his experience, if he's wrong on this there must be a "nasty" reason.
 
  • #37
fluidistic said:
Hey guys, I don't want to hijack this thread... I'm not sure LPFR is right, though, but given the quality of all his posts and his experience, if he's wrong on this there must be a "nasty" reason.

The ear is not sensitive to phase. So cos(ω₁t) + cos(ω₂t + φ_A) would sound the same as cos(ω₁t) + cos(ω₂t + φ_B), even though the two waveforms can look very different on an oscilloscope.

So the Fourier transform a(ω) of the wave x(t) would only be determined up to a phase factor, a(ω)e^{iφ(ω)}, which the ear cannot resolve.

So in effect the ear would only measure |a(ω)|², which is the power spectrum by Parseval's theorem. In that sense you can say the ear only measures power.
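A quick numerical check of this phase-deafness claim (the tone frequencies, window length, and phase offset below are arbitrary choices):

```python
import numpy as np

# Two two-tone signals differing only in the relative phase of the
# second tone. Integer cycles-per-window keep the FFT bins exact.
N = 2048
t = np.arange(N) / N
f1, f2 = 50, 64   # cycles per window

x_a = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)
x_b = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t + np.pi/3)

# The waveforms differ in shape...
print(not np.allclose(x_a, x_b))   # True

# ...but the magnitude spectra (hence the power spectra) are
# identical, which is the sense in which the ear is "phase deaf".
mag_a = np.abs(np.fft.rfft(x_a))
mag_b = np.abs(np.fft.rfft(x_b))
print(np.allclose(mag_a, mag_b))   # True
```

On an oscilloscope x_a and x_b trace out visibly different shapes, yet their power spectra agree bin for bin.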
 
  • #38
I remember that one of the strategies they use when trying to get maximum perceived loudness on, for instance, radio broadcasts, is to mangle the phase in order to restrict the peaks of audio waveforms - thus allowing them to jack up the general level without exceeding 100% modulation. The opinion is that you can't hear the difference - but, by that time, the programme is so distorted and compressed that 'quality' can be ignored.

But there is the time of arrival information too. Most sounds we hear are not, in fact, continuous so they do not analyse into continuous sinusoids. There is a lot of extra information that the brain seems to get out of the received binaural sound. Stereo works largely on the basis of relative amplitudes from left and right speakers but people can get far more information than that about direction of real life sound sources. There's about 1ms time difference in times of arrival at right and left ears so I guess we are able to resolve that.
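That interaural delay can be sanity-checked with the simplest path-length model, Δt = (d/c)·sin θ. The head width and speed of sound below are assumed round numbers; the model gives a maximum of roughly 0.6 ms, the same order of magnitude as the figure quoted above:

```python
import math

# Rough interaural time difference (ITD) for a source at angle
# theta from straight ahead. Simplest model: the extra path to the
# far ear is d*sin(theta). Head width d = 0.21 m and speed of sound
# c = 343 m/s are assumed round numbers.
def itd_seconds(theta_rad, d=0.21, c=343.0):
    return d * math.sin(theta_rad) / c

# A source directly to one side gives the largest delay:
max_itd = itd_seconds(math.pi / 2)
print(round(max_itd * 1000, 2), "ms")  # roughly 0.6 ms
```

Delays of this size are what the binaural system resolves when localising a source.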
 
  • #39
fluidistic said:
Hey guys, I don't want to hijack this thread... I'm not sure LPFR is right, though, but given the quality of all his posts and his experience, if he's wrong on this there must be a "nasty" reason.

He is talking about the perceptual ear, since he used keyboards to demonstrate. The OP did ask about the perceptual ear, and the brain was mentioned early in this thread by several posters. The topic of phase is very complicated, but there is certainly a sense in which the perceptual ear is phase deaf. A simple example showing that this has to be qualified: white noise and a click can have the same Fourier magnitude spectrum, differing only in phase, yet they are clearly perceived as different.

However, we seem to have drifted to talking about the physical ear for the moment. That does perform a Fourier transform. A little more accurately, it is well-modelled as a bank of gammatone filters.
http://www.pdn.cam.ac.uk/groups/cnbh/aimmanual/BMM/gtfb.htm
http://www.dicklyon.com/tech/Hearing/APGF_Lyon_1996.pdf
http://www.tu-harburg.de/ti6/pub/diss/solbach/index.html
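The click-versus-noise point above is easy to demonstrate numerically: keep a click's (flat) magnitude spectrum, randomise the phases, and a noise-like waveform comes out instead. A minimal sketch (the window length and RNG seed are arbitrary):

```python
import numpy as np

# A unit click has a flat magnitude spectrum. Randomising the phases
# while keeping the magnitudes produces a spread-out, noise-like
# waveform with an identical magnitude spectrum.
N = 1024
click = np.zeros(N)
click[0] = 1.0

spec = np.fft.rfft(click)            # flat: all ones
rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, spec.shape)
phases[0] = 0.0                      # DC component must stay real
phases[-1] = 0.0                     # Nyquist component must stay real
noise = np.fft.irfft(np.abs(spec) * np.exp(1j * phases))

# Same magnitude spectrum...
print(np.allclose(np.abs(np.fft.rfft(noise)), np.abs(spec)))  # True
# ...but very different waveforms.
print(not np.allclose(click, noise))                          # True
```

A purely magnitude-based (phase-deaf) analysis cannot tell these two signals apart, yet we hear one as a click and the other as a hiss.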
 
  • #40
fluidistic said:
Hey guys, I don't want to hijack this thread... I'm not sure LPFR is right, though, but given the quality of all his posts and his experience, if he's wrong on this there must be a "nasty" reason.

Yeah, it probably doesn't do exactly a Fourier transform, in that the phase information is not preserved. However, it at least reads the power spectrum of the signal, if you will. In addition, the ear still processes some form of phase information when it judges the positioning of sounds. It does this from the relative phase shift between the ears, which results from the time delay due to the different path lengths between the source and each ear.

One of the developments of interest in hearing aids and cochlear implants is the preservation of binaural information. A problem with using old fashioned hearing aids is that the listener suffers from the "cocktail party" effect. If they are listening to a speaker in an environment populated by other sounds, they have trouble isolating the desired speaker (in effect they lose the person among the others in the cocktail party). One of the reasons for this was that the hearing aids removed the binaural information that allowed the brain to lock on to a source and filter other sounds out. One way they are working to get around this is by having hearing aids that communicate information between each other. Instead of only working as individual left and right hearing aids, they get the sounds from both left and right and perform some processing to help preserve binaural information.

So the ear does not measure phase monaurally, but it does measure phase binaurally.
 
