How does the ear distinguish multiple different pitches?

In summary, the cochlea in the ear is organized by tone (tonotopically) and can distinguish multiple pitches played simultaneously. Hair cells of different sizes resonate at different frequencies, which lets the brain perceive the individual components of a complex sound; the ear essentially performs an analog Fourier transform in real time. Audio equipment, by contrast, applies only linear transformations to the signal so that it can be reproduced with high fidelity. The ear does not work the same way and can process sounds in a non-linear fashion. A speaker diaphragm does not need to analyze the signal; it just has to vibrate according to it. The cochlea can be compared to a piano with the dampers removed, where different strings resonate in response to the different frequencies present in the incoming sound.
  • #1
Sophrosyne
128
21
A chord played on a piano is multiple different pitches, each at a different frequency, played simultaneously. The ear hears this as separate sounds. The cochlea of the ear is organized by tone (tonotopic organization), from highest to lowest frequency. So a triad chord will stimulate three corresponding areas of hair cells in the cochlea separately.

My question is: how do the three tones get perceived that way? Why not as a single complex wave? It seems that the combined air pressure wavefront generated from three different notes actually should combine to form a very complex wave pattern, through a sort of Fourier summation. Does the ear do some kind of "reverse" Fourier analysis to break up this complex wave pattern into its individual component waves?

This is even more puzzling when you listen to a whole symphony orchestra playing together at the same time. You can still distinguish the strings from the trumpets from the timpani from the flutes from the triangle, etc. The ear can clearly still distinguish all the different pitches and timbres, with all their associated overtones, ranging from the tuba to the piccolo. But I can't even imagine how complex the combined wave of all these instruments would appear if you put them all on the same graph. Somehow the ear still seems able to sort them out and hear them as separate sounds.

And a related question regarding the stereo diaphragm: how does a single diaphragm vibrate to recreate all those different sounds, not to mention all the overtones? It must be vibrating as the sum of all those waves, in the same way the eardrum must be vibrating.

How?
 
  • #2
The hairs in your ear are sized differently and resonate at different natural frequencies. A given hair vibrating tells the brain there is sound content at the corresponding frequency.

The ear essentially performs an analog Fourier transform in real time.
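
As a minimal numerical sketch of that idea (my own illustration rather than anything from this thread, assuming Python with NumPy; the C-major-triad frequencies are just an example), you can sum three pure tones into one waveform and let a discrete Fourier transform pull the three components back out:

```python
import numpy as np

fs = 8000                      # sample rate in Hz (assumed value)
t = np.arange(0, 1.0, 1 / fs)  # one second of samples

# Three simultaneous pure tones: roughly C4, E4 and G4.
chord = (np.sin(2 * np.pi * 261.63 * t) +
         np.sin(2 * np.pi * 329.63 * t) +
         np.sin(2 * np.pi * 392.00 * t))

# The summed waveform looks complicated, but its spectrum has three clear peaks.
spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(len(chord), 1 / fs)
strongest = np.sort(freqs[np.argsort(spectrum)[-3:]])
print(strongest)               # peaks near 262, 330 and 392 Hz
```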
 
  • #4
boneh3ad said:
The hairs in your ear are sized differently and resonate at different natural frequencies. A given hair vibrating tells the brain there is sound content at the corresponding frequency.

The ear essentially performs an analog Fourier transform in real time.

I see. That makes sense. Thank you.

But then what about the diaphragm on the stereo that vibrates? I can't even imagine the level of complexity a single diaphragm like that has to handle to transmit all the sounds and pitches in a piece of music.
 
  • #5
Sophrosyne said:
I see. That makes sense. Thank you.

But then what about the diaphragm on the stereo that vibrates? I can't even imagine the level of complexity a single diaphragm like that has to handle to transmit all the sounds and pitches in a piece of music.
But the speaker diaphragm doesn't have to do any analysis. It just has to vibrate according to the input signal. One underlying assumption of the audio equipment is that you record a waveform and then reproduce it with a fidelity that is as high as you can make it (for the price).

To make that happen you try to make everything in the system - the microphones, amplifiers, recording and playback equipment, recording media, and speakers - behave linearly. That is to say, if you record a signal ##G(t) + H(t)## containing sounds from two different sources, then at each step of the process, the transformations you make to the signal have the property ##T(G(t) + H(t)) = aG(t) + aH(t)##, where ##a## is a change in amplitude.
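
A quick numerical check of that linearity property, as a toy sketch (assuming Python with NumPy, with an idealised gain stage standing in for the equipment):

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.1, 1 / fs)
G = np.sin(2 * np.pi * 440 * t)         # source 1
H = 0.5 * np.sin(2 * np.pi * 660 * t)   # source 2

a = 2.0
T = lambda x: a * x                     # an idealised, perfectly linear stage

# Processing the mixed signal gives the same result as mixing the processed signals.
print(np.allclose(T(G + H), a * G + a * H))   # True
```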

Speakers are just one of many parts of the system that need to adhere to that constraint, and they do so only imperfectly and over a limited range of frequencies. That is why you use a woofer, a tweeter and a midrange speaker. One diaphragm generally cannot transform the signal into pressure waves in an approximately linear fashion over the whole range of audible frequencies.

Interestingly, though, the ear does not seem to work that way. In certain cases, particularly with multiple voices singing, you can hear overtones (or undertones) that sound like an additional voice. That indicates that your ears and brain process sounds in a non-linear fashion.
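
As a rough sketch of why non-linearity matters (again a toy example in Python with NumPy, not a model of the ear): passing two pure tones through a mild cubic distortion creates energy at combination frequencies, e.g. 2 × 1000 − 1200 = 800 Hz, that were never played.

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1200 * t)

linear = 2.0 * x              # a linear gain adds no new frequencies
distorted = x + 0.3 * x ** 3  # a mild cubic non-linearity does

def level(signal, f):
    """Spectral magnitude at the bin nearest frequency f."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

# The 800 Hz combination tone shows up only after the non-linear step.
print(level(linear, 800), level(distorted, 800))   # ~0 versus several hundred
```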
 
  • #6
Sophrosyne said:
I can't even imagine the level of complexity ...
You can look at the waveforms in audio software.
 
  • #7
boneh3ad said:
The hairs in your ear are sized differently and resonate at different natural frequencies. A given hair vibrating tells the brain there is sound content at the corresponding frequency.

The ear essentially performs an analog Fourier transform in real time.
A very close analogy to the way the cochlea works would be a piano with the dampers removed (actually, with them applied very lightly). If you shout at the piano, some of the strings will resonate but some won't. If you play a chord on a nearby piano, the same strings that were struck will resonate on the 'receiving' piano. You could imagine having a sensor on each of the 88 strings of the receiving piano, and you could then display the chord that was played on the other piano. There are many more than 88 hair cells in the cochlea, and your brain is also aware of the variation in time of the 'amplitudes' of the signals received on those hairs.
"The ear essentialy performs an analog Fourier transform in real time" could imply that there is something more fundamental about the time variation of a sound (time domain representation) than the frequency content (frequency domain). In fact they are of equal 'status'. Indeed, before the invention of the oscilloscope, the frequency domain information was very important for representing music - sheet music and the pegs on a musical box are a mixture of frequency and time domain information and a chord played on the guitar of piano is just frequency domain.
 
  • #8
tnich said:
But the speaker diaphragm doesn't have to do any analysis. It just has to vibrate according to the input signal. One underlying assumption of the audio equipment is that you record a waveform and then reproduce it with a fidelity that is as high as you can make it (for the price).

To make that happen you try to make everything in the system - the microphones, amplifiers, recording and playback equipment, recording media, and speakers - behave linearly. That is to say, if you record a signal ##G(t) + H(t)## containing sounds from two different sources, then at each step of the process, the transformations you make to the signal have the property ##T(G(t) + H(t)) = aG(t) + aH(t)##, where ##a## is a change in amplitude.

Speakers are just one of many parts of the system that need to adhere to that constraint, and they do so only imperfectly and over a limited range of frequencies. That is why you use a woofer, a tweeter and a midrange speaker. One diaphragm generally cannot transform the signal into pressure waves in an approximately linear fashion over the whole range of audible frequencies.

Interestingly, though, the ear does not seem to work that way. In certain cases, particularly with multiple voices singing, you can hear overtones (or undertones) that sound like an additional voice. That indicates that your ears and brain process sounds in a non-linear fashion.

This makes sense, thank you.

What is interesting is that the eardrum itself is a membrane which should have a fundamental vibrating frequency of its own, i.e., if you tap on it like a timpani head, it should vibrate at a particular frequency (I would think it would be fairly high-pitched, given how small it is). Does that mean the ear hears that frequency better than other frequencies?
 
  • #9
Sophrosyne said:
This makes sense, thank you.

What is interesting is that the eardrum itself is a membrane which should have a fundamental vibrating frequency of its own, i.e., if you tap on it like a timpani head, it should vibrate at a particular frequency (I would think it would be fairly high-pitched, given how small it is). Does that mean the ear hears that frequency better than other frequencies?
The eardrum is a membrane, but it is unlike a drumhead in that it is firmly connected to a series of little bones called ossicles. So it is not free to vibrate at its fundamental frequency.
 
  • #10
@Sophrosyne if you are interested in human sound mechanisms, check this out.



It took me a while to believe that the "polyphonic singing" is a real thing. Some of the examples are just flat amazing. This is just the first one I found in a Google search.
 
  • #11
Sophrosyne said:
A chord played on a piano is multiple different pitches, each at a different frequency, played simultaneously. The ear hears this as separate sounds. … My question is: how do the three tones get perceived that way? Why not as a single complex wave?

In my experience my ear is not a good frequency analyzer for continuous tones. Test: when listening to the sound of a square wave, I am unable to count the separate frequency components. And when listening to the mixture of two or three continuous pure tones, my ear experiences it primarily as a single sound. In my opinion, a chord played on a piano is not a fair test, because the hammers do not strike perfectly simultaneously, and the tones have different decay times.
 
  • #12
Sophrosyne said:
What is interesting is that the eardrum itself is a membrane which should have a fundamental vibrating frequency of its own, i.e., if you tap on it like a timpani head, it should vibrate at a particular frequency (I would think it would be fairly high-pitched, given how small it is). Does that mean the ear hears that frequency better than other frequencies?
Firstly, the membrane has many modes of vibration, which could 'colour' the perceived sound; and secondly, the system has been 'engineered' to pass the received sound power as efficiently as possible over a very wide frequency range (around ten octaves). This implies that the resonances are damped, by the air in the ear canal at one end of the chain and by the cochlea at the other. (A significant resonance will only occur in a membrane if energy is allowed to build up in it, and that doesn't happen in the ear.)
A loudspeaker has a large diaphragm (cone), and the cabinet (its sides and internal air volume) has natural resonances of its own. These can sound dreadful unless the cabinet is damped quite drastically. It is the same idea as sound transmission in the ear.
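
As a toy, single-mode illustration of that damping point (my own sketch, assuming Python with NumPy; the 3 kHz mode frequency is an arbitrary choice), the steady-state response of a driven, damped mode has a tall narrow peak when lightly damped and is nearly flat when heavily damped:

```python
import numpy as np

f0 = 3000.0                    # assumed natural frequency of the mode, in Hz
w0 = 2 * np.pi * f0
w = 2 * np.pi * np.linspace(100, 10000, 2000)

def gain(zeta):
    """Steady-state amplitude of x'' + 2*zeta*w0*x' + w0^2*x = sin(w*t)."""
    return 1.0 / np.sqrt((w0**2 - w**2)**2 + (2 * zeta * w0 * w)**2)

for zeta in (0.02, 0.7):
    g = gain(zeta)
    print(f"zeta = {zeta}: peak / low-frequency response = {g.max() / g[0]:.1f}")
# Light damping gives a ~25x resonant peak that would 'colour' the sound;
# heavy damping gives an almost flat response over the band.
```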
spareine said:
In my experience my ear is not a good frequency analyzer for continuous tones. Test: when listening to the sound of a square wave, I am unable to count the separate frequency components. And when listening to the mixture of two or three continuous pure tones, my ear experiences it primarily as a single sound. In my opinion, a chord played on a piano is not a fair test, because the hammers do not strike perfectly simultaneously, and the tones have different decay times.
The analysis is very clever indeed, and we use all sorts of clues from the sound we hear when we identify its content and its source. We do both frequency and time analysis of the waveform. Actually, the time-domain representation of any sound that you see on a 'scope means very little to the casual observer, and it takes a lot of training and experience to infer much about the frequencies and waveforms from such a picture. Show someone a spectrum-analyser display and they could easily determine the notes and the chords (with a little help). On the other hand, at a much slower scan rate the time-domain display can show the syllables and rhythms of speech and music.
 
  • #13
Sophrosyne said:
And a related question regarding the stereo diaphragm: how does a single diaphragm vibrate to recreate all those different sounds, not to mention all the overtones? It must be vibrating as the sum of all those waves...
Yes: there is only one, albeit very complicated wave. The principle at work here is called "superposition":
http://www.acs.psu.edu/drussell/demos/superposition/superposition.html

Any two or more waves can be added together into a single, more complicated wave. This is how sound transmission and playback systems work (though different sounds are first recorded on individual channels before being combined).

Also, an object has natural frequencies of its own, but it can be driven to vibrate at any frequency as long as you keep applying a force to it. It is not limited to producing only its natural frequencies.
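
A small numerical check of that last point (my own sketch, assuming Python with NumPy; the 500 Hz natural frequency and 200 Hz drive are arbitrary choices): once the start-up transient dies away, the oscillator vibrates at the driving frequency rather than at its own natural frequency.

```python
import numpy as np

fs = 20000.0
dt = 1.0 / fs
t = np.arange(0, 2.0, dt)
f_nat, f_drive, zeta = 500.0, 200.0, 0.05
w0 = 2 * np.pi * f_nat

x = v = 0.0
motion = np.empty_like(t)
for i, ti in enumerate(t):
    force = np.sin(2 * np.pi * f_drive * ti)
    v += (force - 2 * zeta * w0 * v - w0 * w0 * x) * dt   # semi-implicit Euler
    x += v * dt
    motion[i] = x

steady = motion[len(motion) // 2:]            # discard the start-up transient
spectrum = np.abs(np.fft.rfft(steady))
freqs = np.fft.rfftfreq(len(steady), dt)
print(round(freqs[np.argmax(spectrum)]))      # 200 -- the driving frequency
```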
 
  • #14
tnich said:
That indicates that your ears and brain process sounds in a non-linear fashion.
And why not? You have to remember that the way we process information about the world around us is nothing like the way a sound recorder or a TV camera works. We evolved rather than being designed by an engineer, and there are many apparently lunatic design aspects to the human body.
A couple of years ago I had an echocardiogram, and I was looking at an image of my heart happily pumping away, with the valves apparently being held in place by lots of flimsy-looking strings. It looked like a model a child could have put together with a supermarket plastic bag and bits of string. I had to look at it for a long time before I could believe that my life had been relying on that mechanism for many decades. Yet it really works very well (even my slightly out-of-condition model)!
So trying to apply a conventional design critique to any part of us is going to confuse us. The body is always constructed in a different way than we would build a replacement. Non-linearity is not a problem if you are appropriately analysing all the many signal channels that our senses use.
 
  • #15
Sophrosyne said:
A chord played on a piano is multiple different pitches, each at a different frequency, played simultaneously. The ear hears this as separate sounds. The cochlea of the ear is organized by tone (tonotopic organization), from highest to lowest frequency. So a triad chord will stimulate three corresponding areas of hair cells in the cochlea separately.

My question is: how do the three tones get perceived that way? Why not as a single complex wave? It seems that the combined air pressure wavefront generated from three different notes actually should combine to form a very complex wave pattern, through a sort of Fourier summation. Does the ear do some kind of "reverse" Fourier analysis to break up this complex wave pattern into its individual component waves?

This is even more puzzling when you listen to a whole symphony orchestra playing together at the same time. You can still distinguish the strings from the trumpets from the timpani from the flutes from the triangle, etc. The ear can clearly still distinguish all the different pitches and timbres, with all their associated overtones, ranging from the tuba to the piccolo. But I can't even imagine how complex the combined wave of all these instruments would appear if you put them all on the same graph. Somehow the ear still seems able to sort them out and hear them as separate sounds.

And a related question regarding the stereo diaphragm: how does a single diaphragm vibrate to recreate all those different sounds, not to mention all the overtones? It must be vibrating as the sum of all those waves, in the same way the eardrum must be vibrating.

How?
The key is:
"So a triad chord will stimulate three corresponding area of hair cells in the cochlea separately.
How do the three tones get perceived that way?"

Part of the ear is detecting the three pitches simultaneously, and the brain then interprets those sounds.
Generally the lowest pitch is the one our brain interprets as the pitch of the sound, and the other pitches contribute a "quality of sound" property. Indeed, even a single note on a piano consists of a number of frequencies (the fundamental and its overtones), and it is those frequencies and their relative intensities that let us recognise the sound as coming from a piano (provided you have previously seen and heard a piano at the same time).
Indeed:
If a double bass is recorded and then played back through a cheap sound system that simply cannot reproduce the fundamental frequency, just the overtones, our brain can recognise the "incomplete" set of overtones and is "fooled" into "hearing" the lower note that was originally produced.
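
That "missing fundamental" effect can be sketched numerically (my own construction, assuming Python with NumPy): build a tone from harmonics 2 through 5 of a 100 Hz fundamental and leave the 100 Hz component out. The spectrum then contains nothing at 100 Hz, yet the waveform still repeats every 1/100 s, which is one common way to explain why the lower note is still heard.

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
f0 = 100.0
tone = sum(np.sin(2 * np.pi * k * f0 * t) for k in (2, 3, 4, 5))

# With a 1 Hz bin spacing, bin 100 holds the (absent) fundamental.
spectrum = np.abs(np.fft.rfft(tone))
print(spectrum[100] < 1e-6 * spectrum[200])   # True: no energy at 100 Hz

# Yet the autocorrelation peaks at a lag of fs/f0 = 80 samples,
# i.e. the signal still repeats with the 100 Hz period.
ac = np.correlate(tone, tone, mode="full")[len(tone) - 1:]
print(np.argmax(ac[40:160]) + 40)             # 80
```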
 
  • #16
PeterO said:
our brain can recognise the "incomplete" set of overtones and is "fooled" into "hearing" the lower note that was originally produced.
Our brains are extremely good at making the most of limited information. They cope amazingly well with low levels of lighting and poor listening conditions. Of course, they sometimes don't get it right, but when you consider that the basic system evolved for the purposes of an early hunting hominid in jungle or savannah conditions, it does pretty well.
 
  • #17
phinds said:
@Sophrosyne if you are interested in human sound mechanisms, check this out.



It took me a while to believe that the "polyphonic singing" is a real thing. Some of the examples are just flat amazing. This is just the first one I found in a Google search.


That is nothing short of amazing.
 
  • #18
phinds said:
@Sophrosyne if you are interested in human sound mechanisms, check this out.



It took me a while to believe that the "polyphonic singing" is a real thing. Some of the examples are just flat amazing. This is just the first one I found in a Google search.


I always wondered about this note from Paul Mac. It's either a top A or a G. Is it polyphonic? From 3:08-3:11.

 
  • #19
spareine said:
In my experience my ear is not a good frequency analyzer for continuous tones. Test: when listening to the sound of a square wave, I am unable to count the separate frequency components. And when listening to the mixture of two or three continuous pure tones, my ear experiences it primarily as a single sound. In my opinion, a chord played on a piano is not a fair test, because the hammers do not strike perfectly simultaneously, and the tones have different decay times.

Agree - strings are better for that "block" sound or something else that has a low percussive element to it.
 

1. How does the ear detect different pitches?

The ear detects different pitches through the vibration of the eardrum caused by sound waves. These vibrations are then transmitted to the cochlea in the inner ear.

2. What is the role of the cochlea in distinguishing pitches?

The cochlea contains tiny hair cells that are responsible for detecting different frequencies of sound. Each hair cell is tuned to a specific frequency, allowing the brain to interpret different pitches.

3. How does the brain interpret different pitches?

The brain interprets different pitches by receiving signals from the hair cells in the cochlea. These signals are then sent to the auditory cortex, where they are processed and translated into different pitches.

4. Can the ear distinguish between very similar pitches?

Yes, the ear is capable of distinguishing between very similar pitches. This is due to the different hair cells in the cochlea that are sensitive to different frequencies, allowing for a precise interpretation of sound.

5. Are there any factors that can affect the ear's ability to distinguish pitches?

Yes, there are several factors that can affect the ear's ability to distinguish pitches, such as age, exposure to loud noises, and certain medical conditions. These factors can impact the sensitivity of the hair cells in the cochlea, leading to difficulty in distinguishing between pitches.
