Selective Hearing: How Brain & Hearing Work Together

  • Medical
  • Thread starter Enix
  • Tags
    Hearing
In summary: subjects could recall almost nothing about the unattended message, despite the fact that it had been playing the entire time. In other words, the subjects were passively receiving the message while their attention was focused elsewhere. Cherry hypothesized that the subjects could recall so little because the message had been filtered out before deeper processing while their active listening was engaged elsewhere. Cherry's findings showed that the brain selectively processes information while it is actively listening.
  • #1
Enix
As I was sitting in a restaurant, surrounded by conversations at nearby tables, it struck me how fascinating it is that the brain and hearing work together. By this I mean there were various conversations going on at once, each one as audible to me as the others. Yet by choice I could completely shut out every conversation except the one I chose to hear, and in a split second drop that one and tune in to another I had previously shut out. Obviously this holds true for the many sounds our ears take in while our brain selects what we actually hear, as with an orchestra: all the instruments playing at the same time, at the same pitch and so on, yet by choice I can pick out just the instrument I want to hear.
 
  • #2
Enix said:
As I was sitting in a restaurant, surrounded by conversations at nearby tables, it struck me how fascinating it is that the brain and hearing work together. By this I mean there were various conversations going on at once, each one as audible to me as the others. Yet by choice I could completely shut out every conversation except the one I chose to hear, and in a split second drop that one and tune in to another I had previously shut out. Obviously this holds true for the many sounds our ears take in while our brain selects what we actually hear, as with an orchestra: all the instruments playing at the same time, at the same pitch and so on, yet by choice I can pick out just the instrument I want to hear.

Yes, it's pretty interesting, isn't it? It's the same with sight, touch, taste and smell. The reason is that the human body is capable of receiving many simultaneous inputs, but the brain can only process a select few at any one moment.
 
  • #3
Chaos' lil bro Order said:
Yes, it's pretty interesting, isn't it? It's the same with sight, touch, taste and smell. The reason is that the human body is capable of receiving many simultaneous inputs, but the brain can only process a select few at any one moment.

But isn't choosing which inputs to process a process in itself?
 
  • #4
Good point. I'm guessing it works like several streams running side by side. At the end of the streams is one giant dam that blocks all the streams but the one you choose to let through. I know this is a crude example, but I suspect it's fairly accurate. Of course each stream trickles slightly through the dam, which is why you can always hear each background sound. But the main sound you focus on passes through the dam unimpeded.
 
  • #5
That sounds like a reasonable theory. I don't see how it could be anything else; if it were, that would be truly interesting to hear about.

It's strange to wonder how your mind chooses which "stream" to focus on, since it would have to analyze the current input, compare it to the previous one, and then calculate how close it is to the previously accepted input in order to accept or reject it as the focus.

It seems appropriate to reference this song I made yesterday: part ambience, part dance, part trance. The reason I bring it up is that it contains around 40 instruments, almost all of which play simultaneously in parts.

As expected, while listening casually I only pick up the main riff, piano and snares. But really, there's so much else going on if you try to listen to the background instead.

Direct Link : http://www.soundclick.com/util/getplayer.m3u?id=4728958&q=hi
If It Doesn't Work : http://www.soundclick.com/bands/songInfo.cfm?bandID=625477&songID=4728958

Interesting stuff.
 
  • #8
some tidbits that may or may not be of interest..

Back in the early 1950's, there was a researcher named E.C. Cherry who was curious about what kinds of things people remember when they are hearing a conversation but not attending to it. He did a series of dichotic listening tests in which he had subjects wear headphones with different messages playing in each ear. He gave subjects the task of "shadowing" or repeating back aloud the message in one ear and then later asked what they recalled about the message playing in the unattended (non-shadowed) ear. What he found is that there were very few things that people could recall from the unattended ear message - and the things that were remembered were physical properties of the message, but not semantic details. Subjects noticed, for instance, when the gender of the speaker changed or when the speech was interrupted with tones, but not when the speaker's language changed or when the speech was played in reverse.

Ever since then, there's been ongoing debate about what gets through the attention filter, and when, and how much processing it gets. Early selection theories propose that only one message is selected (on the basis of physical characteristics) for full semantic processing and everything else is lost. Late filter theories propose that all messages are processed semantically, but only one is chosen for response. Attenuation theories straddle the fence: they maintain that one message gets selected for full processing, and everything else gets partial processing.
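The three theories above differ only in where the filter sits and how hard it cuts. As a toy sketch (not a cognitive model; the channel names, the 0.2 attenuation factor, and the `process` function are all illustrative assumptions), the difference can be expressed as per-channel processing weights:

```python
def process(channels, attended, model="attenuation"):
    """Toy comparison of the three filter theories.

    channels: dict mapping channel name -> message (e.g. left/right ear).
    attended: name of the shadowed (attended) channel.
    Returns a dict of processing weights, where 1.0 means full
    semantic processing and 0.0 means the message is lost entirely.
    """
    weights = {}
    for name in channels:
        if model == "early":
            # Early selection: only the attended channel gets through.
            weights[name] = 1.0 if name == attended else 0.0
        elif model == "late":
            # Late filter: everything is processed; selection happens
            # only at the response stage.
            weights[name] = 1.0
        else:
            # Attenuation: unattended channels are turned down, not off
            # (the 0.2 factor is an arbitrary illustrative value).
            weights[name] = 1.0 if name == attended else 0.2
    return weights
```

Under the attenuation account, the nonzero weight on the unattended channel is what lets physical properties (speaker gender, tones) leak through while semantic detail is mostly lost, matching Cherry's findings.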
 
  • #9
I don't think it's very likely that all messages get processed semantically. The brain must have limited resources, so possibly the input with the highest volume or urgency overrides other inputs in the semantics-oriented areas of the brain.
As input signals propagate through the brain, they set the brain state (the potentials across neurons). The strongest signals leave their corresponding state in more areas of the brain, so, as neurons fire onward from the old state to a new one, the input that had the bigger impact on the old brain state is the one that gains the focus, gradually phasing out the impact of the remaining input sources until the shared resources are released.
This is just an opinion of course, but I think that, though all inputs are processed to some degree, only a single input is semantically processed at a time.
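The "strongest input gradually phases out the rest" idea described above resembles what's often called winner-take-all competition. Here is a minimal sketch of that dynamic (the function name, the inhibition constant, and the update rule are illustrative assumptions, not anything from the literature):

```python
def winner_take_all(strengths, inhibition=0.3, steps=20):
    """Toy winner-take-all competition between simultaneous inputs.

    Each input suppresses every other input in proportion to the
    competitors' combined activation; activations are clamped at zero.
    Over repeated steps, the strongest input survives while weaker
    ones are gradually phased out.
    """
    acts = list(strengths)
    for _ in range(steps):
        total = sum(acts)
        # Each input is inhibited by the summed activity of its rivals.
        acts = [max(0.0, a - inhibition * (total - a)) for a in acts]
    return acts
```

Running it on three competing inputs, e.g. `winner_take_all([1.0, 0.8, 0.5])`, leaves only the strongest input with nonzero activation, which is the "gaining the focus" behavior the post describes.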
 
  • #10
I've noticed that, for listening, I am "left-eared". Which ear do you hold the phone to? Can you pay attention as well if you hold it to the other ear?

We do a lot of our training at work online. I have a headset with two earphones, but I find I have better comprehension if I just hold one earphone to my left ear instead of using both ears with the headset.
 
  • #11
selfAdjoint said:
By chance Jennifer Ouellette has a post on this topic today. She quotes some interesting research. http://twistedphysics.typepad.com/cocktail_party_physics/2006/12/party_girl.html
She uses an interesting metaphor relative to our selective hearing; that of tuning our neurons to specific voices, thereby tuning out others.

As students of physics, we may relate to this concept of tuning as it applies to a filter. A filter tuned to some center frequency attenuates (selectively ignores) adjacent frequencies.
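To make the tuning analogy concrete, here is a small sketch of the magnitude response of a standard second-order band-pass filter (the center frequency, Q value, and function name are illustrative choices, not anything from the thread):

```python
import math

def bandpass_gain(f, f0=1000.0, q=5.0):
    """Magnitude response of a second-order band-pass filter tuned
    to center frequency f0 (Hz) with quality factor q.

    Gain is exactly 1.0 at f0 and rolls off for adjacent frequencies,
    which is the "attend to one, attenuate the neighbors" behavior.
    """
    ratio = f / f0 - f0 / f  # zero at resonance, grows off-center
    return 1.0 / math.sqrt(1.0 + (q * ratio) ** 2)

# The "attended" frequency passes at full strength; neighbors are
# attenuated more the farther they sit from the center.
for f in (500, 900, 1000, 1100, 2000):
    print(f"{f:5d} Hz -> gain {bandpass_gain(f):.3f}")
```

A higher `q` narrows the passband, i.e. a more sharply "focused" filter, which maps loosely onto tighter selective attention in the metaphor.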

MIH also describes the filter analogy, and I believe it is quite accurate. In analyzing our brain function, it is useful to model this effect as a filter, in that the end result, "selective attention", is exactly what we would expect a filter to do. The actual mechanisms have not been elucidated (as selfAdjoint's reference also alludes).

As Chaos' lil Bro points out, this mechanism holds for our other senses as well, for example vision. Recall a time you were in a face-to-face conversation with someone in a room. How much attention were you paying to the color of a lampshade, the shape of a table in that room, or even people who may have walked by? This neural filtering mechanism does a nice job.:rolleyes:
 
  • #12
Evo said:
I've noticed for listening, I am "left eared". Which ear do you hold the phone to? Can you pay attention as well if you hold the phone to the other ear?

Interesting that you wonder about the ability to "pay attention" in connection with ear preference. This article, http://ezinearticles.com/?Right-Brain,-Left-Brain&id=43531 , points out that "auditory stimulus in your left ear comes up through the brain stem over to your right brain". The author goes on to discuss that among the many right-brained activities is "holding the attention span".
 
  • #13
Sane said:
It seems appropriate to reference this song I just made yesterday. It's half Ambience, half Dance, half Trance. The reason I bring it up, is because it contains around 40 instruments, almost all of which play simultaneously at parts.

As expected, while listening casually I only pick up the main riff, piano and snares. But really, there's so much else going on if you try to listen to the background instead.
Listening to music is a special activity among audio stimuli, for we experience music on several levels. The famous composer Aaron Copland (http://www.westminster.edu/staff/brennie/WDGroup4/what2listen4.htm) eloquently described these aspects of listening to music. He categorizes them into three planes (the sensuous, the expressive, and the sheerly musical).
 
  • #14
Studies into how we process vision, often using brain-damaged patients, have indicated that when we look at a scene we fixate on a few key points and then fill in the rest from memory. We then keep looking for changes and interpolate.
Using eye-tracking cameras it is possible to see how much time is spent looking at moving objects and how much checking on static objects, and it is very little.
Tests on racing drivers show that they actually drive looking mainly at the edge of the track and rarely look around.

So, getting back to the point, does hearing work in the same way? We know what the words of our native language sound like, so do we just pick out patterns from the mush and assign them to a memory of what the sound should be like?

That could explain why you mishear words rather than fail to hear them, and possibly goes toward explaining why women can pick out a tune from the merest hint of a bass line in a crowded bar when men can't hear a thing.

In a noisy room with two different conversations, we would link together the words that made sensible sentences. But could you do it if two equally audible conversations about the same subject arrived at the same ear at a similar pitch, without concentrating on separating the voices rather than the words?
 
  • #15
Math Is Hard said:
some tidbits that may or may not be of interest..

Back in the early 1950's, there was a researcher named E.C. Cherry who was curious about what kinds of things people remember when they are hearing a conversation but not attending to it. He did a series of dichotic listening tests in which he had subjects wear headphones with different messages playing in each ear. He gave subjects the task of "shadowing" or repeating back aloud the message in one ear and then later asked what they recalled about the message playing in the unattended (non-shadowed) ear. What he found is that there were very few things that people could recall from the unattended ear message - and the things that were remembered were physical properties of the message, but not semantic details. Subjects noticed, for instance, when the gender of the speaker changed or when the speech was interrupted with tones, but not when the speaker's language changed or when the speech was played in reverse.

Ever since then, there's been ongoing debate about what gets through the attention filter, and when, and how much processing it gets. Early selection theories propose that only one message is selected (on the basis of physical characteristics) for full semantic processing and everything else is lost. Late filter theories propose that all messages are processed semantically, but only one is chosen for response. Attenuation theories straddle the fence: they maintain that one message gets selected for full processing, and everything else gets partial processing.

Don't be so modest; your example is of high interest and well on topic.

Maybe you've shared in this experience that happens to me from time to time. I am working at my computer while my son watches TV, with the audio at a threshold where I can only semantically process the words if I bring my full attention to bear on them. My peripheral hearing cannot pick out the semantic meanings, but it can hear the iambic pentameter (spelled wrongly, I'm sure), and it can hear the tones and gaps quite well.

I noticed this with more clarity when my son turned to a French channel and I, intent on my work, gave it only my peripheral attention and just assumed it was English. Then I finished my work, turned to the TV, and heard it as French. This echoes your nice post about Cherry's work.
 
  • #16
Chaos' lil bro Order said:
Don't be so modest; your example is of high interest and well on topic.

Maybe you've shared in this experience that happens to me from time to time. I am working at my computer while my son watches TV, with the audio at a threshold where I can only semantically process the words if I bring my full attention to bear on them. My peripheral hearing cannot pick out the semantic meanings, but it can hear the iambic pentameter (spelled wrongly, I'm sure), and it can hear the tones and gaps quite well.

I noticed this with more clarity when my son turned to a French channel and I, intent on my work, gave it only my peripheral attention and just assumed it was English. Then I finished my work, turned to the TV, and heard it as French. This echoes your nice post about Cherry's work.
gee, thanks:redface:

I am curious as to whether or not you are bilingual (English and French, specifically). There is another member here who is fluent in both languages, and lives in a country where both are often spoken, and has mentioned that he frequently switches between both languages in conversation without noticing.

Personally, I have noticed that if I overhear people speaking French, it draws my attention, and I become aware of the language. This may be because I have studied the language later in life and only certain words have semantic links for me. Perhaps there is a trigger in my brain that notices a mismatch in the spoken words versus the language I am currently working in. Or it may be that I was so indoctrinated from my French classes to "pay attention!" that this still has an effect on me whenever I hear French.:smile:

I suspect that for people with a weak knowledge of French (like myself) there is a semantic trigger when hearing recognizable words from the language, and also a small time delay from the word/meaning incongruence, something like we see in the Stroop effect (http://en.wikipedia.org/wiki/Stroop_effect). The delay is enough to be noticeable, bringing it to conscious awareness, where we try to make sense of it.

For those fluent in English and French, this delay may be so tiny that it is unnoticeable (because the switching is fast and automatic even in conscious conversation), and for non-French speakers, no delay is present because the words are not recognizable (semantic linking is never attempted, thus no incongruence). So no attention may be called to an unattended language change in either case.
 
  • #17
Math Is Hard said:
gee, thanks:redface:

I am curious as to whether or not you are bilingual (English and French, specifically). There is another member here who is fluent in both languages, and lives in a country where both are often spoken, and has mentioned that he frequently switches between both languages in conversation without noticing.

Personally, I have noticed that if I overhear people speaking French, it draws my attention, and I become aware of the language. This may be because I have studied the language later in life and only certain words have semantic links for me. Perhaps there is a trigger in my brain that notices a mismatch in the spoken words versus the language I am currently working in. Or it may be that I was so indoctrinated from my French classes to "pay attention!" that this still has an effect on me whenever I hear French.:smile:

I suspect that for people with a weak knowledge of French (like myself) there is a semantic trigger when hearing recognizable words from the language, and also a small time delay from the word/meaning incongruence, something like we see in the Stroop effect (http://en.wikipedia.org/wiki/Stroop_effect). The delay is enough to be noticeable, bringing it to conscious awareness, where we try to make sense of it.

For those fluent in English and French, this delay may be so tiny that it is unnoticeable (because the switching is fast and automatic even in conscious conversation), and for non-French speakers, no delay is present because the words are not recognizable (semantic linking is never attempted, thus no incongruence). So no attention may be called to an unattended language change in either case.


I'm not bilingual. I can only speak basic French, perhaps 1,000 words. Are you getting at the possibility that I might have noticed the TV channel was French if I had heard a very recognizable French word like 'Bonjour'? That's a good idea, but I can assure you that there were many recognizable French words that I didn't pick up with my peripheral attention, since it was a children's show with basic French words (the only kind I know:smile: )


You raise an interesting point about true bilinguals mixing and matching both languages in one sentence. It strikes me as quite similar to a person who is very dexterous in both printing and cursive writing. When making a quick note, they often combine the two styles into a half-printing, half-cursive shorthand, probably to improve speed. I find cursive is faster for most lowercase letters and printing is faster for most uppercase letters. When making a quick note, I'll switch between the styles unconsciously, and upon reflection it's easy to see that most of the switches (probably 90%) increased the speed of the note-making. Of course, about 10% of the switching slows the note-making down, but these can be attributed mainly to letter fetishes. By this I mean I much prefer to write the letter 'el' in cursive than to print it, because of the aesthetics of the cursive loop as opposed to the skinny printed 'el'.
 
  • #18
Selective hearing/tuning out

Odd, too, is how someone such as myself (and I am SURE I am not alone) can have a person talking directly at them and completely tune that person out. And of course, this is most often a negative discussion that we REALLY do not want to hear. Yet the volume we receive from that discussion is no lower than that of a more pleasant, positive one, which we usually hear quite well.
 

What is selective hearing?

Selective hearing is the ability to focus on specific sounds while filtering out others. It is a complex process that involves the brain and the auditory system working together.

How do the brain and hearing work together in selective hearing?

The brain plays a crucial role in selective hearing by processing and interpreting the sounds received by the ears. It filters and prioritizes certain sounds based on their importance and relevance.

What are some factors that affect selective hearing?

Several factors can impact selective hearing, including age, attention, and cognitive abilities. As we age, our hearing abilities may decline, making it more challenging to filter out sounds. Similarly, our ability to pay attention or multitask can also affect selective hearing.

Can selective hearing be improved or trained?

Yes, selective hearing can be improved through training and practice. Activities that involve listening to specific sounds amidst background noise, such as playing music or participating in a conversation, can help improve selective hearing abilities.

Are there any disadvantages to selective hearing?

While selective hearing can be beneficial in certain situations, it can also have its downsides. For example, it can lead to misunderstandings or missing important information if the brain filters out relevant sounds. Additionally, relying too heavily on selective hearing can also hinder overall hearing abilities.
