The first part of this scenario describes the "cocktail party effect": it is really amazing that we can use selective listening to tune out background voices and concentrate on our conversation. This effect was first studied by Cherry (1953), who found that this sort of listening is easier when the voices appear to come from different locations, as they do in real social situations. In the laboratory this can be mimicked with a dichotic listening task: while wearing headphones, the participant hears the background conversation in one ear and is asked to focus on a voice heard in the other ear. If, instead, all of the voices come from the same location, the task becomes much more difficult. The laboratory equivalent would be streaming all of the voices into both sides of the headphones.
The second part of the scenario and this week's meme illustrate later work by Moray (1959). Usually in dichotic listening tasks the participants are pretty good at repeating what they are instructed to listen to and are barely aware of what is being streamed into the other ear. Moray found an exception to this: many people notice when their names are mentioned in the background speech that they have been instructed to ignore. Later research suggests that this is particularly true of people who are easily distracted and have poor working memories.
Some modern studies show us when the response to our name may develop and when it may decline. To start, Newman (2005) performed a series of studies to determine the age at which babies start to pick out their names from background speech. Babies sat on their parents' laps and listened to a recording of three women speaking: throughout the entire recording one voice read passages from books, while a second voice said the baby's own name, alternating with a third voice saying similar-sounding names. The recording came out of a loudspeaker next to a red light that would go on when names were mentioned, so the red light served as the single "source" of the voices. Newman could tell that a baby noticed a name if the baby looked at the red light when that name was layered over the book passage.
She found that babies as young as five months showed some ability to pick out their names: they looked at the light slightly longer when their own names were overlaid, but only when those names were 10 decibels louder than the words from the book passage. Newman then demonstrated that by around age one, young children no longer require their names to be that much louder than the background speech to notice them. So this ability appears to develop in the first year of life and is then further honed toward adult levels.
Switching to the other end of the lifespan, Naveh-Benjamin, Maddox, Kilb, Thomas, Fine, Chen, and Cowan (2014) performed a series of studies comparing young adults to senior citizens on a dichotic listening task. Both age groups were instructed to listen to the words streamed into one ear and to ignore the background words streamed into the other ear. (Although I doubt that any of them looked as cool wearing their headphones as Ruth Flowers did DJ'ing at age 72.)
Naveh-Benjamin et al. wondered whether older adults, who usually have poorer working memories due to aging, would perform like young adults who have poor working memories. Specifically: would they be more likely to notice when their names were mentioned in the background speech they were told to ignore? The results were surprising: in several variations of the study, senior citizens were consistently less likely than poor-memory young adults to notice their names in the background words. In fact, they noticed their names even less often than the high-memory young adults! This trend was not influenced by the older participants' individual working memory abilities, by which ear received the background speech, or by how quickly the words were paced.
Even more striking was the finding that the seniors rarely noticed their names even when the task was changed so that they were instructed to listen to the recording that contained their names and to ignore the speech streaming into the other ear! No wonder the title of this research article is "Older adults do not notice their names..."!
Taken together, these studies suggest that our tendency to tune out or tune in is related to a number of cognitive processes. Newman suggested that infants may develop these abilities as their understanding of speech as a tool and their ability to listen selectively increase. Naveh-Benjamin et al. emphasized that they could not determine what caused their results, but they wagered that, for older adults, concentrating on one stream of speech while ignoring another demands more mental effort, and this extra "effort" may have produced their results. Clearly further research is required.
On a lighter note, if you are at a noisy party gossiping about a person across the room, you are probably not going to get caught if that person is a baby or a senior citizen! But hopefully you will have more tact than Jerry and Elaine on "Seinfeld".
Further Reading:
The Newman (2005) and the Naveh-Benjamin et al. (2014) articles can be accessed at your local college library.
Here is a great article on the neuroscience behind the cocktail party effect by Golumbic et al. (2014).
A "Psychology Today" blog post by Liane Davey on ending the negative gossip habit.