Untangling spoken words
Being able to listen to two conversations at once, say an intriguing one filled with gossip and a less interesting one directed at the listener, might be an inherited skill, according to research by the National Institutes of Health.
The listener uses changes in pitch, the direction of sound, time delays between different sounds, and the onset, or the starting and stopping, of speech to separate the two conversations, says Didier Depireux, assistant professor of anatomy and neurobiology at the University of Maryland School of Medicine in Baltimore.
A listener with an auditory processing disorder (APD) — a broad and complex group of disorders that make it difficult to process the spoken language even though hearing acuity is fine — will find it hard to extract the different sounds and focus on one sound source, says Mr. Depireux, who holds a doctorate in physics.
“Kids who have these disorders can perceive sounds, but they have difficulty putting the sounds together,” Mr. Depireux says.
An NIH study on the ability to listen to two conversations at once may help researchers better understand APD. The study, conducted from 2002 to 2005 by the National Institute on Deafness and Other Communication Disorders (NIDCD), looked at 194 same-sex pairs of fraternal and identical twins to determine whether auditory processing skills are inherited.
The findings of the study, which were published last year, were based on the results of five tests that identify auditory processing difficulties in children and adults. In four of the tests, the researchers found a significantly higher correlation among identical twins than fraternal twins, indicating that auditory processing ability is genetic. Three of the tests had the twins listen to two different one-syllable words or nonsense syllables simultaneously in the right and left ears, while the fourth sped up words played into the right ear.
According to the study, dichotic listening ability, or the ability to identify words and nonsense syllables entering both ears, is 73 percent attributable to genetic differences, the same as other inherited traits such as height and type 1 diabetes, as stated on the NIDCD's Web site.
“It’s one thing to prove something is inheritable. It’s another to find the gene,” says Chris Zalewski, clinical research audiologist for NIDCD, adding that the next step for researchers is to identify the particular gene or genes responsible for dichotic listening ability.
APD includes a variety of difficulties with processing spoken language, including distinguishing spoken messages from background noise, discriminating among different sounds, localizing the source of a sound, and sequencing sounds into words, says Maxine Young, an audiologist with a private practice in Broomall, Pa., in her white paper “Recognizing and Treating Children With Central Auditory Processing Disorders.”
“When the signal from the peripheral part of the ear enters into the central nervous system, there’s some degradation of that signal, so what people with processing problems interpret is different than what’s actually been said,” says Ms. Young, an adjunct faculty member who teaches about auditory processing disorders at the Pennsylvania College of Optometry in Elkins Park, Pa. APD can make it difficult to learn to read, which is an auditory task built on learning sounds, and to understand a list of instructions, which must be held in memory while the critical words are processed, Ms. Young says.
“These kids work harder; they have to use more energy, like with listening to an accent, to figure out speech,” she says.
The introduction of new vocabulary or topics can cause an auditory overload, as can a list of instructions delivered while these children are trying to process an ongoing conversation, Ms. Young says.
“They seem to tune out when there’s too much oral language,” she says. “They appear to have difficulty paying attention, but this is not attention deficit disorder.”
Those with APD may frequently say “What?” or “Huh?” after hearing but not understanding a spoken message, says Teri James Bellis, associate professor and chairwoman of the department of communication disorders at the University of South Dakota in Vermillion. She is the author of several books, including “When the Brain Can’t Hear: Unraveling the Mystery of Auditory Processing Disorder,” published in 2002.
“Auditory processing is the brain’s ability, the ability of the central nervous system, to use incoming auditory information to represent it accurately,” says Ms. James Bellis, who holds a doctorate in audiology and hearing sciences with specialty certification in language and cognition. “Broadly defined, it’s the efficiency and effectiveness by which the auditory system uses auditory information.”
Auditory processing is a complex process from the time the ear hears sound to when the brain processes and interprets what was said, Mr. Zalewski says. “The ear receives sound and sends neural signals to the brain in a complex fashion where we interpret sound and make it meaningful,” he says. Sound waves in the environment are converted from mechanical energy to neural pulses in the inner ear, says Carmen C. Brewer, chief of audiology in the otolaryngology branch at NIDCD.
“It’s not a straight-up signal from the ear to the brain,” says Ms. Brewer, who holds a doctorate in audiology.
The signals are crossed in the brain as most are sent to the hemisphere opposite the ear receiving the sound, allowing for comparison of the signals at the brain stem and cortex levels, she says.
“It can provide a little bit of redundancy to an auditory signal,” Mr. Zalewski says, adding that the redundancy allows for analysis of the sound, such as its localization and lateralization, or simply where the sound is coming from. Those with APD have difficulty with this analysis, he says.
“APD is a problem in how complex sound like speech is processed. This processing problem can occur at one or multiple points along the auditory pathway,” Mr. Zalewski says.