PAETE.ORG FORUMS
Paetenians Home on the Net


(Anatomy) Hearing: Poor Hearing May Cause Poor Memory
adedios
SuperPoster


Joined: 06 Jul 2005
Posts: 5060
Location: Angel C. de Dios

PostPosted: Sun Dec 11, 2005 10:32 am    Post subject: (Anatomy) Hearing: Poor Hearing May Cause Poor Memory






Poor hearing may cause poor memory
Brandeis University
Released on: August 30, 2005
Contact: Laura Gardner 781-736-4204 or gardner@brandeis.edu

Brandeis researchers say older people suffering a hearing loss might also lose the ability to remember spoken language.

The researchers said older adults with mild to moderate hearing loss might expend so much cognitive energy on hearing accurately that their ability to remember spoken language suffers as a result.

The study showed that even when older adults could hear words well enough to repeat them, their ability to memorize and remember the words was poorer than that of individuals of the same age who had good hearing.

"There are subtle effects of hearing loss on memory and cognitive function in older adults," said Arthur Wingfield, the Nancy Lurie Marks Professor of Neuroscience and director of the Volen National Center for Complex Systems. "This study is a wake-up call to anyone who works with older people, including healthcare professionals, to be especially sensitive to how hearing loss can affect cognitive function."

He suggested individuals who interact with older people with some hearing loss could modify how they speak by speaking clearly and pausing after clauses, or chunks of meaning, not necessarily slowing down speech dramatically. The research appears in the journal Current Directions in Psychological Science.

*************************************************************

Questions to explore this topic further:

What is the ear?

http://faculty.washington.edu/chudler/bigear.html
http://kidshealth.org/kid/body/ear_noSW.html
http://clerccenter.gallaudet.e.....7/567.html

How does hearing work?

http://science.howstuffworks.com/hearing.htm
http://www.nidcd.nih.gov/healt.....asp#travel

How does one take care of the ear?

http://www.michdhh.org/health_care/ear_care.html

Does the ear do more than hearing?

http://www.medicinenet.com/scr.....ekey=21685

How is hearing related to memory?

http://interact.uoregon.edu/Me.....rlids.html

What is hearing loss?

http://www.nlm.nih.gov/medline.....0_no_0.htm
http://www.nidcd.nih.gov/health/hearing/older.asp
http://nihseniorhealth.gov/hearingloss/toc.html

What are ear infections?

http://www.nidcd.nih.gov/healt.....smedia.asp

Can noise damage the ear?

http://www.nidcd.nih.gov/health/hearing/noise.asp

How loud is too loud?

http://www.nidcd.nih.gov/healt.....ecibel.asp
http://www.nidcd.nih.gov/healt.....1.asp#loud

GAMES

http://www.brainconnection.com/teasers/
http://www.kidsplanet.org/games/js/whoami.html
http://www.pediatric-ent.com/kids/puzzle.htm
http://www.playmusic.org/stage.html
http://www.lhh.org/noise/funquiz.htm


Last edited by adedios on Sat Jan 27, 2007 4:31 pm; edited 4 times in total
adedios

PostPosted: Mon Dec 26, 2005 11:26 am    Post subject: Say no to New Year noise

Tuesday, December 27, 2005
Say no to New Year noise

The hearing dangers of prolonged exposure to firecrackers

By Ayn Veronica L. de Jesus

AMONG the many celebrations around the world, New Year festivities mark one of the most exciting times of the year. It is an extension of the Christmas holiday spirit, yet a celebration of its own.

Enjoy the season of fireworks
with a measure of caution
by protecting your ears.


As the year turns into the next, we celebrate it with fireworks and pyrotechnics, without which the festivities would not be complete. However, the fireworks inevitably bring with them some hazards not only to our fingers, but also to our ears.

A fraction of a second, shorter than it takes to form a thought, is all it takes to damage the ears and lose one's hearing for life.

Many people take this fact for granted.

According to a study conducted in Germany after the New Year celebrations of 1999/2002 and published in the European Archives of Otorhinolaryngology (2002), one in 10,000 people suffers permanent hearing damage from fireworks each year.

The same study revealed that young adults under the age of 25 are three times more likely to suffer fireworks-related hearing damage because of the constant exposure to loud rock music and portable radios with earphones.

Maximum tolerance

A burst of intense sound, such as an explosion, can cause immediate hearing loss. But long-term hearing loss occurs with time, after prolonged exposure to loud noise. It may creep up on you so gradually that you realize it only when sounds become muffled or distorted.

Sound is measured in decibels, and the level humans can normally tolerate is about 60 decibels, the sound of an ordinary conversation. Experts agree that continual exposure to more than 85 decibels is already dangerous. Any noise level above 120 decibels can damage the cochlea of the inner ear.

A rock concert registers about 115 decibels, and doctors say our ears can tolerate only about 15 minutes of it without ear protection. The blast of a gun, the roar of a jet engine, and fireworks can reach up to 155 decibels. At no time should we be exposed to these sounds without ear protection.
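
As a rough illustration of how loudness and safe exposure time trade off (this rule is background, not from the article): the OSHA workplace-noise limit uses a 5-decibel "exchange rate" in which permissible exposure starts at 8 hours for 90 dB and is halved for every 5 dB above that. A minimal Python sketch, assuming that rule, reproduces the 15-minute figure for a 115-decibel concert:

Code:

def permissible_minutes(level_db):
    # Assumed OSHA-style rule: 8 hours at 90 dB, halved for every 5 dB above 90 dB.
    return 8 * 60 / 2 ** ((level_db - 90) / 5)

for level in (90, 100, 115):
    print(level, "dB ->", round(permissible_minutes(level)), "minutes")
# Prints: 90 dB -> 480 minutes, 100 dB -> 120 minutes, 115 dB -> 15 minutes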

Also, the closer you are to the source of intense noise, the more damaging it is. If, after exposure to loud noise, you notice pain or a ringing sound in your ears, cannot hear someone two feet away, find that sounds around you seem muffled, or must raise your voice to be heard, you may well be suffering from hearing damage already.

And if you think you've grown accustomed to loud noises, that too is a sign your hearing has been damaged.

Moderation is the key

Protecting your ears doesn’t mean going to the mountains over the New Year to stay away from the noise. You don’t have to miss the merrymaking. There are several measures you can take to make sure you don’t suffer short- or long-term hearing loss.

A few days before lighting up those firecrackers, you can buy earplugs or earmuffs.

Earplugs are small inserts that are fitted into the outer ear canal. To keep out noise effectively, an earplug must seal snugly so that the entire circumference of the ear canal is blocked. An ill-fitting, worn-out or dirty earplug may fail to seal out the noise and may simply irritate the ear canal.

Earplugs are available in various shapes and sizes, and can even be customized to fit each person’s ear canals.

Earmuffs fit over the outer ear to form an air seal so the entire ear canal is blocked, and are held in place by an adjustable band. However, earmuffs will not seal around eyeglasses or long hair.

Well-fitted earplugs or muffs can each cut the noise by 15 to 30 decibels, while using both at the same time usually adds 10 to 15 decibels more protection, and should be considered when noise exceeds 105 decibels.

Rounded pieces of cotton or tissue paper stuffed into the ears are poor protectors, as they cut the noise by only about 7 dB.
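
To put those attenuation figures in perspective, here is a back-of-the-envelope sketch. It assumes the simple approximation that the level reaching the ear is the source level minus the protector's reduction; real protector ratings are more involved.

Code:

def level_at_ear(source_db, attenuation_db):
    # Simplified model: protected level = source level - attenuation.
    return source_db - attenuation_db

fireworks = 155                          # peak firework level cited above
print(level_at_ear(fireworks, 25))       # earplugs alone (~15-30 dB): about 130 dB
print(level_at_ear(fireworks, 25 + 12))  # plugs plus earmuffs (10-15 dB more): about 118 dB
print(level_at_ear(fireworks, 7))        # cotton or tissue (~7 dB): about 148 dB, still hazardous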

During the celebrations, stay at a distance from where the fireworks are being lit. Leave the lighting to the elders or the experts. Also make sure that no one else is standing nearby, particularly children.

Sitting outside watching fireworks won’t hurt your hearing. The key is to be aware of how much time you’ve already spent exposed to the loud sounds. Keep it to a minimum. Moderation is key.

It’s also a better idea to set the fireworks out in an open field as this reduces echoes.

Hearing loss is a serious but preventable problem, doctors say. However, taking the risks for granted can cause irreparable damage. No treatment can restore your perfect hearing. Once it’s gone, it’s gone. So listen up and protect your ears.

Enjoy the season of fireworks with a measure of caution by protecting your ears.
adedios

PostPosted: Thu Jan 05, 2006 9:28 am    Post subject: Sound Science: Townshend Blames Headphones for Hearing Loss

Sound Science: Pete Townshend Blames Headphones for Hearing Loss
By Robert Roy Britt
LiveScience Managing Editor
posted: 04 January 2006
11:00 pm ET

In a widely reported story Wednesday, rock star Pete Townshend blames his hearing loss on earphones rather than the loud concerts of his band, The Who.

But how solid is the science behind his claim? Pretty solid.

Townshend has told of his hearing loss before but refrained from blaming headphones or speaking to the science of the situation. But having been forced to alter the style of music he writes, as well as needing to take 36-hour breaks from the noise, he felt compelled to break the silence.

"This very morning, after a night in the studio trying to crack a difficult song demo, I wake up realizing again—reminding myself, and feeling the need to remind the world—that my own particular kind of damage was caused by using earphones in the recording studio, not playing loud on stage," Townshend wrote on his web site Dec. 29.

"My ears are ringing, loudly," the guitarist wrote. "This rarely happens after a live show, unless the Who play a small club. This is a peculiar hazard of the recording studio."

The science of it

Warnings about potential hearing loss from loud concert music go back decades. Researchers in the 1980s began cautioning that the Walkman and other headphone-based music devices also packed risk.

The widespread and increasing use of headphones and the newer earbuds has been shown to induce hearing loss in young people.

An updated warning was issued just last month for the modern devices, including iPods and MP3 players.

"We're seeing the kind of hearing loss in younger people typically found in aging adults," Dean Garstecki, a Northwestern University audiologist, said in December. "Unfortunately, the earbuds preferred by music listeners are even more likely to cause hearing loss than the muff-type earphones that were associated with the older devices."

Earbuds, which are inserted into the ear rather than surrounding it, can boost sound intensity by 6 to 9 decibels, Garstecki said.

What hurts

Hearing loss is typically painless and gradual in its inception, so we don't notice its early stages, except perhaps as a ringing in the ear known as tinnitus.

The American Hearing Research Foundation (AHRF) reports that "1-in-10 Americans has a hearing loss that affects his or her ability to understand normal speech."

The decibel (dB) scale is logarithmic, such that 40 decibels is 100 times as intense as 20 decibels. Some common sounds:

20 dB: A whisper
60 dB: Normal conversation
100 dB: Chainsaw
120 dB: Rock concert
140 dB: Jet engine
180 dB: Firecracker
Length of exposure is a crucial factor in hearing loss. A constant 100-dB sound level can cause damage after 2 hours, according to the AHRF. You don't want to experience 140 dB for even a second.
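
To make the logarithmic scale concrete, here is a minimal sketch (background physics, not taken from the article): each 10 dB step corresponds to a tenfold change in sound intensity, so the intensity ratio between two levels is 10 raised to the power of (difference / 10).

Code:

def intensity_ratio(db_high, db_low):
    # Each 10 dB step is a tenfold change in sound intensity.
    return 10 ** ((db_high - db_low) / 10)

print(intensity_ratio(40, 20))    # 100.0 -- the example given above
print(intensity_ratio(120, 60))   # 1,000,000 -- rock concert vs. normal conversation
print(intensity_ratio(140, 100))  # 10,000 -- jet engine vs. chainsaw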

Earbud exposure

Students at Wichita State University have been found to experience levels of 110 to 120 decibels during normal use of earbuds, Garstecki said.

He fears that better batteries nowadays make personal music players even more dangerous, because people can use them for long stretches. He recommends reducing the sound level and limiting use to an hour a day to stay safe.

Townshend, while not a scientist, also worries about extended use, because music is so often shared among computers in offices and kitchens, and headphones offer privacy.

"If you use an iPod or anything like it, or your child uses one, you MAY be OK," Townshend writes. "It may only be studio earphones that cause bad damage. I only have long experience of the studio side of things (though I've listened to music for pleasure on earphones for years, long before the Walkman was introduced). But my intuition tells me there is terrible trouble ahead."
adedios

PostPosted: Mon Jan 09, 2006 11:16 am    Post subject: Turn Down That Radio! Years Of Loud Noise May Lead To Tumor

Source: Ohio State University
Date: 2006-01-09
URL: http://www.sciencedaily.com/re.....134554.htm

--------------------------------------------------------------------------------

Turn Down That Radio! Years Of Loud Noise May Lead To Tumor

New research suggests that years of repeated exposure to loud noise increases the risk of developing a non-cancerous tumor that could cause hearing loss.

“It doesn't matter if the noise comes from years of on-the-job exposure or from a source that isn't job-related,” said Colin Edwards, a doctoral student in the School of Public Health at Ohio State University.

In the current study, people who were repeatedly exposed to loud noise over the span of several years were on average one-and-a-half times as likely to develop this type of tumor compared to people who weren't exposed to such noise on a regular basis.

The tumor, called acoustic neuroma, grows slowly and symptoms typically become noticeable around age 50 or older. Of the 146 people with acoustic neuroma in this study, nearly two out of three were 50 or older.

An acoustic neuroma slowly presses on the cranial nerve that is responsible for sensing sound and helping with balance. Symptoms include hearing loss and a constant ringing in the ears, or tinnitus.

The study is currently in the online advance access edition of the American Journal of Epidemiology. The study will also appear in the February 15 printed edition of the same journal.

Edwards and his colleagues gathered four years of data from the Swedish portion of the INTERPHONE Study, an international study of cell phone use and tumors that affect the brain and head.

The researchers used the Swedish portion of the study because health officials there keep meticulous data on rates of acoustic neuroma development in the country's population, said Judith Schwartzbaum, a study co-author and an associate professor of epidemiology in the School of Public Health at Ohio State.

In addition to the 146 study participants with acoustic neuroma, another 564 people without the tumor who served as controls were also interviewed by a nurse. The participants in this group were randomly selected from the continuously updated Swedish population registry. Study participants ranged in age from 20 to 69.

All participants were asked if they were regularly exposed to occupational and non-occupational loud noise and, if so, for how many years. “Loud noise” was defined as at least 80 decibels – the sound of city traffic.

If the subjects said that they had been regularly exposed to loud noise, they were then asked to describe the activities during which they were exposed to that noise.

Categories for loud noise exposure included: exposure to machines, power tools and/or construction noise; exposure to motors, including airplanes; exposure to loud music, including employment in the music industry; and exposure to screaming children, sports events and/or restaurants or bars.

The researchers also collected data on the use of hearing protection.

The two types of loud noise posing the highest risk of acoustic neuroma development were exposure to machines, power tools and/or construction (1.8 times more likely to develop the tumor) and exposure to music, including employment in the music industry (2.25 times more likely to develop the tumor).

Exposure to motors, including airplanes, increased acoustic neuroma risk by 1.3 times, while regular exposure to screaming children, sports events and/or bars and restaurants increased the risk by 1.4 times.

The number of years that a person was exposed to any category of loud noise also contributed to the development of acoustic neuroma. Just five years of regular exposure to loud noise increased the chance that a person would develop acoustic neuroma by one-and-a-half times.

“It's not surprising that the longer that people are exposed to loud noise, the greater their chances become for developing the tumor,” Edwards said.

The study results also suggest the importance of wearing ear protection when exposed to loud noises. People who reported that they protected their ears from loud noise had about the same risk of developing acoustic neuroma as people who were not exposed to loud noise. People who protected their hearing were also half as likely to develop acoustic neuroma as people who didn't wear ear protection.

The tumor is fairly rare, accounting for only about 6 to 10 percent of tumors that develop inside the skull. Depending on the population, anywhere from one to 20 people per 100,000 develop acoustic neuroma each year. The people with the tumor in this study had the most common type – unilateral acoustic neuroma. About 95 percent of all cases of acoustic neuroma affect only one ear. The other kind, bilateral acoustic neuroma, is inherited and affects both ears.

If the tumor is caught early enough through a thorough examination and hearing tests, a physician may be able to surgically remove it. But as the tumor grows larger, it may become attached to the nerves that control facial movement, balance and hearing, making it far more difficult to remove the entire tumor.

Edwards and Schwartzbaum conducted the study with researchers from the Institute of Environmental Medicine of the Karolinska Institutet in Stockholm, Sweden.

Funding for this work was provided by the European Union Fifth Framework Program; the Swedish Research Council; and the International Union Against Cancer.
adedios

PostPosted: Mon Jan 09, 2006 11:25 am    Post subject: Hear, hear

Hear, Hear (from Science News for kids)

http://www.sciencenewsforkids......ature1.asp
adedios

PostPosted: Fri Jan 20, 2006 1:44 pm    Post subject: Human Ears Evolved from Ancient Fish Gills

Human Ears Evolved from Ancient Fish Gills
By Bjorn Carey
LiveScience Staff Writer
posted: 19 January 2006
12:21 am ET

Your ability to hear relies on a structure that got its start as a gill opening in fish, a new study reveals.

Humans and other land animals have special bones in their ears that are crucial to hearing. Ancient fish used similar structures to breathe underwater.

Scientists had thought the evolutionary change occurred after animals had established themselves on land, but a new look at an old fossil suggests ear development was set into motion before any creatures crawled out of the water.

The transition

Researchers examined the ear bones of a close cousin of the first land animals, a 370-million-year-old fossil fish called Panderichthys. They compared these structures to those of another lobe-finned fish and to an early land animal and determined that Panderichthys displays a transitional form.

In the other fish, Eusthenopteron, a small bone called the hyomandibula developed a kink and obstructed the gill opening, called a spiracle.

However, in early land animals such as the tetrapod Acanthostega, this bone has receded, creating a larger cavity in what is now part of the middle ear in humans and other animals.

Missing link

The new examination of the Panderichthys fossil provides scientists with a critical "missing link" between fish gill openings and ears.

"In Panderichthys, it is much more like in tetrapods where there is no longer such a 'kink' and the spiracle has widened and opened up," study co-author Martin Brazeau of Uppsala University in Sweden told LiveScience. "[The hyomandibula] is quite a bit shorter, but still fairly rod-like like in Eusthenopteron. It's like a combination of fish and tetrapods."

However, it's unclear if early tetrapods used these structures to hear. Panderichthys most likely used their spiracles for ventilation of either water or air. Early tetrapods probably passed air through the opening. Scientists would need preserved soft tissue to say for sure.

"That's the question that we're starting to investigate, whether early tetrapods used it for some ventilation function as well," Brazeau said. Whether it was for the exhalation of water or air, it's not really clear. We can infer that it's quite expanded and improved from fish."

This research is detailed in the Jan. 19 issue of the journal Nature.
adedios

PostPosted: Fri Jan 27, 2006 6:26 am    Post subject: Why Screaming Doesn't Make You Deaf

Why Screaming Doesn't Make You Deaf
By Ker Than
LiveScience Staff Writer
posted: 26 January 2006
02:01 pm ET

As you scream for your favorite sports team, special brain cells kick in to protect your auditory system from the sound of your own voice, a new study suggests.

These cells dampen your auditory neurons' ability to detect incoming sounds. The moment you shut up, the inhibition signal stops and your hearing returns to normal, so you can then be deafened by the screams of the guy next to you.

Scientists call this signal a corollary discharge. In crickets, on which the study was done, it's sent from the motor neurons responsible for generating loud mating calls to sensory neurons involved in hearing. The signal is sent via middlemen called interneurons.

Biologists have long known that corollary discharge interneurons, or CDIs, must exist. Only in recent years, however, have they started finding them. The new cricket study is the first to pinpoint CDIs for the auditory system.

Listen to me

Animals generate sounds to communicate, to attract mates, and to ward off rivals. Some animals, like dolphins and bats, even hunt with sounds.

CDIs help resolve two problems that sound-generating animals have. They protect creatures from their own sounds, and they allow animals to distinguish between sounds that they've created and ones from outside sources.

"It's difficult to say whether crickets can distinguish between self-generated and external sounds, but a similar mechanism in humans might explain how we can recognize our own voice," study leader James Poulet from the University of Cambridge told LiveScience..

Scientists haven't yet identified CDIs in humans but imaging studies have shown that auditory areas in our brains are suppressed during speech.

More to it

In addition to CDIs, humans have a so-called "middle ear reflex" that also helps to protect our hearing from loud sounds. Two tiny muscles are attached to bones in the middle part of our ears. When we're exposed to sudden loud noises, these muscles contract and make our auditory systems less responsive to incoming sounds.

Unlike corollary discharges, the middle ear reflex dampens hearing only in response to external sounds. Also, because it is only a reflex, the response becomes less vigorous with repetition and long exposure.

CDIs are not unique to the auditory system. In monkeys, visual CDIs help keep the visual scene stable even as the eyes move around rapidly. Scientists suspect CDIs exist for other sensory systems as well, including touch.

This could help explain why we can't tickle ourselves.

"The corollary discharge is not present when someone else tickles us," Poulet explained. "Therefore the sensory response in the brain is much greater and the tickle appears much more ticklish."

Another recent study found that the brain can anticipate your effort to tickle yourself, and it discounts the sensation.
adedios

PostPosted: Mon Jan 30, 2006 6:55 am    Post subject: Is Your Earwax Wet or Dry?

Is Your Earwax Wet or Dry?
By Bjorn Carey
LiveScience Staff Writer
posted: 29 January 2006
05:18 pm ET

Do you have dry, flaky earwax or the gooey, stinky type? The answer is partly in your heritage.

A new study reveals that the gene responsible for the drier type originated in an ancient Northeastern Asian population.

Today, 80 to 95 percent of East Asians have dry earwax, whereas the wet variety is abundant in people of African and European ancestry (97 to 100 percent).

Populations in Southern Asia, the Pacific Islands, Central Asia and Asia Minor, as well as Native North Americans and Inuit of Asian ancestry, fall in the middle, with dry-wax frequencies ranging from 30 to 50 percent.

Researchers identified a gene that alters the shape of a channel that controls the flow of molecules that directly affect earwax type. They found that many East Asians have a mutation in this gene that prevents the secretion that makes earwax wet from entering the mix.

Scientists believe that the mutation reached high frequencies in Northeast Eurasia and, following a population increase, expanded over the rest of the continent. Today distribution of the gene is highest in North China and Korea.

Wet earwax is believed to have uses in insect trapping, self-cleaning, and prevention of dryness in the external auditory canal of the ear. It also produces an odor and causes sweating, which may play a role as a pheromone.

The usefulness of dry earwax, however, is not well understood. Researchers believe it may have originated to reduce odor and sweating, a possible adaptation to the cold climate in which the population is believed to have lived.

The research is detailed in the Jan. 29 online edition of the journal Nature Genetics.
adedios

PostPosted: Tue Jun 13, 2006 4:38 pm    Post subject: Study Reveals Who Hears Best

Study Reveals Who Hears Best
By LiveScience Staff

posted: 13 June 2006
02:45 pm ET

The nation's hearing hasn't changed all that much from 35 years ago, despite significant changes in society and technology.

A new study also revealed that non-Hispanic blacks have better hearing on average compared to non-Hispanic whites and Hispanic adults in the United States, and that women tend to have better hearing than men.

The findings, announced this week, were presented last week at the Acoustical Society of America's spring meeting.

Go ahead: Speak softly

Researchers at the National Institute for Occupational Safety and Health (NIOSH) in Cincinnati studied the hearing of more than 5,000 U.S. adults aged 20 to 69. The participants identified themselves as members of one of three major U.S. ethnic groups.

Non-Hispanic blacks had, on average, the lowest "hearing thresholds," that is, the softest sounds an individual can hear over a range of frequencies. Non-Hispanic whites had the highest, and Mexican Americans were in between. In all the groups, women had more sensitive hearing than men.

The researchers compared the new findings to a similar study conducted 35 years ago and found the median hearing levels of U.S. adults to be roughly the same. This might seem surprising, considering the greater number of noise sources today. Even hospitals are sometimes as noisy as a jackhammer, according to recent research.

What's going on?

The researchers speculate that one potential factor in the similarity over time is the widespread use of hearing protection today that was not available in the early 1970s. Another possibility is that fewer U.S. residents are working in noisy factory jobs.

However, the researchers note that the ubiquitous effects of portable music players like the iPod are not fully accounted for in the new study, since data from only the years 1999 to 2004 were analyzed.

Numerous factors can contribute to hearing loss, but it is estimated that at least one-third of all cases are due to overexposure to noise.
adedios

PostPosted: Mon Feb 05, 2007 6:36 pm    Post subject: Scientists identify molecular cause for one form of deafness

University of Illinois at Urbana-Champaign
5 February 2007

Scientists identify molecular cause for one form of deafness

CHAMPAIGN, Ill. — Scientists exploring the physics of hearing have found an underlying molecular cause for one form of deafness, and a conceptual connection between deafness and the organization of liquid crystals, which are used in flat-panel displays.

Within the cochlea of the inner ear, sound waves cause the basilar membrane to vibrate. These vibrations stimulate hair cells, which then trigger nerve impulses that are transmitted to the brain.

Researchers have now learned that mutations in a protein called espin can cause floppiness in tiny bundles of protein filaments within the hair cells, impairing the passage of vibrations and resulting in deafness.

Filamentous actin (F-actin) is a rod-like protein that provides structural framework in living cells. F-actin is organized into bundles by espin, a linker protein found in sensory cells, including cochlear hair cells. Genetic mutations in espin's F-actin binding sites are linked to deafness in mice and humans.

"We found the structure of the bundles changes dramatically when normal espin is replaced with espin mutants that cause deafness," said Gerard Wong, a professor of materials science and engineering, of physics, and of bioengineering at the University of Illinois at Urbana-Champaign.

"The interior structure of the bundles changes from a rigid, hexagonal array of uniformly twisted filaments, to a liquid crystalline arrangement of filaments," Wong said. "Because the new organization causes the bundles to be more than a thousand times floppier, they cannot respond to sound in the same way. The rigidity of these bundles is essential for hearing."

Wong and his co-authors – Illinois postdoctoral research associate Kirstin Purdy and Northwestern University professor of cell and molecular biology James R. Bartles – report their findings in a paper accepted for publication in the journal Physical Review Letters, and posted on its Web site.

High-resolution X-ray diffraction experiments, performed by Purdy at the Advanced Photon Source and at the Stanford Synchrotron Radiation Laboratory, allowed the researchers to solve the structure of various espin-actin bundles.

"As the ability of espin to cross-link F-actin is decreased by using genetically modified 'deafness' mutants with progressively more damaged actin binding sites, the structure changes from a well-ordered crystalline array of filaments to a nematic, liquid crystal-like state," said Wong, who also is a researcher at the Frederick Seitz Materials Research Laboratory on campus and at the university's Beckman Institute for Advanced Science and Technology.

In the liquid crystalline state, the bundles maintain their orientation order – that is, they point roughly along the same direction – but lose their positional order. These nematic liquid crystals are commonly used in watch displays and laptop displays.

Wong and his colleagues also found that a mixture of mutant espin and normal espin would prevent the structural transition from occurring. If gene expression could turn on the production of just a fraction of normal espin linkers, a kind of rescue attempt at restoring hearing could, in principle, be made.

"We have identified the underlying molecular cause for one form of deafness, and we have identified a mechanism to potentially 'rescue' this particular kind of pathology," Wong said. "Even so, this is really the first step. This work has relevance to not just human hearing, but also to artificial sensors."


###
The U.S. Department of Energy, National Institutes of Health and National Science Foundation funded the work.
adedios

PostPosted: Tue Feb 13, 2007 8:09 am    Post subject: Study looks at benefits of 2 cochlear implants in deaf children

University of Wisconsin-Madison
13 February 2007

Study looks at benefits of 2 cochlear implants in deaf children

MADISON -- Nature has outfitted us with a pair of ears for good reason: having two ears enhances hearing. University of Wisconsin-Madison scientists are now examining whether this is also true for the growing numbers of deaf children who've received not one, but two, cochlear implants to help them hear.

Led by Ruth Litovsky, an investigator in the UW-Madison Waisman Center, the team's research suggests that deaf children who have a cochlear implant in each ear more accurately locate sounds when they use both implants instead of one. Children with two implants also become more skilled at localizing sound over time.

The results were presented today (Feb. 13) at the Annual Midwinter Meeting of the Association for Research in Otolaryngology.

Information like this can be useful, says Litovsky, when doctors and parents are deciding whether a child should get one or two of the electronic devices, which allow deaf people to hear by bypassing the damaged inner ear, or cochlea, to stimulate the auditory nerve directly.

It's not a simple choice. A single implant and the required surgery can cost $50,000. The device also permanently damages the cochlea, which might prevent recipients from taking advantage of potentially superior treatments for deafness down the road.

Patients never received more than one implant until about ten years ago. Then, doctors began to fit people with two, hoping this would assist them in understanding speech, especially in "cocktail party" environments with lots of competing sounds. "But there are still many remaining questions about the actual extent of the benefits of having two cochlear implants," Litovsky says.

Only about three percent of the 100,000 people worldwide who currently wear implants have received two, she estimates.

Litovsky is an expert in binaural hearing, or hearing with two ears. "We try to understand how having two ears is helpful," she says. One main benefit: two ears make it easier to locate sounds. "If you close an ear, walk around and try to identify where sounds are coming from, it's very, very hard," she says.

To test whether a pair of cochlear implants aids this ability, Litovsky's team has, to date, studied 55 deaf children who received a second implant one to seven years after being fitted with their first.

When the research began, it appeared the group of 5 to 14 year-olds couldn't localize sounds at all, Litovsky says. The result prompted her to launch a longitudinal study designed not only to test their prowess at this task, but also how it changed over time.

In the "listening game" she has devised with her team, children face a semicircle of loudspeakers arranged at regular intervals, each with a picture attached. When speech or other kinds of sounds emit from a speaker, the children are scored on their ability to identify the correct one by pointing to its picture.

In addition to completing the task while wearing both implants, the children were asked to remove the microphone and other external parts of one, rendering them deaf again in that ear.

"That turns out to be an interesting experience, because they don't like to remove an implant," says Litovsky. "We have to barter for that, with M&Ms or something else that motivates them."

Although variability existed among the children, the study indicates that most did develop the ability to locate speech and other sounds more accurately when using two cochlear implants versus one. This capability also increased with experience. "We're now seeing that the ability to localize sounds takes time to emerge," says Litovsky. "What seems to get better is the integration of the information from the two ears in the brain."

Another crucial question is whether children should receive both implants simultaneously, at the same time, or sequentially, at different times, she says. The study's results have implications here, as well.

"The children we're looking at received their implants sequentially," says Litovsky, "and we think that their brains took a very long time to combine the inputs from the two ears." Yet, the fact they learned to do so points to the brain's adaptability, or "plasticity," she adds. "It reveals that the brain is still open to input from an ear that was deaf for a very long time."

Litovsky emphasizes that her goal is not to tell parents or doctors whether two implants are better for children, but to work with families who have made that choice and study the outcomes.

"I think so far our work has helped inform clinicians about these decisions," she says. "So I hope in the future we'll be able to continue to do that." Litovsky's research is funded by the National Institute on Deafness and Other Communication Disorders.
adedios

PostPosted: Thu Feb 15, 2007 7:49 am    Post subject: Low-pitch treatment alleviates ringing sound of tinnitus

Low-pitch treatment alleviates ringing sound of tinnitus

University of California, Irvine

UCI researchers find novel approach for hearing therapy

Irvine, Calif., February 14, 2007
For those who pumped up the volume one too many times, UC Irvine researchers may have found a treatment for the hearing damage loud music can cause.

Fan-Gang Zeng and colleagues have identified an effective way to treat the symptoms of tinnitus, a form of hearing damage typically marked by high-pitched ringing that torments more than 60 million Americans. A low-pitched sound applied through a simple MP3 player, the researchers discovered, suppressed the high-pitched ringing associated with the disorder and provided temporary relief.

Tinnitus is caused by injury, infection or the repeated bombast of loud sound, and can appear in one or both ears. It’s no coincidence that many rock musicians, and their fans, suffer from it. Although known for its high-pitched ringing, tinnitus is an internal noise that varies in its pitch and frequency. Some treatments exist, but none are consistently effective.

Zeng presented his study Feb. 13 at the Midwinter Meeting of the Association for Research in Otolaryngology in Denver.

“Tinnitus is one of the most common hearing disorders in the world, but very little is understood about why it occurs or how to treat it,” said Zeng, a professor of otolaryngology, biomedical engineering, cognitive sciences, and anatomy and neurobiology. “We are very pleased and surprised by the success of this therapy, and hopefully with further testing it will provide needed relief to the millions who suffer from tinnitus.”

Zeng, director of the speech and hearing lab at UCI, and his team made their discovery while addressing the severe tinnitus of a research subject. The patient uses a cochlear implant to address a constant mid-range tone in his injured right ear, accented by the periodic piercing of a high-pitched ringing sound between 4,000 and 8,000 hertz in frequency.

At first, Zeng thought of treating the tinnitus with a high-pitched sound, a method called masking that is sometimes used in tinnitus therapy attempts. But he ruled out that option because of the severity of the patient’s tinnitus, so an opposite approach was explored, which provided unexpectedly effective results.

After making many adjustments, the researchers created a low-pitched, pulsing sound – described as a “calming, pleasant tone” of 40 to 100 hertz – which, when applied to the patient through a regular MP3 player, suppressed the high-pitched ringing after about 90 seconds and provided what the patient described as a high level of continued relief.

Zeng’s patient programs the low-pitched sound through his cochlear implant, and Zeng is currently studying how to apply this treatment for people who do not use any hearing-aid devices. Since a cochlear implant replaces the damaged mechanism in the ear that stimulates the auditory nerve, Zeng believes that a properly pitched acoustic sound will have the same effect on tinnitus for someone who does not use a hearing device. Dr. Hamid Djalilian, a UCI physician who treats hearing disorders, points out that a custom sound can be created for the patients, who then can download it into their personal MP3 player and use it when they need relief.

“The treatment, though, does not represent a cure,” Zeng said. “This low-pitch therapeutic approach is only effective while being applied to the ear, after which the ringing can return. But it underscores the need to customize stimulation for tinnitus suppression and suggests that balanced stimulation, rather than masking, is the brain mechanism underlying this surprising finding.”

Qing Tang, Jeff Carroll, Andrew Dimitrijevic and Dr. Arnold Starr of UCI; Leonid Litvak of Advanced Bionics Corp.; and Jannine Larkin and Dr. Nikolas H. Blevins at Stanford University participated in the study, which was supported by the National Institutes of Health.


adedios

PostPosted: Wed Mar 28, 2007 10:23 am    Post subject: Nutrients might prevent hearing loss

University of Michigan Health System
28 March 2007

Nutrients might prevent hearing loss, new animal study suggests
Antioxidant-mineral combination protects against damage for days after noise exposure, U-M study in guinea pigs shows



ANN ARBOR, Mich. -- Soldiers exposed to the deafening din of battle have little defense against hearing loss, and are often reluctant to wear protective gear like ear plugs that could make them less able to react to danger. But what if a nutritious daily "candy bar" could prevent much of that potential damage to their hearing?

In a new study in animals, University of Michigan researchers report that a combination of high doses of vitamins A, C, and E and magnesium, taken one hour before noise exposure and continued as a once-daily treatment for five days, was very effective at preventing permanent noise-induced hearing loss. The animals had prolonged exposure to sounds as loud as a jet engine at take-off at close range.

Clinical trials of a hearing-protection tablet or snack bar for people could begin soon, and if successful such a product could be available in as little as two years, says Josef M. Miller, Ph.D., the senior author of the study, which is published online in the journal Free Radical Biology and Medicine. Miller is a professor in the Department of Otolaryngology at the U-M Medical School, and former director of the U-M Health System’s Kresge Hearing Research Institute, where the study was performed.

Convinced by emerging evidence that nutrients can effectively block one major factor in hearing loss after noise trauma — inner ear damage caused by excessive free radical activity — Miller has launched a U-M startup company OtoMedicine that is developing the vitamin-and-magnesium formulation.

"These agents have been used for many years, but not for hearing loss. We know they’re safe, so that opens the door to push ahead with clinical trials with confidence we’re not going to do any harm," says Miller.

The formulation the researchers used built on earlier animal studies showing that single antioxidant vitamins were somewhat effective in preventing hearing loss, and on studies of Israeli soldiers given magnesium many days prior to exposure, who gained relatively small protective effects.

In the U-M study, noise-induced hearing loss was measured in four groups of guinea pigs treated with the antioxidant vitamins A, C and E, magnesium alone, an ACE-magnesium combination, or a placebo. The treatments began one hour before a five-hour exposure to 120 decibel (dB) sound pressure level noise, and continued once daily for five days.

The group given the combined treatments of vitamins A, C and E and magnesium showed significantly less noise-induced hearing loss than all of the other groups.

"Vitamins A, C and E and magnesium worked in synergy to prevent cell damage," explains Colleen G. Le Prell, Ph.D., the study’s lead author and a research investigator at the U-M Kresge Hearing Research Institute. According to the researchers, pre-treatment presumably reduced reactive elements called free radicals that form during and after noise exposure and noise-induced constriction of blood flow to the inner ear, and may have also reduced neural excitotoxicity, or the damage to auditory neurons that can occur due to over-stimulation. The post-noise nutrient doses apparently "scavenged" free radicals that continue to form long-after after this noise exposure ends.

In the past 10 years, scientists have learned that noise-induced hearing loss occurs in part because cell mitochondria in the ear churn out damaging free radicals in response to loud sounds. "Free radical formation bursts initially, then peaks again during the days after exposure," explains Le Prell.

The antioxidant vitamins and magnesium used in the study are widely used dietary supplements, not new drugs, and therefore they don’t require the extensive safety tests required for new drug entities prior to use in clinical trials. The doses to be used in proposed human trials will be within the ranges considered safe according to the Institute of Medicine and federal nutrition guidelines.

"Ultimately, we envision soldiers would have a nutritional bar with meals and it would give them adequate daily protection," says Miller. Similar bars with other formulations are already given to soldiers to help them withstand hot weather and other war zone conditions.

"Other people would likely benefit by consuming a pill or nutritional bar before going to work in noisy environments, or attending noisy events like NASCAR races or rock concerts, or even using an iPod or other music player," says Le Prell. "Based on an earlier study with other antioxidant agents, we think this micronutrient combination will work even post-noise."

That study suggested a "morning after" treatment that might minimize hearing damage for soldiers, musicians, pilots, construction workers and others — even if they don’t take it until after they experience dangerous noise levels. It was highlighted by the National Institutes of Health on the NIDCD website at www.nidcd.nih.gov/research/sto.....01_06.asp.

If effective, such pre- and post-noise treatments could have far-reaching effects. About 30 million Americans regularly experience hazardous noise levels at work and at home, according to the National Institute on Deafness and Communications Disorders. Hunting, snowmobiling, using machines such as leaf blowers, lawnmowers and power tools, and attending or playing in loud music concerts commonly expose people to dangerous noise levels. Noise levels above 85 decibels damage hearing. About 28 million Americans have some degree of hearing loss. For about a third of them, noise accounts at least in part for their loss.

The U-M study also adds strength to research efforts under way in many research centers to learn how these nutrients might be used to treat many illnesses. "Similar combinations have been very effective in preventing macular degeneration, and many of these agents have been used with Alzheimer’s and Parkinson’s diseases, stroke-like ischemia, and other conditions that involve neural degeneration," Le Prell says. "You’re always hoping as a basic scientist to find a commonality like that, across other disease processes," says Miller.

###
U-M has applied for patents covering the use of this unique combination of vitamins and minerals in the prevention of hearing loss, as demonstrated in this study; if and when revenues are generated as a result of these commercialization efforts, the University and the inventors of the technology stand to benefit financially. An additional author of the study is Larry F. Hughes, Dept. of Surgery/Otolaryngology, Southern Illinois University Medical School. The study was supported with funds from the National Institutes of Health, General Motors Corporation/United Automotive Workers Union, and the Ruth and Lynn Townsend Professorship in Communication Disorders. Reference: Free Radical Biology & Medicine, 42 (2007) 1454–1463
adedios

PostPosted: Wed May 16, 2007 9:06 am    Post subject: Some children are born with 'temporary deafness' and do not require cochlear implant

University of Haifa
16 May 2007

Some children are born with 'temporary deafness' and do not require cochlear implant

Clinical research conducted in the Department of Communication Disorders at the University of Haifa revealed that some children who are born deaf "recover" from their deafness and do not require surgical intervention. To date, most babies who are born deaf are referred for a cochlear implant. "Many parents will say to me: 'My child hears; if I call him, he responds'. Nobody listens to them because diagnostic medical equipment did not register any hearing. It seems that these parents are smarter than our equipment," said Prof. Joseph Attias, a neurophysiologist and audiologist in the Department of Communication Disorders at the University of Haifa, who made the discovery.

There are two causes of congenital deafness among children. One is the lack of hair cells, receptors in the inner ear that convert sounds into pulse signals that activate the auditory nerve. The second cause is a malfunction of the nerves. A child may be born with what appears to be a normal inner ear, but the hair cells do not "communicate" with the auditory nerves and the child cannot hear. To date, doctors have recommended the same treatment for all children born deaf. Once a child has been diagnosed as deaf, doctors recommend a cochlear implant, a surgically implanted electronic device that bypasses the hair cells and directly stimulates the auditory nerve. Prof. Attias stresses that a cochlear implant is an excellent treatment for children with congenital deafness whose hearing does not improve over time. However, it appears that some children are born with "temporary deafness" – a condition previously unidentified.

This discovery, like other revolutionary discoveries, was made by chance. A child who was born with malfunctioning hair cells and was scheduled for a cochlear implant was referred to Prof. Attias for a pre-surgical evaluation. The evaluation found that the child's brain and auditory nerves exhibited beginning responses to sound stimuli. The surgery was postponed. Follow-up visits showed increasing function of the hair cells and eventually the child reached a state of normal hearing. Prof. Attias, who is part of a cochlear implant team at Schneider Children's Medical Center, looked in the department archives and found other, similar cases. "Because these children go through a series of tests and evaluations by different doctors, a process that often takes months, there are cases of children who were initially referred for the procedure who didn't have it done. Sometimes parents decide not to do the surgery; sometimes they do it elsewhere. I called parents and found another seven cases of children who were diagnosed as deaf, did not have the procedure done, and began to hear," said Prof. Attias.

Prof. Attias then found another five children who had been referred to him for pre-operative testing who had begun to hear. At the end of his clinical research, he identified a "window of opportunity" of 17 months during which deaf children may begin to hear. "A child whose deafness is caused by a malfunctioning connection between hair cells and the auditory nerve should not have a cochlear implant in the first 17 months of life. Research results show the possibility that at least some of these children undergo the procedure for nothing," explained Prof. Attias.

He added that some of the children only develop partial hearing, which can be augmented with external hearing aids. Prof. Attias is now researching "temporary deafness" among young children, looking to find a way to identify those who will recover and those who will not.
adedios

PostPosted: Fri Jun 08, 2007 9:46 am    Post subject: A wider range of sounds for the deaf

University of Michigan Health System
8 June 2007

A wider range of sounds for the deaf

Tiny array placed in auditory nerve may one day offer superior alternative to cochlear implants, animal study suggests
ANN ARBOR, Mich. -- More than three decades ago, scientists pursued the then-radical idea of implanting tiny electronic hearing devices in the inner ear to help profoundly deaf people. An even bolder alternative that promised superior results — implanting a device directly in the auditory nerve — was set aside as too difficult, given the technology of the day.

Now, however, scientists have shown in animals that it’s possible to implant a tiny, ultra-thin electrode array in the auditory nerve that can successfully transmit a wide range of sounds to the brain. The studies took place at the University of Michigan Kresge Hearing Research Institute.

If the idea pans out in further animal and human studies, profoundly and severely deaf people would have another option that could allow them to hear low-pitched sounds common in speech, converse in a noisy room, identify high and low voices, and appreciate music — areas where cochlear implants, though a boon, have significant limitations.

“In nearly every measure, these work better than cochlear implants,” says U-M researcher John C. Middlebrooks. He led a study requested by the National Institutes of Health to re-evaluate the potential of auditory nerve implants. Middlebrooks is a U-M Medical School professor of otolaryngology and biomedical engineering. He collaborated with Russell L. Snyder of the University of California, San Francisco and Utah State University. The two co-authored an article on the results in the June issue of Journal of the Association for Research in Otolaryngology.

The possible auditory nerve implants likely would be suitable for the same people who are candidates today for cochlear implants: the profoundly deaf, who can’t hear at all, and the severely deaf, whose hearing ability is greatly reduced. Also, the animal studies suggest that implantation of the devices has little impact on normal hearing, offering the possibility of restoring sensitivity to high frequencies while preserving remaining low-frequency hearing.

Middlebrooks says it’s possible that the low power requirements of the auditory nerve implants might lead to development of totally implantable devices. That would be an improvement over the external speech processor and battery pack cochlear implant users need to wear and often have to recharge daily.

If the initial success in animals is borne out in further tests, a human auditory nerve implant is probably five to 10 years away, he says.

The researchers used cats bred for laboratory use in their experiments. They measured brain processing of auditory signals under normal conditions, then compared deaf animals’ brain responses to sounds delivered first through cochlear implants and then through the direct auditory nerve implants. These measurements employed neuron-monitoring technology developed earlier at U-M. The scientists found that their sensitive 16-electrode microarray offered several advantages over cochlear implants.

Approved by the Food and Drug Administration in 1984, cochlear implants have greatly benefited profoundly and severely deaf people. More than 100,000 implant procedures have been performed worldwide in the last two decades, including more than 1,000 at U-M.

Like the new device, cochlear implants are small electrode arrays that receive signals from an external sound processor... They are designed to stimulate the auditory nerve and other cells to produce a sensation of hearing. But their location, separated from auditory nerve fibers by fluid and a bony wall, is a limitation.

“Access to specific nerve fibers is blunted,” Middlebrooks says. “The effect is rather like talking to someone through a closed door.”

With the new intraneural stimulation procedure, that effect is eliminated, and there are other technical advantages, too. “The intimate contact of the array with the nerve fibers achieves more precise activation of fibers signaling specific frequencies, reduced electrical current requirements and dramatically reduced interference among electrodes when they are stimulated simultaneously,” Middlebrooks says.

Middlebrooks has talked with U-M surgeons in otolaryngology about surgical approaches in humans, and is working with U-M biomedical engineers on an intraneural device that can remain in place and be tested further in animals over the next two years. The devices need to be studied over time to see if they are safely tolerated by the auditory nerve.

“If our work continues to go very well, we might begin human trials in no less than five years,” Middlebrooks says.

Such a device might be used first in people whose cochleas are filled with bone and therefore aren’t eligible for a cochlear implant, or people whose cochlear implants are no longer effective.

The University of Michigan has submitted a patent application for the procedure. Through its Office of Technology Transfer, it is seeking a commercialization partner to assist in bringing the technology to market.

###
Funding for the study came from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health.

Web resources: www.nidcd.nih.gov/health

Journal citation: “Auditory Prosthesis with a Penetrating Nerve Array,” Journal of the Association for Research in Otolaryngology, Volume 8, Number 2 / June, 2007; 10.1007/s10162-007-0070-2 (DOI)

PostPosted: Thu Jun 14, 2007 9:42 am    Post subject: UVa researchers restore genes in human inner ear cells

University of Virginia Health System
14 June 2007

UVa researchers restore genes in human inner ear cells

CHARLOTTESVILLE, Va. -- Researchers at the University of Virginia Health System have discovered a way to transfer genes, which they hope will restore hearing, into diseased tissue of the human inner ear. This important step brings scientists closer to curing genetic or acquired hearing loss. Their discovery will appear Thursday, June 14, in the online issue of the scientific journal Gene Therapy.

Dr. Jeffrey Holt, associate professor of neuroscience and otolaryngology at UVa, and his research team, including Dr. Bradley Kesser, an assistant professor of otolaryngology, targeted a gene known as KCNQ4, which causes genetic hearing loss in humans when mutated. They engineered a correct form of the gene and created a gene therapy delivery system that successfully transferred the KCNQ4 gene into human hair cells harvested from the inner ears of patients with hearing loss.

“Our results show that gene therapy reagents are effective in human inner ear tissue. Taken together with the results from another group of scientists, who showed that similar gene therapy compounds can produce new hair cells and restore hearing function in guinea pigs, this suggests that the future of gene therapy in the human inner ear is sound,” Holt said.

Hair cells, which line the cochlea, have hair-like projections extending from their surface. In people with normal hearing, hair cells convert sound into electrical signals, which are ultimately transmitted to the brain. People with hearing loss suffer from too few, damaged or missing hair cells. Holt’s past research uncovered the speed at which hair cells develop in mouse embryos, a finding necessary to help researchers learn how to regenerate hair cells. With this latest development, Holt and his team could one day restore the hearing process in damaged hair cells.

“This is a critically important step forward. We hope this breakthrough will propel the field of hearing and deafness research toward our collective goal of curing genetic and acquired deafness,” Holt said.

PostPosted: Sun Jun 17, 2007 10:55 am    Post subject: Gene responsible for common hearing loss identified for first time

European Society of Human Genetics
16 June 2007

Gene responsible for common hearing loss identified for first time

A gene responsible for the single most common cause of hearing loss among white adults, otosclerosis, has been identified for the first time, a scientist told the annual conference of the European Society of Human Genetics in Nice, France. Ms Melissa Thys, from the Department of Medical Genetics, University of Antwerp, Belgium, said that this finding may be a step towards new treatments for otosclerosis, which affects approximately 1 in 250 people.

Otosclerosis is a multifactorial disease, caused by an interaction of genetic and environmental factors. The outcome is progressive hearing loss as the growing bone in the middle ear interrupts the sound waves passing to the inner ear. While most of the causative factors remain unknown, one of the genetic components has now been identified, Ms Thys told the conference. “The gene in which the variant is located points to a pathway that contributes to the disease. This may be a lead for better forms of treatment in the future; currently the best option is an operation. However, there is often an additional component of hearing loss which can’t be restored by surgery. As the gene involved is a growth factor, and the disease manifests itself by the abnormal growth of bone in the middle ear, it may have a large potential for therapy”, she said. Improved understanding may also lead to prevention strategies.

Ms Thys and her team decided to study a gene called TGFB1, which they already knew had non-genetic indications of involvement in otosclerosis: it plays a role during embryonic development of the ear and is expressed in otosclerotic bone. They used SNP (single nucleotide polymorphism) analysis, which examines DNA sequence variations at a single nucleotide (A, T, C or G), to study a large patient and control population from Belgium and the Netherlands. They found a significant association for an amino acid-changing SNP in TGFB1, one that remained significant after correcting for multiple testing. Analysis of a large French group showed the same association.

“Combining the data from both groups with a common odds ratio gave a very significant result, from which we were able to conclude that we were the first to identify a gene that influences the susceptibility for otosclerosis”, said Ms Thys. “And, as further evidence, we were also able to show that a more active variant of this gene is protective against the disease.”

PostPosted: Tue Jul 17, 2007 7:38 am    Post subject: Ability to listen to 2 things at once is largely inherited, says twin study

NIH/National Institute on Deafness and Other Communication Disorders
17 July 2007

Ability to listen to 2 things at once is largely inherited, says twin study

Your ability to listen to a phone message in one ear while a friend is talking into your other ear—and comprehend what both are saying—is an important communication skill that’s heavily influenced by your genes, say researchers of the National Institute on Deafness and Other Communication Disorders (NIDCD), one of the National Institutes of Health. The finding, published in the August 2007 issue of Human Genetics, may help researchers better understand a broad and complex group of disorders—called auditory processing disorders (APDs)—in which individuals with otherwise normal hearing ability have trouble making sense of the sounds around them.

“Our auditory system doesn’t end with our ears,” says James F. Battey, Jr., M.D., Ph.D., director of the NIDCD. “It also includes the part of our brain that helps us interpret the sounds we hear. This is the first study to show that people vary widely in their ability to process what they hear, and these differences are due largely to heredity.”

The term “auditory processing” refers to functions performed primarily by the brain that help a listener interpret sounds. Among other things, auditory processing enables us to tell the direction a sound is coming from, the timing and sequence of a sound, and whether a sound is a voice we need to listen to or background noise we should ignore. Most people don’t even realize they possess these skills, much less how adept they are at them. Auditory processing skills play a role in a child’s language acquisition and learning abilities, although the extent of that relationship is not well understood.

To determine if auditory processing skills are hereditary, NIDCD researchers studied identical and fraternal twins who attended a national twins festival in Twinsburg, OH, during the years 2002 through 2005. A total of 194 same-sex pairs of twins participated in the study (138 identical pairs and 56 fraternal pairs), representing ages 12 through 50. All twins received a DNA test to confirm whether they were identical or fraternal and a hearing test to make sure they had normal hearing.

If a trait is purely genetic, identical twins, who share the same DNA, will be alike nearly 100 percent of the time, while fraternal twins, who share roughly half of their DNA, will be less similar. Conversely, if a trait is primarily due to a person’s environment, both identical and fraternal twins should have roughly the same degree of similarity, since most twins grow up in the same household.
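
The article does not spell out how the heritability estimate was computed, but the twin-design logic above is commonly summarized by Falconer's formula, h2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the trait correlations among identical and fraternal twin pairs. The following is a minimal Python sketch of that formula; the correlations used are hypothetical placeholders, not the study's data.

# Minimal sketch of the classical twin-design logic described above
# (Falconer's formula). Assumption: heritability ~ 2 * (r_mz - r_dz).
# The correlations below are hypothetical placeholders, not the NIDCD data.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate the share of trait variation due to genes from twin correlations."""
    return 2.0 * (r_mz - r_dz)

if __name__ == "__main__":
    r_identical = 0.75  # hypothetical correlation among identical twin pairs
    r_fraternal = 0.40  # hypothetical correlation among fraternal twin pairs
    h2 = falconer_heritability(r_identical, r_fraternal)
    print(f"Estimated heritability: {h2:.0%}")  # prints "Estimated heritability: 70%"

The 73 percent figure reported below was presumably obtained with a more formal variance-components model, but the intuition is the same: the larger the gap between identical-twin and fraternal-twin similarity, the larger the genetic contribution.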

The volunteers took five tests that are frequently used to identify auditory processing difficulties in children and adults. In three of the tests, volunteers listened as two different one-syllable words or nonsense syllables (short word fragments such as ba, da, and ka) were played into their right and left ears simultaneously, and then tried to name both words or syllables. In two other tests, volunteers listened to digitally altered one-syllable words played into the right ear and tried to identify the word. One test artificially filtered out high-pitched sounds, which tended to obscure the consonants, while the other sped up the word.
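
For a rough sense of how a "filtered words" stimulus of this kind can be produced, here is a minimal Python sketch that low-pass filters a mono audio signal. The cutoff frequency, filter order, and the synthetic stand-in "word" are illustrative assumptions, not the parameters used in the NIDCD tests.

import numpy as np
from scipy.signal import butter, filtfilt

# Rough illustration of a low-pass "filtered word" stimulus: frequencies above
# the cutoff, which carry much of the consonant information, are removed.
# Cutoff, filter order, and the synthetic signal are illustrative assumptions.

def low_pass(signal: np.ndarray, sample_rate: int,
             cutoff_hz: float = 1000.0, order: int = 4) -> np.ndarray:
    """Attenuate frequency content above cutoff_hz in a mono audio signal."""
    b, a = butter(order, cutoff_hz, btype="low", fs=sample_rate)
    return filtfilt(b, a, signal)

# Stand-in "word": a mix of a low (vowel-like) and a high (consonant-like) tone.
fs = 16_000
t = np.linspace(0.0, 0.5, int(fs * 0.5), endpoint=False)
word = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
filtered_word = low_pass(word, fs)  # the 3 kHz component is strongly attenuated

A real test would of course filter recorded speech rather than synthetic tones; the sketch only shows the kind of signal processing involved.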

In all but the filtered-words test, researchers found a significantly higher correlation among identical twins than fraternal twins, indicating that differences in performance for those activities had a strong genetic component. Participants showed the widest range of abilities on those tests in which they were asked to identify competing words or nonsense syllables entering each ear—called dichotic listening ability. The tests in which different one-syllable words were played simultaneously into each ear showed the widest degree of variation as well as the highest correlation among twins, especially identical twins. As much as 73 percent of the variation in dichotic listening ability was due to genetic differences, a magnitude that is comparable to well-known inherited traits such as type 1 diabetes and height. Conversely, the ability to understand the filtered words showed high correlation among all twins, indicating that variation in that skill is primarily due to differences in environment.

Scientists believe that problems with dichotic listening ability are often due to a lesion or disconnect between the brain's right and left hemispheres. When we listen to someone talking, speech entering the right ear travels in large part to the left side of the brain, where language is processed. Speech entering the left ear travels first to the right side of the brain before crossing to the brain’s language center on the left side by way of the corpus callosum, a pathway connecting the brain's right and left hemispheres.

Today’s finding that normal twins show such wide variation in their dichotic listening abilities, and that the differences are mostly due to genetic variation, adds a new perspective to our understanding of auditory processing disorders. These disorders may affect as many as seven percent of school-aged children in the United States and often appear alongside language and learning disorders, including dyslexia. APDs also affect older adults and stroke victims and can limit the success of hearing aids in treating hearing loss. The researchers suggest that scientists may now be able to fine-tune their understanding of what an APD is and the role these disorders play in the development of language and learning disorders.

###

PostPosted: Sun Jul 29, 2007 3:28 pm    Post subject: St. Jude study solves mystery of mammalian ears

St. Jude Children's Research Hospital
27 July 2007

St. Jude study solves mystery of mammalian ears

Protein motor in cochlear hair cells dominates the process of sound amplification in the mammalian ear, while movement of the cilia atop those cells dominates the response in non-mammals
A 30-year scientific debate over how specialized cells in the inner ear amplify sound in mammals appears to have been settled in favor of vibrating cell bodies rather than the hair-like cilia that sit atop them, according to investigators at St. Jude Children’s Research Hospital.

The finding could explain why dogs, cats, humans and other mammals have such sensitive hearing and the ability to discriminate among frequencies. The work also highlights the importance of basic hearing research in studies into the causes of deafness. A report on this work appears in the advance online issue of the Proceedings of the National Academy of Sciences.

“Our discovery helps explain the mechanics of hearing and what might be going wrong in some forms of deafness,” said Jian Zuo, Ph.D., the paper’s senior author and associate member of the St. Jude Department of Developmental Neurobiology. “There are a variety of causes for hearing loss, including side effects of chemotherapy for cancer. One strength of St. Jude is that researchers have the ability to ask some very basic questions about how the body works, and then use those answers to solve medical problems in the future.”

The long-standing argument centers on outer hair cells, which are rod-shaped cells that respond to sound waves. Located in the fluid-filled part of the inner ear called the cochlea, these outer hair cells sport tufts of hair-like cilia that project into the fluid. The presence of outer hair cells makes mammalian hearing more than 100 times better than it would be if the cells were absent.

As sound waves race into the inner ear at hundreds of miles per hour, their energy—although dissipated by the cochlear fluid—generates waves in the fluid, somewhat like the tiny waves made by a pebble thrown into a pond. This energy causes the hair cell cilia in both mammals and non-mammals to swing back and forth quickly in a steady rhythm.

In mammals, the rod-shaped body of the outer hair cell contracts and then vibrates in response to the sound waves, amplifying the sound. In a previous study, Zuo and his colleagues showed that a protein called prestin is the motor in mammalian outer hair cells that triggers this contraction. And that is where the debate begins.

While both mammals and non-mammals have cilia on their outer hair cells, only mammalian outer hair cells have prestin, which drives this cellular contraction, or somatic motility. The contraction pulls the tufts of cilia downward, which maximizes the force of their vibration. In mammals, both the cilia and the cell itself vibrate. Thus far the question has been whether the cilia are the main engine of sound amplification in both mammals and non-mammals.

One group of scientists believes that somatic motility in mammalian outer hair cells is simply a way to change the height of the cilia in the fluid to maximize the force with which the cilia oscillate. That, in turn, would amplify the sound. An opposing group of scientists maintains that although the vibration of the outer hair cell body itself—somatic motility—does maximize the vibration of the cilia, the cell body works independently of its cilia. That is, vibration of the mammalian cell dominates the work of amplifying sound in mammals.

“If somatic motility is the dominant force for amplifying sound in mammals, this would mean that prestin is the reason mammals amplify sound so efficiently,” Zuo said.

In the current study, Zuo and his team conducted a complex series of experiments showing that, in mammals, the role of somatic motility driven by prestin is not simply to modify the response of the outer hair cells’ cilia to incoming sound waves in the cochlear fluid. Instead, somatic motility itself appears to dominate the amplification process in the mammalian cochlea, while the cilia dominate amplification in non-mammals.

Zuo’s team took advantage of a previously discovered mutated form of prestin that does not make the outer hair cells contract in response to incoming sound waves as normal prestin does. Instead, the mutated form of prestin makes the cell extend itself when it vibrates.

The St. Jude researchers reasoned that if altering the position of the cilia in the fluid changes the ability of the cilia to amplify sound, then hearing should be affected when the mutant prestin made the cell extend itself. Therefore, the team developed a line of genetically modified mice that carried only mutant prestin in their outer hair cells. The researchers then tested the animals’ responses to sound.

Results of the studies showed no alteration in hearing, which suggested that it did not matter whether the outer hair cells contracted or extended themselves, that is, whether they raised or lowered the cilia. There was no effect on amplification. The researchers concluded that somatic motility was not simply a way to make cilia do their job better; rather, there is no connection between the hair cell contractions and how the cilia do their job. Instead, somatic motility, generated by prestin, is the key to the superior hearing of mammals.


###
Other authors of this study include Jiangang Gao, Xudong Wu and Manish Patel (St. Jude); Xiang Wang, Shuping Jia and David He (Creighton University, Omaha, Neb.); Sal Aguinaga, Kristin Huynh, Keiji Matsuda, Jing Zheng, MaryAnn Cheatham and Peter Dallos (Northwestern University, Evanston, Ill.).

This work was supported in part by ALSAC, The Hugh Knowles Center and the National Institutes of Health.

PostPosted: Wed Sep 05, 2007 1:23 pm    Post subject: Scripps Research scientists reveal pivotal hearing structure

Scripps Research Institute
5 September 2007

Scripps Research scientists reveal pivotal hearing structure

In a study published in the September 6, 2007, issue of the journal Nature, researchers showed that two key proteins join together at the precise location where energy of motion is turned into electrical impulses. These proteins, cadherin 23 and protocadherin 15, are part of a complex of proteins called “tip links” that are on hair cells in the inner ear. The tip link is believed to have a central function in the conversion of physical cues into electrochemical signals.

“Mutations in [the genes] cadherin 23 and protocadherin 15 can cause deafness as well as Usher syndrome, the leading cause of deaf-blindness in humans,” says Professor Ulrich Mueller, of the Scripps Research Department of Cell Biology and Institute for Childhood and Neglected Diseases. “Age-related hearing loss in humans may also be related to problems in the tip links.”

“This team has helped solve one of the lingering mysteries of the field,” says James F. Battey, Jr., director of the National Institute on Deafness and Other Communication Disorders (NIDCD), one of the National Institutes of Health (NIH). “The better we understand the pivotal point at which a person is able to discern sound, the closer we are to developing more precise therapies for treating people with hearing loss, a condition that affects roughly 32.5 million people in the United States alone.”

The Physiology of Hearing and Deafness

Childhood and age-related hearing impairment is a major issue in our society. According to the NIDCD, one in three people older than 60 and about half of all people over 75 suffer some form of hearing loss. And about four out of every 100,000 babies born in the United States have Usher syndrome, the major cause of deaf-blindness.

Hearing is a classic example of a phenomenon called mechanotransduction, a process that is important not only for hearing but also for a number of other bodily functions, such as the perception of touch. It is a complicated process whereby spatial and physical cues are transduced into electrical signals that run along nerve fibers to areas in the brain where they are interpreted.

“Hearing is the least well understood of the senses,” notes Mueller.

We do know that sound starts as waves of mechanical vibrations that travel through the air from their source to a person's ear through the compression of air molecules. When these vibrational waves hit a person's outer ear, they go down the ear canal into the middle ear and strike the eardrum. The vibrating eardrum moves a set of delicate bones that communicate the vibrations to a fluid-filled spiral structure in the inner ear known as the cochlea. When sound causes these bones to move, they compress a membrane at one entrance of the cochlea, and this causes the fluid inside to move accordingly.

Inside the cochlea are specialized “hair” cells that have symmetric arrays of stereocilia extending out from their surface. The movement of the fluid inside the cochlea causes the stereocilia to move. This physical deflection causes ion channels to open, producing an electrical change. The opening of these channels is monitored by sensory neurons surrounding the hair cells, and these neurons then communicate the electrical signals to neurons in the auditory association cortex of the brain.

In Usher syndrome and some other “sensory neuronal” diseases that cause deafness, the hair cells in the cochlea are unable to maintain the symmetric arrays of stereocilia.

A few decades ago, a molecular complex called the tip link was discovered in the stereocilia. These tip links connect the tips of stereocilia and are also thought to be important for the transmission of physical force to mechanically gated ion channels. For years, in part because stereocilia are extremely small, scarce, and difficult to handle, the molecules that made up the tip link remained elusive.

But a few years ago, Mueller and his colleagues identified one of the key proteins that formed the tip link: the protein cadherin 23. In their March 26, 2004, Nature article, Mueller and colleagues showed that the protein cadherin 23 was expressed in the right place in the hair cell to be part of the tip link, that it had the correct biochemistry, and that it seemed to be responsible for opening the ion channels. They also showed that the cadherin 23 protein formed a complex with another protein called myosin 1c, which helped to close the channel once open.

“The current study provides a higher degree of resolution than the 2004 study, thanks to a collaboration with NIH Researcher Bechara Kachar and Scripps Research Professor Ron Milligan and his advanced imaging facilities,” says Mueller. “Now, we put to rest any doubts about the details of our findings.”

Three Lines of Evidence

The current study used three lines of evidence to demonstrate that cadherin 23 and protocadherin 15 unite and adhere to one another to form the tip link.

The researchers first created antibodies that would bind to and label short segments on the cadherin 23 and protocadherin 15 proteins in the inner ears of rats and guinea pigs. Using immuno-fluorescence and electron microscopy studies, they showed that cadherin 23 was located on the side of the taller stereocilium and protocadherin 15 was present on the tip of the shorter one, with their loose ends overlapping in between. The researchers were able to identify both proteins by removing an obstacle to the antibody-binding process: calcium. Under normal conditions, cadherin 23 and protocadherin 15 are studded with calcium ions, which prevent antibodies from binding to the targeted sites. When calcium was removed through the addition of a chemical known as BAPTA, both labels became visible.

Next, the researchers built a structure resembling a tip link by expressing the cadherin 23 and protocadherin 15 proteins in the laboratory and watching how they interacted. When conditions were right, the two proteins wound themselves tightly together from one end to the other in a configuration that mirrored a naturally occurring tip link. As with normal tip links, the structure thrived in calcium concentrations that paralleled those found in fluid of the inner ear, while a drastic reduction in calcium disrupted the structure.

Lastly, the scientists found that one mutation of protocadherin 15 that causes one form of deafness inhibited the interaction of the two proteins, leading them to conclude that the mutation reduces the adhesive properties of the two proteins and prevents the formation of the tip link. In a second mutation of protocadherin 15, the tip link was not destroyed; the scientists suggested that the deafness is not likely caused by the breakup of the tip link but by interference with its mechanical properties.

Knowing precisely the composition and configuration of the tip link, scientists can now explore how these proteins interact with other components to form the rest of the transduction machinery. In addition, scientists can study how new treatments might be developed to address the breaking up of tip links through environmental factors, such as loud noise.

In addition to Mueller, other authors of the study, “Cadherin 23 and protocadherin 15 interact to form tip-link filaments in sensory hair cells,” were: Piotr Kazmierczak, Elizabeth M. Wilson-Kubalek, and Ronald A. Milligan of The Scripps Research Institute, and Hirofumi Sakaguchi, Joshua Tokita, and Bechara Kachar of the Laboratory of Cellular Biology, National Institute on Deafness and Other Communication Disorders, National Institutes of Health.


###
Funding for the study was principally provided by the NIDCD. Other NIH institutes and centers that contributed funding were the National Institute of General Medical Sciences (NIGMS), the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), and the National Center for Research Resources (NCRR).

About The Scripps Research Institute

The Scripps Research Institute is one of the world's largest independent, non-profit biomedical research organizations, at the forefront of basic biomedical science that seeks to comprehend the most fundamental processes of life. Scripps Research is internationally recognized for its discoveries in immunology, molecular and cellular biology, chemistry, neurosciences, autoimmune, cardiovascular, and infectious diseases, and synthetic vaccine development. Established in its current configuration in 1961, it employs approximately 3,000 scientists, postdoctoral fellows, scientific and other technicians, doctoral degree graduate students, and administrative and technical support personnel. Scripps Research is headquartered in La Jolla, California. It also includes Scripps Florida, whose researchers focus on basic biomedical science, drug discovery, and technology development. Currently operating from temporary facilities in Jupiter, Scripps Florida will move to its permanent campus in 2009.

PostPosted: Wed Sep 12, 2007 1:41 pm    Post subject: First 'Modern' Ears Found

First 'Modern' Ears Found
By Charles Q. Choi, Special to LiveScience

posted: 11 September 2007 08:54 pm ET

The first backboned creatures to conquer land were largely deaf, lacking anatomical features whereby tiny bones help transmit airborne sounds into the inner ear. Advanced hearing was assumed to have evolved shortly before the emergence of dinosaurs, roughly 200 million years ago.

For the full article:

http://www.livescience.com/ani.....e_ear.html

PostPosted: Sat Sep 29, 2007 8:10 am    Post subject: Listen and Learn

Listen and Learn
Emily Sohn


Sept. 26, 2007

If you want to learn anything at school, you need to listen to your teachers. Unfortunately, millions of kids can't hear what their teachers are saying. And it's not because these students are goofing off.
Often, it's the room's fault. Faulty architecture and building design can create echo-filled classrooms that make hearing difficult.

In recent years, scientists who study sound have been urging schools to reduce background noise, which may include loud air-conditioning units and clanging pipes. They're also targeting outdoor noises, such as highway traffic.

For the full article:

http://www.sciencenewsforkids......ature1.asp

PostPosted: Mon Oct 15, 2007 10:30 am    Post subject: Discovery Helps Explain How We Hear Whispers

Discovery Helps Explain How We Hear Whispers
By Tuan C. Nguyen, LiveScience Staff Writer

posted: 12 October 2007 01:27 pm ET

Researchers have found a tiny mechanism deep inside the ear that likely helps us hear whispers. The finding could eventually help companies design better hearing aids and other devices for restoring hearing.

Scientists probed the cochlea, a part of the inner ear where physical sound is translated into electrical signals for the brain.

For the full article:

http://www.livescience.com/hea.....anism.html

PostPosted: Wed Oct 24, 2007 2:22 pm    Post subject: Hearing changes how we perceive gender

Northwestern University
24 October 2007

Hearing changes how we perceive gender

EVANSTON, Ill. --- Think about the confused feelings that occur when you meet someone whose tone of voice doesn’t seem to quite fit with his or her gender.

A new study by neuroscientists from Northwestern University focuses on the brain’s processing of such sensory information about another’s gender to examine whether hearing fundamentally changes visual experience.

The study concludes that it does, adding to the provocative evidence for multi-sensory processing of the world that has been emerging in recent years.

“Auditory-Visual Cross-Modal Integration in Perception of Face Gender” was published in a recent issue of Current Biology. The study’s co-authors are investigators at Northwestern’s Visual Perception, Cognition and Neuroscience Laboratory: lead author Eric Smith, a graduate student; Marcia Grabowecky, research assistant professor of psychology; and Satoru Suzuki, associate professor of psychology in the Weinberg College of Arts and Sciences at Northwestern.

“Researchers have long thought that one part of the brain does vision and another does auditory processing and that the two really don’t communicate with each other,” said Grabowecky. “But emerging research suggests that rich information from different senses come together quickly and influence each other so that we don’t experience the world one sense at a time.”

The Northwestern study suggests that sensory interactions are happening at a very early level and tones of voices indeed fundamentally change visual processing.

“For our study, we used simple tones with no explicit gender information to get a window into how vision and audition work together to process gender information,” Grabowecky said. “Unlike stereotypical voices, the tones only hinted at male and female characteristics, and by coupling them with ambiguous faces, we were able to see how processing of various pitches affected vision very early in the sensory process.”

The study builds upon scarce scientific evidence supporting the idea that sounds can alter how masculine or feminine a person looks.

“Our vision can bias our experience of other senses, such as hearing,” said Smith. “We hear, for example, the ventriloquist’s voice coming from the dummy. In this study we wanted to see if hearing could change our visual experience.”

“We learn early on what auditory and visual characteristics accompany female and male voices, starting with our earliest experiences with our mothers and fathers,” said Grabowecky. “The question from the neuroscience perspective is when in the processing of perceptual information do auditory and visual senses interact with each other? How does the brain do this?”

To test whether a sound can influence perception of a face’s gender, the researchers digitally morphed male and female faces to create androgynous faces not easily categorized as male or female. Study participants were asked to look at the faces while listening to brief auditory tones, which fell within the fundamental speaking frequency range of either male or female voices.

In the initial stage of auditory processing, sounds are decomposed into basic frequency components, the lowest one called the fundamental frequency and higher ones called the harmonics. The fundamental frequency in the human voice typically falls between about 100 to 150 Hz for males and 160 to 300 Hz for females. Roughly speaking, the fundamental frequency determines the perceived pitch (lower for men and higher for women), and the harmonics add timbre (the quality of human voice).
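
For a concrete sense of the stimuli described here, the short Python sketch below generates pure tones whose frequencies fall inside the male and female fundamental-frequency ranges quoted above. The particular frequencies, duration, and sample rate are illustrative assumptions rather than the study's actual parameters.

import numpy as np

# Illustrative only: pure tones inside the male (~100-150 Hz) and female
# (~160-300 Hz) fundamental-frequency ranges mentioned above. The exact
# frequencies, duration, and sample rate are assumptions, not the values
# used in the Northwestern experiments.

SAMPLE_RATE = 44_100  # samples per second
DURATION_S = 0.5      # tone length in seconds

def pure_tone(frequency_hz: float) -> np.ndarray:
    """Return a sine wave at the given frequency, with amplitude in [-1, 1]."""
    t = np.linspace(0.0, DURATION_S, int(SAMPLE_RATE * DURATION_S), endpoint=False)
    return np.sin(2.0 * np.pi * frequency_hz * t)

male_range_tone = pure_tone(120.0)    # within the typical male F0 range
female_range_tone = pure_tone(220.0)  # within the typical female F0 range

A bare sine wave in either range carries no explicit gender information, which is exactly why the researchers used such tones to probe early auditory processing.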

In higher auditory brain areas, these frequencies are put back together to be coded as a human voice. The researchers took advantage of the fact that pure tones can be used to deliver individual frequency components that are registered in early auditory brain areas.

The findings showed that when an androgynous face was paired with a pure tone that fell within the female fundamental-frequency range, people were more likely to report that the ambiguous face was that of a female. But when the same face was paired with a pure tone in the male fundamental-frequency range, people were more likely to see a male face. (The bias did not occur when a face was paired with a pure tone that was too low or too high to be in the typical speaking range.)

“The strength of the study is that pure tones sound like beeps, and they primarily activate early stages of auditory processing,” Grabowecky said. “We think that the effect demonstrates a direct input from early auditory processing to visual perception.”

When people were forced to guess whether the tones were in the male range, the female range or outside of the typical speaking frequency range, their guesses were inaccurate and relative. In other words, when people heard a pair of pure tones, they tended to hear the higher tone as feminine and the lower tone as masculine, regardless of the actual frequencies of the tones.

“Such relativity is not surprising, because our auditory experience depends on relative, rather than absolute, frequencies as most useful and entertaining auditory information, such as speech and music, is carried by how sound frequencies change over time,” Grabowecky said.

Absolute frequencies do not matter much, as we readily understand speech spoken by people with low and high voices and enjoy songs regardless of the keys in which they are played. In contrast, it is the “neglected” absolute-frequency information that influences visual perception of gender.

“A conscious impression of your voice is not what enhances your look of masculinity or femininity,” said Suzuki. “Sounds seem to influence visual gender in a much more fundamental way on the basis of their absolute frequencies processed in early auditory brain areas.”

The researchers focused on gender perception, because people have such a strong need to categorize people as male or female. “We all know the feeling of meeting a person who is very androgynous,” said Smith. “We simply need to know and will use any information at our disposal to identify a person’s gender. It is probably quite evolutionarily adaptive to be able to accurately tell males from females, as far as propagation of one’s genes is concerned.”

What is on the horizon?

“If sound can implicitly bias visual gender perception, then we need to consider whether other senses, such as smell, might yield similar effects,” said Smith. “Future studies might use masculine and feminine colognes, or even human pheromones to bias people to see androgynous faces as either male or female. With the possibility of other senses biasing the way that we see the world, our visual experience of gender might turn out to be much more than meets the eye.”


###
NORTHWESTERN NEWS: www.northwestern.edu/newscenter/

PostPosted: Wed Oct 31, 2007 2:04 pm    Post subject: Hearing Things? New Study Might Explain Why

Hearing Things? New Study Might Explain Why
By Jeanna Bryner, LiveScience Staff Writer

posted: 31 October 2007 02:00 pm ET

Shhh! Did you hear that? The ghostly whispers that grab your attention could be the result of chit-chatting nerve cells in your ears that were there in the womb.

The finding, reported in the Nov. 1 issue of the journal Nature, has implications for treating a phenomenon called tinnitus in which people hear annoying high-pitched sounds with no apparent source.

For the full article:

http://www.livescience.com/hea.....-ears.html