
Israel Society for Auditory Research (ISAR) 2014 Annual Scientific Conference

Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel, October 21, 2014
Published/Copyright: August 5, 2014

ISAR Scientific Committee

Prof. Karen B. Avraham (President), Prof. Joseph Attias (Treasurer), Dr. Cahtia Adelman, Dr. Karen Banai, Dr. Leah Fostick, Dr. Yael Henkin, Prof. Liat Kishon-Rabin, Dr. Limor Lavie, Prof. Michal Luntz, Dr. Ronen Perez

ISAR Conference Organizing Committee

Dr. Karen Banai, Dr. Yael Henkin, Prof. Karen B. Avraham

Abstracts, original articles and reviews to be published in the Special Issue on Auditory Research of the Journal of Basic and Clinical Physiology and Pharmacology

Guest Co-editors: Prof. Karen B. Avraham, Dr. Karen Banai

Perception of extrema in glissandos and double glissandos as a function of extent and duration

N. Amir, M. Moran and L. Lagziel

Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel

Perception of dynamic tones is important for correct interpretation of pragmatic intent, as expressed through changes in intonation. The raw acoustic values of the F0 contour, however, are not necessarily those perceived by the listener. This has been demonstrated in previous studies on trained musicians, which have shown that the perceived F0 (pitch) of synthesized steadily rising or falling tones, termed “glissandos,” is a weighted average over approximately the last third of the tone. The present study was intended to examine a) how these results generalize to “double glissandos” having a rising-falling or falling-rising contour, and b) how such tones are perceived by the general population rather than by musicians. Nineteen subjects heard 36 single and double glissandos, with different durations (100-600 ms) and extents (3, 6 and 12 semitones). They were asked to match a steady tone to the pitch extrema. Results show that listeners have a strong tendency to judge the maximum of rising and rising-falling glissandos as 6 semitones, regardless of duration and actual extent, and the minimum of falling and falling-rising glissandos as less than 3 semitones. This low sensitivity is surprising given that such perceptual capabilities are required in daily communication. Further research is necessary to determine whether this is due to subjects’ difficulties with the research paradigm itself, or whether it reflects the true capabilities of the general population as opposed to musicians.
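[Editor's illustration, not the authors' stimulus code: the sketch below synthesizes a glissando whose F0 rises by a fixed number of semitones and estimates its pitch as a simple average over the last third of the contour, following the weighted-average account cited above. The 44.1 kHz sampling rate and the uniform weighting are assumptions.]

import numpy as np

FS = 44100  # sampling rate in Hz (assumption)

def glissando_f0(f_start_hz, extent_semitones, duration_s):
    # Exponential F0 contour: a constant rate of change in semitones per second.
    t = np.linspace(0.0, duration_s, int(FS * duration_s), endpoint=False)
    return f_start_hz * 2.0 ** ((extent_semitones / 12.0) * (t / duration_s))

def synthesize(f0_contour):
    # Render the contour as a tone by accumulating instantaneous phase.
    phase = 2.0 * np.pi * np.cumsum(f0_contour) / FS
    return np.sin(phase)

def last_third_average(f0_contour):
    # Simple stand-in for the perceived pitch: mean F0 over the final third.
    return f0_contour[int(len(f0_contour) * 2 / 3):].mean()

f0 = glissando_f0(440.0, extent_semitones=6, duration_s=0.3)  # 6 semitones, 300 ms
tone = synthesize(f0)
print(f"final F0: {f0[-1]:.1f} Hz; last-third average: {last_third_average(f0):.1f} Hz")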

GPSM2/LGN is a planar cell polarity effector and is required for hearing and stereocilia morphogenesis in mice

Y. Bhonker1, S. Shivatzki1, K. Ushakov1, L. Amir2, T. Elkan-Miller1, S. Myoung Kim3, P. Chen3, F. Matsuzaki4, D. Sprinzak2 and K. B. Avraham1

1Department of Human Molecular Genetics and Biochemistry, Sackler Faculty of Medicine and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; 2Department of Biochemistry, Wise Faculty of Life Sciences, Tel Aviv University, Tel Aviv, Israel; 3Department of Cell Biology, School of Medicine, Emory University, Atlanta, GA, USA; 4Laboratory for Cell Asymmetry, RIKEN Center for Developmental Biology, Japan

Mutations in GPSM2, also called LGN, were previously identified in human families showing severe-to-profound congenital hearing loss. The human mutations described to date are all frameshift mutations leading to premature truncation of the protein. Lgn is a key gene in the spindle orientation pathway and its role is conserved across tissues and throughout evolution. Unlike other tissues where Lgn is active, the organ of Corti is postmitotic, suggesting a different role for Lgn.

To understand the molecular basis of Lgn function in the inner ear, we obtained a mouse model containing a truncation of the protein’s C-terminal motifs, which are responsible for specific protein-protein interactions (LgnΔC/ΔC). As with the human patients, LgnΔC/ΔC mice are profoundly deaf. Gene expression analysis demonstrates increased expression of the transcript in mutant mice, suggesting either a longer mRNA half-life or that the C-terminus is involved in a transcriptional auto-inhibition mechanism. Western blot analysis detected a stable truncated protein in both heterozygous and homozygous mice. Scanning electron microscopy and immunofluorescence show that stereocilia in hair cells fail to adopt the characteristic V-shape required for hearing, showing either flattened or split phenotypes. Slight stereocilia misorientation suggests that Lgn interacts with the planar cell polarity (PCP) pathway, a signaling network known to control stereocilia orientation. Indeed, a PCP mutant shows a misoriented Lgn compartment in hair cells; however, Lgn mutants do not show changes in PCP protein expression or localization. Our findings join other reports in supporting Lgn’s role as an effector of the PCP pathway and a regulator of stereocilia morphogenesis.

Research supported by the Human Frontier Science Program RGP0012/2012.

Next generation sequencing leads to discovery of new pathogenic variants in the Middle East

M. Birkan1, Z. Brownstein1, A. Abu-Rayyan2, O. Isakov3, M. Sokolov1, F.T.A. Martins1, N. Danial-Farran1,4, S. Shalev4, M. Frydman1,5, N. Shomron3, M. Kanaan2 and K. B. Avraham1

1Department of Human Molecular Genetics and Biochemistry; 2Department of Biological Sciences, Bethlehem University, Bethlehem, Palestinian Authority; 3Department of Cell and Developmental Biology, Sackler Faculty of Medicine and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; 4Genetics Institute, Ha’Emek Medical Center, Afula, Israel and Rappaport Faculty of Medicine, Technion-Israel Institute of Technology, Haifa, Israel; 5Sheba Medical Center, Tel Hashomer, Israel

Identifying new genes is a challenge in human genetics, particularly in heterogeneous pathologies such as deafness. NGS has become the optimal method for discovery of new deafness genes and their pathogenic variants in Middle Eastern and other populations. We used NGS to identify genes responsible for hereditary hearing loss in Middle Eastern deaf patients. The genes included those that have been found to be associated with hearing loss and human orthologs of genes related to deafness in the mouse. About a third of the cases were solved by these experiments, doubling the number of deafness genes in the Middle Eastern population. These included mutations in OTOF, TMC1, USH2A, SLC26A4, MYO7A, MYO15A, TECTA, COCH and POU3F4. Several variants in genes could not be resolved, for example, in the genes ATP4B and ATOH1. While the prediction scores strongly suggested the variants would be pathogenic, the small family structure precluded using segregation of the variant as a criterion. The discovery of these variants has worldwide implications, since many mutations first found in the Middle East have turned out to be present in other populations. With respect to research, these variants have provided insight into the mechanisms of deafness.
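[Editor's illustration, not the study's pipeline: a minimal sketch of the kind of variant triage described, keeping variants with strong pathogenicity prediction scores and then testing segregation with hearing loss in the family; with few genotyped relatives this second filter is underpowered, which is why small families preclude segregation as a criterion. The record fields, the recessive model and the score threshold are assumptions.]

from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    score: float      # predicted pathogenicity (a CADD-like score; assumed field)
    genotypes: dict   # sample id -> count of alternate alleles (0, 1 or 2)

def segregates_recessive(v, affected, unaffected):
    # Recessive model: affected members homozygous for the variant,
    # unaffected members carrying at most one copy.
    return (all(v.genotypes.get(s, 0) == 2 for s in affected)
            and all(v.genotypes.get(s, 0) < 2 for s in unaffected))

def candidates(variants, affected, unaffected, min_score=20.0):
    return [v for v in variants
            if v.score >= min_score and segregates_recessive(v, affected, unaffected)]

family = {"affected": ["II-1", "II-3"], "unaffected": ["I-1", "I-2"]}
v = Variant("OTOF", score=28.4, genotypes={"II-1": 2, "II-3": 2, "I-1": 1, "I-2": 1})
print(candidates([v], **family))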

Function of ELMOD1, an Arf GTPase signaling protein, in the mammalian inner ear

S. Chechik1, S. Shivatzki1, E. Richard2, S. Riazuddin2, R.A. Kahn3 and K.B. Avraham1

1Department of Human Molecular Genetics and Biochemistry, Sackler Faculty of Medicine and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; 2Department of Otorhinolaryngology, School of Medicine, University of Maryland, Baltimore, MD, USA; 3Department of Biochemistry, Emory University School of Medicine, Atlanta, GA, USA

Members of the Ras superfamily of GTPases are each regulated by GTPase activating proteins (GAPs), which often possess effector functions. The Arf family of regulatory GTPases is made up of the Arf, Arl and Sar proteins. GAPs for members of the ARF family include the 11 families of ARF GAPs and the more recently discovered family of ELMO domain-containing proteins, the ELMODs. Three mammalian ELMODs (ELMOD1-3) have been described, and alterations in ELMOD3 are associated with human nonsyndromic hearing impairment (Jaworek et al. 2013. PLoS Genetics 9:e1003774). We describe a new gene-targeted mutation for another family member, ELMOD1. Auditory brainstem response (ABR) analysis was used to measure hearing function and revealed that ELMOD1 homozygous mutant mice are deaf compared to wild-type and heterozygous control littermates. The mice also exhibited severe behavioral defects indicative of vestibular dysfunction. Scanning electron microscopy revealed striking morphological defects in the cochlear hair bundles, as well as degeneration of the hair cells. Analysis of Elmod1 mRNA levels in cochlear and vestibular sensory epithelium of wild-type mice revealed an age-dependent expression pattern. These results indicate a role for ELMOD1 in the inner ear and lead us to speculate that at least one member of the ARF family itself lies immediately upstream of ELMOD1 in a signaling pathway that is critical for proper auditory and vestibular function.

Bone conduction thresholds without bone vibrations

S. Chordekar1, C. Adelman2 and H. Sohmer3

1Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Chaim Sheba Medical Center, Tel Hashomer, Israel; 2Speech & Hearing Center, Hadassah University Hospital, Jerusalem, Israel, and Department of Communication Disorders, Hadassah Academic College, Jerusalem, Israel; 3Department of Medical Neurobiology (Physiology), Hebrew University-Hadassah Medical School, Jerusalem, Israel

The clinical bone vibrator elicits auditory sensation when it is applied to skin sites not overlying skull bone, e.g., the neck and thorax. This has been called soft tissue conduction (STC) or non-osseous bone conduction (BC). Classical BC, based on skull vibrations, leads to vibrations of the bony walls of the external, middle and inner ears. To assess the mechanisms of STC stimulation compared to classical BC, we determined the thresholds of normal humans to bone vibrator stimulation applied at the forehead with a standard application force of 5 Newtons (headband). These thresholds were compared to the minimal bone vibrator stimulus intensities, applied to the forehead of a dry human skull with the same 5 N force, required to produce skull bone vibrations (measured by a sensitive laser vibrometer). The human thresholds were 7 dB (0.5 kHz), 26 dB (1.0 kHz), 17 dB (2.0 kHz) and 27 dB (4.0 kHz), significantly lower than the minimal stimulus intensities producing skull vibrations. Thus it seems that hearing at threshold was elicited at intensities lower than those that produce skull bone vibrations. This supports the suggestion that hearing at threshold is induced by the STC (non-osseous BC) component of BC at the stimulation site.

Two genes associated with hearing loss in a consanguineous family are identified by next-generation sequencing

N. Danial-Farran1,2, Z. Brownstein2, O. Yizhar-Barnea2, K. B. Avraham2 and S. Shalev1

1Genetics Institute, Ha’Emek Medical Center, Afula, Israel and Rappaport Faculty of Medicine, Technion-Israel Institute of Technology, Haifa, Israel; 2Department of Human Molecular Genetics and Biochemistry, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel

Next-generation sequencing (NGS) has brought a significant acceleration in deafness-gene discovery. Nevertheless, a significant portion of hereditary hearing loss remains unsolved. This is particularly true of the Middle Eastern population, with its many different ethnic groups and high rates of consanguinity. We searched for the genetic basis of hearing loss in a large family with several consanguineous marriages, Family H. Three members of Family H underwent whole exome sequencing (WES). Segregating variants were validated by Sanger sequencing. We discovered two variants in two different genes, OTOF and SLC25A21, in different branches of the family. The OTOF mutation was previously found to lead to auditory neuropathy. Subsequent clinical analysis of this branch of Family H determined that they too have auditory neuropathy. The second branch harbored a variant in SLC25A21, a mitochondrial transporter. Functional assays are ongoing to determine the pathogenicity of the SLC25A21 mutation and its contribution to hearing impairment. The finding that two different genes might be responsible for hearing loss in the same family with high rates of consanguinity emphasizes the utility of WES for resolving the genetic basis of hearing loss. However, this search also highlights the complexity of determining the causative mutation. Functional assays to evaluate the defective proteins will provide precise conclusions.

Human audiometric phenotypes of age-related hearing loss

J.R. Dubno

Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA

A significant result from studies of age-related hearing loss involves the degeneration of the cochlear lateral wall, which is responsible for producing and maintaining the endocochlear potential (EP). Age-related declines in the EP systematically reduce the voltage available to the cochlear amplifier, which reduces its gain more at higher than at lower frequencies. This “metabolic presbyacusis” largely accounts for age-related threshold elevations observed in laboratory animals raised in quiet and may underlie the characteristic audiograms of older humans: a mild, flat hearing loss at lower frequencies coupled with a gradually sloping hearing loss at higher frequencies. In contrast, sensory losses resulting from ototoxic drug and noise exposures typically produce normal thresholds at lower frequencies with an abrupt transition to thresholds between 50 and 70 dB at higher frequencies. In addition to audiograms, supporting evidence of metabolic and sensory presbyacusis phenotypes in older humans can be derived from demographic information (age, gender), environmental exposures (noise and ototoxic drug histories), and suprathreshold auditory function beyond the audiogram. These results and others support the view that, in the absence of noise or ototoxic drug exposures, age-related hearing loss should be viewed as a vascular, metabolic, neural disorder rather than a sensory disorder. [Work supported by NIH/NIDCD]

Speech recognition across the lifespan: Longitudinal changes from middle age to older adults

J.R. Dubno

Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA

Many large-scale studies of speech recognition in older adults report age-related declines, but these conclusions are often based on cross-sectional analyses that reveal differences in scores between age groups. Results are inconsistent with regard to contributions of age, gender, and health-related co-morbidities. Contradictory findings may also relate to cohort differences, which may confound group differences in cross-sectional studies. Most importantly, interpreting age-group differences in speech recognition is complicated by pure-tone thresholds and cognitive function that may change with increasing age, at rates that vary among individuals. In a longitudinal study, subjects serve as their own controls, thus minimizing effects of uncontrollable factors such as noise exposure, health history, and occupation. Moreover, longitudinal studies can measure age-related changes in hearing and speech recognition for groups and for individuals. As part of a longitudinal study of age-related hearing loss at the Medical University of South Carolina, which includes many biologic, auditory, and cognitive measures, pure-tone thresholds and word recognition in quiet and babble are being measured in a large sample of adults yearly or every 2-3 years. For the current analysis, the subject sample included 1,220 adults whose ages ranged from the 40s to the 90s at the time of enrollment, with longitudinal data spanning up to 25 years, including >16,000 scores; data analyses used generalized linear mixed models. Changes in hearing loss and speech recognition over time will be reported, and associations with gender, age, degree of hearing loss, and auditory and cognitive function will be discussed. [Work supported by NIH/NIDCD]
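[Editor's illustration, not the study's analysis code: a minimal sketch of this kind of longitudinal model, using a linear mixed model from statsmodels as a stand-in for the generalized linear mixed models mentioned above. The file and column names are hypothetical.]

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject per visit, with columns
# subject, years_since_enrollment, age_at_enrollment, gender, pta_db, word_score.
df = pd.read_csv("longitudinal_hearing.csv")

model = smf.mixedlm(
    "word_score ~ years_since_enrollment + age_at_enrollment + gender + pta_db",
    data=df,
    groups=df["subject"],                  # each subject serves as their own control
    re_formula="~years_since_enrollment",  # random intercept and slope over time
)
print(model.fit().summary())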

Hearing gain and otoacoustic emissions outcome of stapedotomy: does the prosthesis diameter matter?

N. Faranesh1,2, E. Magamse1, S. Zaaroura1, R. Zidane1,2 and A. Shupak2,3

1Department of Otolaryngology Head and Neck Surgery, Hopital Francais Saint Vincent De Paul, Nazareth, Israel; 2Unit of Otoneurology, Lin Medical Center, Haifa, Israel; 3Bruce Rappaport Faculty of Medicine, Technion-Israel Institute of Technology, Haifa, Israel

Background: Stapes surgery has evolved in recent years toward small-fenestra stapedotomy, decreasing the occurrence of perilymphatic fistula and post-operative vertigo and improving high-tone hearing results. However, there is still debate on the optimal diameter of the prosthesis to be used.

We compared hearing and otoacoustic emission results employing 0.4 mm and 0.6 mm prostheses.

Methods: A prospective, randomized, single-blinded study. Candidates for stapedotomy were assigned to insertion of either a 0.4 mm or a 0.6 mm prosthesis. All patients were operated on by the same surgeon employing an identical technique. Pure tone and speech audiometry, transient evoked otoacoustic emissions (TEOAEs) and distortion product otoacoustic emissions (DPOAEs) were recorded before surgery and 3 and 12 months post-operatively.

Results: Results for 10 patients who were followed for 3 and 12 months are reported. The average bone conduction gain was 0.4±4.8 and 8±6.88 dBHL (p=0.053) after 3 months, and -2±5.05 and 6.75±3.6 dBHL (p=0.014) after 12 months, for the 0.4 and 0.6 mm prostheses respectively. The average over-closure reached 0±7.6 and 8.64±5.45 dBHL (p=0.06) after 3 months, and -2.02±5.2 and 6.99±3.63 dBHL (p=0.013) after 12 months, for the 0.4 and 0.6 mm prostheses respectively. No differences were found in the air conduction, air-bone gap and discrimination gains. Also, no differences were found in either the average SNRs of the TEOAE and DPOAE responses or in the number of patients having detectable TEOAEs and DPOAEs post-operatively.

Conclusion: The results show statistically significant differences in bone conduction gain and over-closure after 12 months in favor of the 0.6 mm prosthesis. These findings imply an advantage of the 0.6 mm prosthesis in bone conduction as well as in resonance characteristics, explaining the better over-closure effect.

The effect of stimulus parameters on performance of spectral temporal order judgment (TOJ)

L. Fostick1 and H. Babkoff2

1Department of Communication Disorders, Ariel University, Ariel, Israel; 2Department of Psychology, Ashkelon Academic College, Ashkelon, Israel

Background: Spectral and spatial temporal order judgment (TOJ) paradigms were designed to measure auditory temporal processing. We suggest that spectral TOJ performance involves at least two different perceptual mechanisms: 1) the holistic judgment of the pattern of two tones (thresholds of ISI<5msec); and 2) the direct judgment of the order of the tones (thresholds of ISI=60-90msec). In this study, we examined the effect of frequency range and stimulus duration on participants’ performance of spectral TOJ.

Methods: 193 students performed spectral TOJ with tone pairs at one of the following frequency combinations (all with a tone duration of 15msec): 300-600Hz, 600-1200Hz, 1-1.1kHz, 1-1.8kHz, 1-2kHz, 1-3.5kHz, 3-6kHz, or with one of the following tone durations (all with frequencies of 1-1.8kHz): 5, 10, 15, 20, 30, 40msec.

Results: Low-frequency tone pairs (300-600Hz) increased ISI<5msec thresholds, and higher-range tone pairs (1-3.5kHz, 3-6kHz) increased ISI=60-90msec thresholds. Tone durations of 10msec or longer increased ISI<5msec thresholds, but shorter tone durations (5msec) increased ISI=60-90msec thresholds.

Conclusions: These results might suggest that the spectral cue is more dominant than the temporal cue when performing spectral TOJ.
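[Editor's illustration of the paradigm's stimulus structure, not the study's code: the sketch below builds one spectral TOJ trial, i.e., two brief tones of different frequencies separated by an inter-stimulus interval (ISI). The sampling rate and onset/offset ramps are assumptions.]

import numpy as np

FS = 44100  # sampling rate in Hz (assumption)

def tone(freq_hz, dur_ms, ramp_ms=2.0):
    t = np.arange(int(FS * dur_ms / 1000.0)) / FS
    y = np.sin(2.0 * np.pi * freq_hz * t)
    n = int(FS * ramp_ms / 1000.0)
    y[:n] *= np.linspace(0.0, 1.0, n)   # onset ramp to avoid spectral splatter
    y[-n:] *= np.linspace(1.0, 0.0, n)  # offset ramp
    return y

def toj_trial(f_a, f_b, isi_ms, dur_ms=15.0, a_first=True):
    # Two tones of different frequencies separated by a silent ISI;
    # the listener judges which frequency came first.
    gap = np.zeros(int(FS * isi_ms / 1000.0))
    first, second = (f_a, f_b) if a_first else (f_b, f_a)
    return np.concatenate([tone(first, dur_ms), gap, tone(second, dur_ms)])

trial = toj_trial(1000.0, 1800.0, isi_ms=60.0)  # a 1-1.8 kHz pair at a 60 msec ISI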

Thresholds measured at non-osseous cartilage sites via indirect contact with bone vibrator support the soft tissue conduction mechanism

M. Geal-Dor1,2, C. Adelman1,2 and H. Sohmer3

1Speech and Hearing Center, Hadassah University Hospital, Jerusalem, Israel; 2Department of Communication Disorders, Hadassah Academic College, Jerusalem, Israel; 3Department of Medical Neurobiology (Physiology), Institute for Medical Research Israel-Canada, Hebrew University-Hadassah Medical School, Jerusalem, Israel

In addition to air conduction (AC) and bone conduction (BC) auditory stimulation, an additional mode of stimulation has been described in which the clinical bone vibrator is applied to non-osseous sites. This mode has been termed soft tissue conduction (STC). In order to gain insight into the mechanism(s) of STC, thresholds to 1 kHz and 2 kHz stimulation were measured in 10 subjects at three test sites: one bony site, the mastoid (osseous BC), and two non-osseous sites on the external ear: the skin at the cavum concha and the tragus of the pinna. The thresholds at these sites were determined via indirect contact, achieved by applying the vibrator in a layer of ultrasound gel without direct contact with the skin. Thresholds at the non-osseous sites were only 11 to 19 dB (depending on frequency) greater than the threshold at the nearby mastoid. As a control, the bone vibrator held in the air without any contact with the skin yielded thresholds significantly higher than via gel, confirming that the subjects were responding to STC stimulation and not to AC sounds from the bone vibrator. Furthermore, the threshold for direct contact with the skin at the non-osseous site was significantly lower than for indirect contact via gel at the same site, confirming that no direct contact was made in the gel condition. These results reinforce the previous findings that led to the suggestion that the threshold to BC stimulation at the mastoid or forehead actually results from stimulation of the skin and soft tissues (non-osseous STC) overlying the BC site.

Should loudness summation be considered in the programming of children implanted simultaneously with bilateral cochlear implants?

R. Kaplan Neeman1,3, C. Muchnik1,3, Z. Yakir1, F. Bloch1, L. Lipshutz1, Y. Shapira2, L. Migirov2, M. Hildesheimer1,3 and Y. Henkin1,3

1Hearing, Speech & Language Center and 2Department of Otolaryngology, Head and Neck Surgery, Sheba Medical Center, Tel Hashomer, Israel; 3Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel

An increasing number of children are currently implanted with bilateral cochlear implants simultaneously. When providing bilateral acoustic amplification, binaural loudness summation is taken into account and implemented in prescriptive fitting rationales such as NAL-NL1/2. In bilateral CI programming, however, there are no specific prescriptive rationales. Programming methods based on objective measures such as the electrically evoked compound action potential (ECAP) have been developed to assist in programming of infants and young children. These programming methods, however, were developed in unilateral CI recipients, and their utilization in the programming of bilateral CI users is not straightforward. The purpose of the present study was, therefore, to retrospectively evaluate ECAP thresholds in relation to behavioral programming levels in children with bilateral CI vs. those with unilateral CI. For this purpose we studied two groups of children who were matched for age at implantation (10-36 months) and duration of implant use (at least 6 months). Group 1 included children with bilateral CI implanted simultaneously, and group 2 included children with unilateral CI. The relations between ECAP thresholds and behavioral comfortable (C) levels were compared between the groups at basal, medial and apical electrodes. Results indicated that in the unilateral group C levels were higher than ECAP thresholds, similar to previous findings in adult CI recipients. In contrast, in the bilateral group, C levels were comparable to ECAP thresholds. These results may reflect loudness summation and thus support the need for a binaural loudness summation “correction” for children with bilateral CI implanted simultaneously.

Auditory learning in normal hearing and hearing impaired older adults

H. Karawani1,2, T. Bitan1, J. Attias1 and K. Banai1

1Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel; 2Speech and Hearing Center, Department of Otolaryngology, Head and Neck Surgery, Rambam Health Care Campus, Haifa, Israel

Speech perception and communication in noisy environments become more difficult as we age. These difficulties are further exacerbated by age-related hearing loss (ARHL). Auditory training can improve speech perception in normal-hearing young adults. Whether the same is true for older adults, especially those with ARHL, is not clear. The aim of the current study was to compare the outcomes of training on speech perception under adverse conditions between normal-hearing older adults (n = 12) and those with ARHL (n = 16). Participants received 4 weeks of home-based auditory training designed to improve speech perception in adverse listening conditions: (1) speech in noise, (2) time-compressed speech, and (3) competing speakers. Significant learning effects were found in both groups. Although normal-hearing listeners outperformed ARHL participants across conditions both before and after training, the amount of learning was similar in the two groups. We conclude that ARHL does not interfere with perceptual learning. [Work supported by Marie Curie IRG]

Selective auditory attention: do long-term users of cochlear implants and hearing aids match the performance of normal-hearing listeners?

L. Kishon-Rabin1, R. Salem2, J.Y. Sichel2, R. Perez2 and O. Segal1

1Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; 2Department of Otolaryngology, Head and Neck Surgery, Shaare Zedek Medical Center, Jerusalem, Israel

The purpose of the present study was twofold: (1) to compare auditory selective attention of prelingual hearing-impaired (HI) listeners who use cochlear implants (CI) to that of listeners fitted with hearing aids (HA), and (2) to determine whether these HI listeners differed from normal-hearing (NH) listeners. Thirteen CI users (mean age = 19 yrs), eight HA users (mean age = 30 yrs) and 13 NH listeners (mean age = 25 yrs) participated in a lexical-prosody emotional auditory Stroop task. All HI participants were habilitated at a young age and have used their devices for a minimum of 15 years. Test stimuli were words with a sad or happy meaning produced with the appropriate (congruent) or inappropriate (incongruent) happy/sad prosody. Listeners were instructed either to ignore the lexical content and decide whether the tone of voice was happy or sad, or to ignore the prosody and make a lexical decision. Performance measures were % correct and reaction time (RT). Participants were also tested on memory, language and speech perception abilities. The results showed that both HI groups performed significantly worse (in % correct) than the NH group on the lexical and prosody attention tasks. No differences in % correct were found between the HI groups, despite the fact that the HA users perceived prosody better than the CI users. Reaction times were longest for the CI users and shortest for the NH listeners. Strong Stroop effects (in RT) were found for both HI groups, and these were correlated with visual attention. Overall, our findings suggest that despite early habilitation and long-term use of a sensory device, HI users were less efficient than their NH peers in selectively attending to lexical or prosodic information.

From sentences to words: perceptual learning of time-compressed sentences generalizes to single time-compressed words

M. Menheim, M. Shoshany and K. Banai

Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel

The identification of time-compressed speech, an artificially created form of rapid speech, improves rapidly with exposure to a few time-compressed sentences. This rapid learning was shown to continue and increase with further training. Nevertheless, it is yet unknown whether training on time-compressed sentences generalizes to individual words. The goal of the current study was therefore to determine whether training on time-compressed sentences generalizes to single time-compressed words that appear in the trained sentences and whether this learning transfers to novel words. Twenty listeners participated in a pre-test in which they transcribed a series of 20 sentences and a post-test in which they transcribed the same sentences again as well as 80 individual words (half of which were taken from the sentences). Between the pre- and post-tests, half the listeners completed three training sessions in which they had to verify the semantic plausibility of 300 adaptively-compressed sentences. Trained listeners improved significantly more than untrained ones between the two tests. They also outperformed untrained listeners on the identification of single words (including the novel ones). These data therefore suggest that training on time-compressed sentences induces generalization to individually compressed words that is not limited to words encountered during training. [Work supported by the National Institute of Psychobiology in Israel]
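[Editor's illustration for readers unfamiliar with the stimuli, not the study's pipeline: a minimal sketch of producing time-compressed speech with a phase-vocoder time stretch, which shortens duration while preserving pitch. The file name and compression rate are assumptions.]

import librosa
import soundfile as sf

y, sr = librosa.load("sentence.wav", sr=None)       # hypothetical recording
y_fast = librosa.effects.time_stretch(y, rate=2.0)  # rate 2.0 halves the duration
sf.write("sentence_compressed.wav", y_fast, sr)     # pitch is preserved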

Different patterns of spatial release from masking in first and second language

N. Omar, M. Taha’a, L. Lavie and K. Banai

Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel

Models of non-native speech perception suggest that native and non-native speakers might give different weights to acoustic and linguistic cues, especially under adverse listening conditions. Here we ask whether the pattern of spatial release from masking differs between native and highly proficient non-native listeners. To this end, speech perception was tested in 20 native Hebrew speakers and in 24 non-native speakers (L1: Arabic) under three conditions differing in the spatial separation of the target (bi-syllabic Hebrew words) and the masker (4-talker babble noise): 1) frontal masking: target and masker presented from the same (frontal) location; 2) unilateral masking: target presented from a frontal speaker and masker presented from either +45° or -45°; 3) bilateral masking: target presented from a frontal speaker and maskers presented simultaneously from +45° and -45°. In both native and non-native speakers, speech perception was more accurate in the unilateral-masking condition than in the frontal-masking condition. However, in contrast to native speakers, who performed less accurately in the bilateral-masking condition than in the frontal-masking condition, the perception of non-native listeners actually improved with bilateral masking. These data suggest that non-native listeners use subtle acoustic cues to a greater extent than native listeners when listening to speech in adverse conditions.

Decoding speech perception from single cell activity in humans

O. Ossmy1,2, I. Fried3,4 and R. Mukamel1,2

1Sagol School of Neuroscience, 2School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; 3Functional Neurosurgery Unit, Tel Aviv Medical Center and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel; 4Department of Neurosurgery, David Geffen School of Medicine and Semel Institute for Neuroscience, University of California at Los Angeles (UCLA), Los Angeles, CA, USA

Deciphering the content of continuous speech is a challenging task performed daily by the human brain. Here, we tested whether the activity of single cells in auditory cortex could be used to support such a task. We recorded neuronal activity from the auditory cortex of two neurosurgical patients while they were presented with a short video segment containing speech. Population spiking activity (∼20 cells per patient) allowed detection of word onsets and decoding of the identity of perceived words at significant accuracy levels (range 57-73%). The oscillation phase of local field potentials (8-12 Hz) also allowed decoding of word identity, although with lower accuracy (range 22-57%). Our results provide evidence that the spiking activity of a relatively small population of cells in human auditory cortex allows accurate deciphering of the content of ongoing speech and may have implications for developing brain-machine interfaces for patients with deficits in speech production.
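[Editor's sketch of the general decoding approach, not the study's analysis: classifying word identity from population spike-count vectors with cross-validation. The synthetic data, feature layout and logistic-regression classifier are assumptions; with random data the score sits at chance, whereas the study reports 57-73%.]

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_cells, n_words = 200, 20, 10   # ~20 cells per patient, as above
X = rng.poisson(5.0, size=(n_trials, n_cells)).astype(float)  # spike counts per trial
y = rng.integers(0, n_words, size=n_trials)                   # perceived-word labels

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f} (chance = {1 / n_words:.2f})")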

Perceived loudness of self-generated sounds is differentially modified by expected sound intensity

D. Reznik1,2, Y. Henkin3,4, O. Levy2 and R. Mukamel1,2

1School of Psychological Sciences, 2Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; 3Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; 4Hearing, Speech, and Language Center, Sheba Medical Center, Tel Hashomer, Israel

Our everyday environment provides an abundance of auditory stimuli at varying intensities (e.g., soft whispering or a startling vehicle horn). Research over the last several years suggests that both physiological and behavioral responses to auditory stimuli that are the direct consequence of our own actions are modified relative to an otherwise identical stimulus perceived in a passive manner. Such modification has been suggested to occur through a corollary discharge sent from motor to sensory cortices during voluntary actions. At the behavioral level, some studies report attenuated responses to self-generated sounds, while others report enhanced responses. Therefore the factors that govern the type of modification in perceptual sensitivity are unclear. In the current study, we examined whether the expected intensity of the generated sound plays a role in the type of such modification. To this end we used an auditory comparison task in which normal-hearing subjects (N = 19) were asked to decide which of two identical, consecutively presented 1 kHz tones was louder. The first tone was triggered by a button press whereas the second tone was externally triggered by a computer. The experiment consisted of blocks in which tones were presented near hearing threshold (5 dB sensation level) or at a supra-threshold level (75 dB HL). The percentage of reports (1st sound louder than 2nd) across the two presentations was taken as the dependent measure. We show that at near-threshold intensities, self-generated tones are perceived as louder than externally generated tones. Moreover, we demonstrate that at supra-threshold intensities the shift in perceived loudness is inverted, as further supported by a significant interaction between perceived tone loudness and sound intensity. Our results are compatible with previous models suggesting that a corollary discharge sent from motor cortex modifies activity in sensory cortex and sensory perception. The current results further demonstrate that this modification acts in an adaptive fashion that is governed by the expected intensity of self-generated action consequences.

Next-generation sequencing of small RNAs from inner ear sensory epithelium identifies microRNAs and defines regulatory pathways

K. Ushakov1, A. Rudnicki1, O. Isakov2, S. Shivatzki1, I. Weiss1, L.M. Friedman1, N. Shomron2 and K.B. Avraham1

1Department of Human Molecular Genetics and Biochemistry, 2Department of Cell and Developmental Biology, Sackler Faculty of Medicine and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel

In recent years it has become apparent that major RNA species in the mammalian genome are responsible for regulating the expression of its genes, which comprise only 2% of the genetic information. One such family is the microRNAs (miRNAs), small ∼22 nt-long molecules that, upon binding to target RNA transcripts, can induce their destabilization and degradation. It has been shown that miRNAs are essential for hearing and balance, and mutations can cause deafness in humans and mice.

To dissect the function of miRNAs in the inner ear, we performed high-throughput sequencing (RNA-Seq) on RNA isolated from mouse inner ear sensory epithelia. The sequencing led to the identification of 455 miRNAs in the cochlear and vestibular sensory epithelia, with 30 and 44 miRNAs found solely in the cochlea or vestibule, respectively. One example is miR-6715a-5p/-3p, located in an intron of the gene Tectb, which is associated with deafness. Temporal and spatial analysis revealed expression in the cochlear and vestibular epithelia and in the spiral and vestibular ganglia. Arhgap12, a protein of the RhoGAP family, was found to be a target of miR-6715a-3p, implicating this miRNA-target pair in cell adhesion, morphogenesis and actin reorganization in the inner ear.

Research supported by the Israel Science Foundation (grant no. 1320/11).

Auditory skill learning is less susceptible to modifications in children as compared to adults

Y. Zaltz1, D. Ari-Even Roth1, A. Karni2,3 and L. Kishon-Rabin1

1Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Israel; 2Department of Human Biology, Faculty of Natural Sciences & The E.J. Safra Brain Research Center for the Study of Learning & Learning Disabilities, Faculty of Education, University of Haifa, Israel; 3Division of Diagnostic Radiology, Chaim Sheba Medical Center, Tel Hashomer, Israel

Recent studies have shown that children can gain from training on an auditory task and retain those gains, with almost no forgetting, over a long period. In the real world, however, children are often required to learn several tasks consecutively, raising the possibility that training on one task will modify the gains induced by training on a different task. Our purpose was to explore the susceptibility of learning following single- and multi-session training on auditory frequency discrimination (FD) to subsequent training on a different task. Forty-five adults (18-30 y) and forty children (7-9 y) received FD training at 1 kHz that was followed by additional training on one of three tasks: FD at 1 kHz in the opposite ear, FD at 2 kHz, or FD at 1 kHz in background noise. Six FD thresholds were obtained in each of two sessions using a forced-choice adaptive procedure. Our results show that the learning gains of 7-9 year olds and of adults in the frequency discrimination task were not disrupted by subsequent training on a different task. Furthermore, when subsequent training was given on any of the different tasks following a single training session, all adults improved their performance on the first task. In children, however, further improvement on the first task was evident only following the FD training at 2 kHz. These results may suggest that the training-induced gains of 7-9 year old children are less susceptible to modification than those of adults, possibly because of an immature reconsolidation process. These results shed light on the learning processes that occur in real life.
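[Editor's illustration of the forced-choice adaptive procedure named above, not the study's implementation: a minimal 2-down/1-up staircase for frequency discrimination around 1 kHz. The step rule, the simulated listener and the threshold estimate from the final reversals are assumptions.]

import numpy as np

def fd_staircase(jnd_hz=8.0, start_delta_hz=100.0, n_reversals=8):
    # The 2-down/1-up rule converges near the 70.7%-correct point.
    rng = np.random.default_rng()
    delta, direction = start_delta_hz, -1
    streak, reversals = 0, []
    while len(reversals) < n_reversals:
        # Simulated 2AFC listener: probability correct grows with delta relative
        # to its just-noticeable difference, floored at the 50% guessing rate.
        p = 0.5 + 0.5 / (1.0 + np.exp(-(delta - jnd_hz) / (0.3 * jnd_hz)))
        if rng.random() < p:
            streak += 1
            if streak == 2:              # two correct in a row: make it harder
                streak = 0
                if direction == +1:
                    reversals.append(delta)
                direction = -1
                delta *= 0.5
        else:                            # any error: make it easier
            streak = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta *= 2.0
    return np.mean(reversals[-6:])       # threshold from the last reversals

print(f"estimated threshold: {fd_staircase():.1f} Hz above the 1 kHz standard")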

Otoacoustic emissions (OAE) in the prediction of sudden sensorineural hearing loss outcome

R. Zidane1,2, R. Shemesh2 and A. Shupak1,3,4

1Unit of Otoneurology, Lin Medical Center, Haifa, Israel; 2Department of Communication Science and Disorders, University of Haifa, Haifa, Israel; 3Department of Otolaryngology Head and Neck Surgery, Carmel Medical Center, Haifa, Israel; 4Bruce Rappaport Faculty of Medicine, Technion-Israel Institute of Technology, Haifa, Israel

Introduction: The variable course of idiopathic sudden sensorineural hearing loss (ISSNHL), together with the ongoing debate about its treatment, emphasizes the need for an early detectable prognostic factor that may predict hearing outcome. The purpose of the present study was to evaluate the role of otoacoustic emissions (OAEs) in the prediction of ISSNHL prognosis.

Methods: 15 ISSNHL patients were prospectively followed 7 days, 14 days, and 3 months post-presentation by pure tone audiometry and by TEOAE (transient evoked OAE) and DPOAE (distortion product OAE) testing. The parameters measured were hearing improvement in the pure tone average of the 3 most affected frequencies, and the detectability and signal-to-noise ratios (SNRs) of the OAEs. The sensitivity and specificity of the TEOAE and DPOAE results for predicting ISSNHL outcome were calculated.

Results: At the 3-month follow-up, patients having detectable TEOAEs on the first follow-up evaluation had an average hearing improvement of 62±41%, while those with no response improved by only 10±9% (p<0.05). For the DPOAEs, the corresponding hearing improvement results were 71±37% and 9±7%, respectively (p<0.01). The sensitivity of recordable TEOAEs on the 7th day post-presentation for the prediction of significant hearing improvement 3 months from presentation reached 83%, and the specificity 100%. For the DPOAEs the corresponding values were 62.5% and 100% for sensitivity and specificity, respectively.
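[Editor's note: to make the sensitivity and specificity figures concrete, a minimal illustrative computation follows. The counts below are hypothetical, chosen only to reproduce the reported TEOAE values of 83% and 100%.]

def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity: detectable OAEs among patients who went on to improve.
    # Specificity: absent OAEs among patients who did not improve.
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the reported TEOAE figures:
# 5 improvers with detectable TEOAEs, 1 improver without,
# 4 non-improvers without detectable TEOAEs, 0 non-improvers with them.
sens, spec = sensitivity_specificity(tp=5, fn=1, tn=4, fp=0)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 83%, 100%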

Conclusion: TEOAE and DPOAE evaluation in the early stage of ISSNHL treatment has a potential role in the prediction of its outcome.


These abstracts have been reproduced directly from the material supplied by the authors, without editorial alteration by the staff of this Journal. Insufficiencies of preparation, grammar, spelling, style, syntax and usage are the authors’ responsibility.


Published Online: 2014-8-5
Published in Print: 2014-9-1

©2014 by De Gruyter
