      Is Open Access

      The human pupil and face encode sound affect and provide objective signatures of tinnitus and auditory hypersensitivity disorders

      Preprint
      research-article


          Summary

          Central disinhibition works like an amplifier, boosting neural sensitivity to progressively weaker peripheral inputs arising from degenerating sensory organs. The Excess Central Gain model posits that central disinhibition also gives rise to cardinal features of sensory disorders including sensory overload and phantom percepts. Here, we tested predictions of this model with neural, autonomic, and behavioral approaches in participants with sound sensitivity and tinnitus (phantom ringing). We confirmed enhanced auditory neural gain but found no association with their sound aversion and anxiety. Instead, we hypothesized that symptom severity was linked to affective sound encoding. In neurotypical controls, emotionally evocative sounds elicited pupil dilations, electrodermal responses and facial reactions that scaled with valence. Participants with disordered hearing exhibited disrupted pupil and facial reactivity that accurately predicted their self-reported tinnitus and hyperacusis severity. These findings highlight auditory-limbic dysregulation in tinnitus and sound sensitivity disorders and introduce approaches for their objective measurement.


          Most cited references (94)


          Regularization and variable selection via the elastic net


            Removing electroencephalographic artifacts by blind source separation.

            Eye movements, eye blinks, cardiac signals, muscle noise, and line noise present serious problems for electroencephalographic (EEG) interpretation and analysis when rejecting contaminated EEG segments results in an unacceptable data loss. Many methods have been proposed to remove artifacts from EEG recordings, especially those arising from eye movements and blinks. Often regression in the time or frequency domain is performed on parallel EEG and electrooculographic (EOG) recordings to derive parameters characterizing the appearance and spread of EOG artifacts in the EEG channels. Because EEG and ocular activity mix bidirectionally, regressing out eye artifacts inevitably involves subtracting relevant EEG signals from each record as well. Regression methods become even more problematic when a good regressing channel is not available for each artifact source, as in the case of muscle artifacts. Use of principal component analysis (PCA) has been proposed to remove eye artifacts from multichannel EEG. However, PCA cannot completely separate eye artifacts from brain signals, especially when they have comparable amplitudes. Here, we propose a new and generally applicable method for removing a wide variety of artifacts from EEG records based on blind source separation by independent component analysis (ICA). Our results on EEG data collected from normal and autistic subjects show that ICA can effectively detect, separate, and remove contamination from a wide variety of artifactual sources in EEG records with results comparing favorably with those obtained using regression and PCA methods. ICA can also be used to analyze blink-related brain activity.
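The ICA-based artifact-removal pipeline described in the abstract above can be illustrated with a toy two-channel example. This is a minimal from-scratch FastICA sketch on simulated data, not the authors' implementation; the "blink" source, mixing matrix, and kurtosis-based component selection are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a toy two-channel "EEG" recording ---
n = 2000
t = np.linspace(0, 8, n)
brain = np.sin(2 * np.pi * 10 * t)           # 10 Hz oscillation (stand-in for brain activity)
blink = (rng.random(n) < 0.01) * 5.0         # sparse spikes (stand-in for eye blinks)
S = np.vstack([brain, blink])                # true sources, shape (2, n)
A = np.array([[1.0, 0.8],
              [0.6, 1.0]])                   # mixing: both sources bleed into both channels
X = A @ S                                    # observed channel data

# --- Center and whiten ---
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
W_white = E @ np.diag(d ** -0.5) @ E.T
Z = W_white @ Xc                             # whitened data (identity covariance)

# --- FastICA: symmetric orthogonalization, tanh nonlinearity ---
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = G @ Z.T / n - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
    U, _s, Vt = np.linalg.svd(W_new)
    W = U @ Vt                               # decorrelate: (W W^T)^(-1/2) W

Y = W @ Z                                    # estimated independent components

# --- Zero out the artifact component, then reconstruct the channels ---
kurtosis = (Y ** 4).mean(axis=1) - 3.0       # spiky blink component has large kurtosis
Y_clean = Y.copy()
Y_clean[np.argmax(kurtosis)] = 0.0
X_clean = np.linalg.inv(W_white) @ W.T @ Y_clean
```

After zeroing the high-kurtosis component, each reconstructed channel is dominated by the oscillatory source — the separate-then-subtract step that regression on an EOG channel cannot do when sources mix bidirectionally.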

              Measuring emotion: the Self-Assessment Manikin and the Semantic Differential.

              The Self-Assessment Manikin (SAM) is a non-verbal pictorial assessment technique that directly measures the pleasure, arousal, and dominance associated with a person's affective reaction to a wide variety of stimuli. In this experiment, we compare reports of affective experience obtained using SAM, which requires only three simple judgments, to the Semantic Differential scale devised by Mehrabian and Russell (An approach to environmental psychology, 1974) which requires 18 different ratings. Subjective reports were measured to a series of pictures that varied in both affective valence and intensity. Correlations across the two rating methods were high both for reports of experienced pleasure and felt arousal. Differences obtained in the dominance dimension of the two instruments suggest that SAM may better track the personal response to an affective stimulus. SAM is an inexpensive, easy method for quickly assessing reports of affective response in many contexts.

                Author and article information

                Journal
                bioRxiv
                Cold Spring Harbor Laboratory
                23 December 2023
                2023.12.22.571929
                Affiliations
                [1] Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA
                [2] Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
                [3 ]Lead contact
                Author notes
                [^] Current address: Department of Speech, Language, and Hearing, The University of Texas at Dallas, 1966 Inwood Road, Dallas, TX 75235
                [*] Equal contribution
                Author contributions

                K.J. and D.P. conceived the project and designed the experiments. J.S. collected the data using software programmed by K.H. and supported by K.J. and S.S. Data analysis was led by S.S. with contributions from K.J. Figure preparation and manuscript writing were performed by S.S. and D.P. with input from all authors.

                [4] Corresponding author
                Author information
                http://orcid.org/0000-0003-0991-8224
                http://orcid.org/0000-0002-5832-5137
                http://orcid.org/0000-0002-6145-4909
                http://orcid.org/0000-0002-5120-2409
                Article
                DOI: 10.1101/2023.12.22.571929
                PMC: 10769427
                PMID: 38187580

                This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0), which allows reusers to copy and distribute the material in any medium or format in unadapted form only, for noncommercial purposes only, and only so long as attribution is given to the creator.

                Categories
                Article
