
Measurement Properties and Cross-Cultural Adaptation of the De Jong Gierveld Loneliness Scale in Adults: A Systematic Review


          Abstract

This systematic review evaluated the measurement properties of the De Jong Gierveld Loneliness Scale (DJGLS) in adults. A systematic search of four electronic databases (PubMed, EMBASE, Scopus, and PsycINFO) was conducted from inception until December 2022. The COSMIN (Consensus-Based Standards for the Selection of Health Measurement Instruments) guidelines were used to assess the methodological quality of the included studies and to synthesize the evidence. Forty-six studies assessed the validity and reliability of the DJGLS-11 and its short version, the DJGLS-6. Very-low-quality evidence supported the content validity, moderate- to high-quality evidence confirmed the structural validity and internal consistency, and low-quality evidence supported the construct validity of the two versions. Test-retest reliability was examined for the DJGLS-6, with low-quality evidence supporting excellent intraclass correlation coefficient (ICC) values of 0.73–1.00. Both scales have been cross-culturally adapted and translated into 18 languages across 12 countries. Although the structural validity and internal consistency of the DJGLS were supported by high-quality evidence, only very-low- to low-quality evidence was available for its other measurement properties. Future studies are needed to assess the measurement properties of the DJGLS more comprehensively before the scale can be fully recommended for assessing loneliness in adults.
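The abstract reports test-retest reliability as ICC values without naming the ICC model the included studies used. A common choice for test-retest designs is ICC(2,1), the two-way random-effects, absolute-agreement, single-measures form of Shrout and Fleiss. The Python sketch below is a minimal illustration under that assumption; the scores are fabricated for illustration and do not come from any included study.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measures
    (Shrout & Fleiss). `scores` is an (n_subjects, k_occasions) array,
    e.g. DJGLS-6 totals at test and retest."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-occasion means

    # Mean squares from the two-way ANOVA decomposition
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Fabricated test-retest totals for six respondents (two occasions)
scores = np.array([[2, 2], [5, 4], [1, 1], [3, 3], [6, 5], [0, 1]], dtype=float)
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")  # 0.94 for these toy scores
```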

Most cited references: 63


          The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

          The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

            Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement due to its inability to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from −1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
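As a concrete illustration of the calculation described above, the following self-contained Python sketch computes percent agreement and Cohen's kappa for two raters; the ratings are fabricated for illustration, and scikit-learn's sklearn.metrics.cohen_kappa_score offers an equivalent off-the-shelf implementation.

```python
from collections import Counter

def cohen_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa for two raters scoring the same items."""
    n = len(rater_a)
    # Observed agreement: the percent agreement described above
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() & freq_b.keys()) / n**2
    return (p_o - p_e) / (1 - p_e)

# Fabricated ratings: two raters classify ten items as "y"/"n"
a = ["y", "y", "n", "y", "n", "n", "y", "y", "n", "y"]
b = ["y", "n", "n", "y", "n", "y", "y", "y", "n", "y"]
print(f"percent agreement = {sum(x == y for x, y in zip(a, b)) / len(a):.2f}")  # 0.80
print(f"kappa = {cohen_kappa(a, b):.2f}")  # 0.58
```

On these toy ratings percent agreement is 0.80 while kappa is 0.58, showing how correcting for chance agreement lowers the apparent reliability.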

              Guidelines for the Process of Cross-Cultural Adaptation of Self-Report Measures


                Author and article information

Journal: European Journal of Psychological Assessment
Publisher: Hogrefe Publishing Group
ISSN: 1015-5759 (print); 2151-2426 (online)
Published: June 22, 2023
Affiliations
[1] School of Rehabilitation Science, McMaster University, Hamilton, ON, Canada
[2] Department of Respiratory Medicine, West Park Healthcare Centre, Toronto, ON, Canada
[3] Lab3R – Respiratory Research and Rehabilitation Laboratory, School of Health Sciences, University of Aveiro (ESSUA), Portugal
[4] Institute for Biomedicine (iBiMED), University of Aveiro, Portugal
[5] Faculty of Medicine, University of Toronto, ON, Canada
[6] Department of Physical Therapy and Rehabilitation Science, University of Toronto, ON, Canada
                Article
DOI: 10.1027/1015-5759/a000784
                © 2023
