
      Objective Measures of Surgeon Nontechnical Skills in Surgery: A Scoping Review


          Abstract

          Objective

          The purpose of this study was to identify, synthesize, and discuss objective behavioral or physiological metrics of surgeons’ nontechnical skills (NTS) in the literature.

          Background

NTS, the interpersonal and cognitive skills of surgeons, have been shown to contribute to safe and efficient surgical performance; however, current assessments are subjective, checklist-based tools. Objective intraoperative evaluation has previously been applied to technical skills, suggesting a route to addressing these limitations.

          Methods

Five databases spanning engineering, behavioral science, and medicine were searched following the PRISMA reporting guidelines. Eligible studies reported objective measurements of NTS, involved surgeons, and took place in simulated or live operations.

          Results

Twenty-three articles were included in this review. Objective metrics included communication measures and physiological responses such as changes in brain activation and eye motion. Frequencies of content-coded communication in surgery were used in 16 studies and were associated not only with the communication construct but also with the cognitive constructs of situation awareness and decision making, indicating the underlying importance of communication in evaluating NTS constructs. To synthesize the scoped literature, a framework based on the one-way communication model was used to map the objective measures to NTS constructs.
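To make the frequency-count approach concrete, here is a minimal sketch of tallying content-coded communication and aggregating it per NTS construct. The utterance codes, the code-to-construct mapping, and the data below are illustrative assumptions, not taken from the reviewed studies:

    from collections import Counter

    # Hypothetical content codes assigned to intraoperative utterances
    # (real studies define their own coding schemes).
    coded_utterances = [
        "request_instrument", "status_update", "request_instrument",
        "verbalized_plan", "status_update", "confirmation",
    ]

    # Hypothetical mapping from communication codes to NTS constructs.
    CODE_TO_CONSTRUCT = {
        "request_instrument": "communication",
        "confirmation": "communication",
        "status_update": "situation awareness",
        "verbalized_plan": "decision making",
    }

    # Objective metric: frequency of each code, then aggregated per construct.
    code_freq = Counter(coded_utterances)
    construct_freq = Counter()
    for code, n in code_freq.items():
        construct_freq[CODE_TO_CONSTRUCT[code]] += n

    print(dict(code_freq))       # per-code frequencies
    print(dict(construct_freq))  # per-construct frequencies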

          Conclusion

Objective measurement of surgeons' NTS is still preliminary, and future work on leveraging objective metrics alongside current assessment tools is needed.

          Application

          Findings from this work identify objective NTS metrics for measurement applications in a surgical environment.


Most cited references (80)


          PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation

          Scoping reviews, a type of knowledge synthesis, follow a systematic approach to map evidence on a topic and identify main concepts, theories, sources, and knowledge gaps. Although more scoping reviews are being done, their methodological and reporting quality need improvement. This document presents the PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews) checklist and explanation. The checklist was developed by a 24-member expert panel and 2 research leads following published guidance from the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network. The final checklist contains 20 essential reporting items and 2 optional items. The authors provide a rationale and an example of good reporting for each item. The intent of the PRISMA-ScR is to help readers (including researchers, publishers, commissioners, policymakers, health care providers, guideline developers, and patients or consumers) develop a greater understanding of relevant terminology, core concepts, and key items to report for scoping reviews.

            Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement due to its inability to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, kappa can range from −1 to +1. While kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research have been questioned. Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
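For reference, the standard definitions (not part of the cited abstract), with p_o the observed proportion of agreement and p_e the agreement expected by chance, are, in LaTeX notation:

    p_o = \frac{\text{number of agreements}}{\text{total number of scores}},
    \qquad
    \kappa = \frac{p_o - p_e}{1 - p_e}

As a worked example: if two raters agree on 80 of 100 cases (p_o = 0.80) and chance agreement is p_e = 0.50, then \kappa = (0.80 - 0.50)/(1 - 0.50) = 0.60, the top of the 0.41–0.60 band that the conventional interpretation labels "moderate" agreement.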

              The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers


                Author and article information

Journal: Human Factors: The Journal of the Human Factors and Ergonomics Society (Hum Factors)
Publisher: SAGE Publications
ISSN: 0018-7208 (print); 1547-8181 (electronic)
Published: March 7, 2021
eLocator: 001872082199531
Affiliations: [1] Purdue University, Indiana, USA
DOI: 10.1177/0018720821995319
PMID: 33682476
© 2021
License: http://journals.sagepub.com/page/policies/text-and-data-mining-license
